
Answer-first summary for fast verification
Answer: Evaluation
The correct answer is **D: Evaluation**. In the generative AI model lifecycle, the **Evaluation** phase is specifically dedicated to testing and assessing the model's performance before deployment. This stage involves:

- **Testing the model's accuracy** using appropriate metrics and benchmarks
- **Measuring performance** with relevant evaluation metrics such as BLEU, ROUGE, accuracy, F1 score, or other task-specific metrics
- **Assessing task-specific objectives** to ensure the model meets the intended requirements
- **Verifying quality and safety standards** to identify potential issues like bias, hallucinations, or harmful outputs

**Why the other options are incorrect:**

- **A: Deployment** - This phase puts the model into production after evaluation has confirmed it meets requirements.
- **B: Data selection** - This occurs earlier in the lifecycle, during data preparation, and involves curating training datasets.
- **C: Fine-tuning** - This is a training phase in which the model is adjusted on specific data; accuracy testing occurs separately, during evaluation.

The evaluation stage is critical for ensuring model reliability, identifying areas for improvement, and making data-driven decisions about whether the model is ready for production use.
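To make the metrics above concrete, here is a minimal sketch of one evaluation step: computing the F1 score for a binary classification task. The `f1_score` function and the label lists are illustrative, not from any specific library or dataset.

```python
def f1_score(y_true, y_pred):
    """F1 score (harmonic mean of precision and recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Dummy ground-truth labels and model predictions for illustration.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(f1_score(y_true, y_pred))  # → 0.75
```

In practice this step would be run against a held-out test set, alongside task-specific metrics such as BLEU or ROUGE for generative outputs, before any deployment decision is made.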
Author: LeetQuiz Editorial Team