
Answer-first summary for fast verification
Answer: F1 Measure, Area Under the ROC Curve (AUC-ROC)
With 96% of images lacking the logo, accuracy is misleading: a model that always predicts "no logo" scores 96% while never detecting a single logo. The F1 score is the right choice because it is the harmonic mean of precision and recall, so it penalizes both false positives and false negatives in a single number. The Area Under the ROC Curve (AUC-ROC) complements it by measuring the model's ability to rank positives above negatives across all classification thresholds, which matters in production where the operating threshold may be tuned and both error types carry significant cost. Together, these two metrics give a far more reliable assessment than accuracy, precision, or recall alone.
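To make the reasoning concrete, here is a minimal pure-Python sketch (no library assumptions) that computes accuracy, precision, recall, and F1 from raw predictions, plus AUC-ROC via its rank interpretation (the probability that a random positive is scored above a random negative). The 96/4 class split below mirrors the question's dataset and shows why accuracy alone is deceptive:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

def roc_auc(y_true, scores):
    """AUC-ROC as the probability a random positive outranks a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 96% negatives, as in the question; a trivial model that always predicts 0.
y_true = [1] * 4 + [0] * 96
y_pred = [0] * 100
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
# accuracy is 0.96, yet F1 is 0.0 -- the model never finds a logo.

# AUC-ROC uses scores, not hard labels: here every positive outranks every
# negative, so AUC = 1.0 regardless of the class imbalance.
scores = [0.9, 0.8, 0.7, 0.6] + [0.1] * 96
auc = roc_auc(y_true, scores)
```

Note that AUC-ROC operates on continuous scores rather than thresholded predictions, which is exactly why it characterizes the model across all thresholds rather than at one fixed operating point.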
Author: LeetQuiz Editorial Team
As a Professional Machine Learning Engineer, you are developing a binary classification model to identify whether scanned documents contain a company's logo. The dataset is highly imbalanced, with 96% of the images not containing the logo. Given the need to deploy this model in a production environment where both false positives and false negatives carry significant costs, which evaluation metrics would you prioritize to ensure the model's performance is accurately assessed? (Choose two correct options)
A
Accuracy
B
Recall
C
Precision
D
F1 Measure
E
Area Under the ROC Curve (AUC-ROC)