
You are a Machine Learning Engineer at a utility company tasked with improving the detection of defects in underground electric cables using thermal imaging. Your dataset comprises 10,000 thermal images, with a mere 100 images containing visible defects, making the dataset highly imbalanced. The company requires a robust evaluation method that not only assesses the model's ability to correctly identify defects but also considers the implications of false positives in a real-world maintenance scenario where unnecessary excavations could lead to significant costs and operational disruptions. Given these constraints, which of the following methods is the most effective for evaluating your model's performance on a test dataset? Choose the best option.
A
Calculate the accuracy of the model by determining the fraction of all images (defective and non-defective) that are correctly classified.
B
Compute the precision and recall metrics separately to understand the model's performance in identifying defects and avoiding false positives.
C
Assess the Area Under the Curve (AUC) value of the Receiver Operating Characteristic (ROC) curve to evaluate the model's ability to distinguish between defective and non-defective images across various thresholds.
D
Use Cosine Similarity to compare the feature representations of the test and training datasets to ensure the model's predictions are consistent across both datasets.
E
Implement a combination of precision-recall curves and the F1 score to balance the trade-off between identifying true defects and minimizing false positives, given the operational cost of unnecessary excavations.
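
To make the trade-off in option E concrete, here is a minimal, self-contained sketch that computes precision, recall, and F1 from scratch, sweeps classification thresholds to trace a precision-recall curve, and picks the threshold that maximizes F1. The labels and scores are synthetic toy values, not real inspection data, and the helper name `precision_recall_f1` is illustrative:

```python
# Minimal sketch: precision-recall trade-off and F1 on a toy imbalanced set.
# 1 = visible defect (rare positive class), 0 = no defect.

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Synthetic toy data mimicking the class imbalance (2 defects in 10 images).
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.8, 0.3, 0.2, 0.1, 0.6, 0.05, 0.15, 0.25]

# Sweep candidate thresholds to trace the precision-recall curve and keep
# the operating point with the best F1 -- balancing missed defects (low
# recall) against costly false-positive excavations (low precision).
best = max(
    (precision_recall_f1(y_true, [int(s >= t) for s in scores]) + (t,)
     for t in sorted(set(scores))),
    key=lambda x: x[2],
)
precision, recall, f1, threshold = best
print(f"threshold={threshold}, precision={precision:.3f}, "
      f"recall={recall:.3f}, F1={f1:.3f}")
```

Note that accuracy (option A) on this toy set would reward a model that predicts "no defect" everywhere (80% accurate here, 99% on the 10,000-image dataset), which is exactly why the precision-recall view is preferred for rare-event detection.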