Databricks Certified Machine Learning - Associate



Which standard evaluation metrics are automatically computed for each run in an AutoML experiment for classification problems? Choose only ONE best answer.





Explanation:

In AutoML (Automated Machine Learning) frameworks such as Databricks AutoML, several standard evaluation metrics are automatically computed and logged for each model run in a classification experiment, giving a comprehensive view of performance. These include:

  • Accuracy: The proportion of correct predictions (true positives and true negatives) among all cases.
  • Area Under the ROC Curve (AUC-ROC): Indicates the model's ability to distinguish between classes, with 1 being perfect and 0.5 no better than random.
  • Recall: The proportion of actual positives correctly identified (sensitivity).
  • F1 Score: The harmonic mean of precision and recall, balancing the two.

AutoML platforms compute these metrics for every run to give a detailed view of each model's performance, which makes it easier to select the most suitable model for the application at hand.
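
To make the definitions above concrete, here is a minimal scikit-learn sketch that computes the same four metrics for a hypothetical binary-classification run. The labels, predictions, and probabilities are made up purely for illustration; in a real AutoML experiment these values are computed and logged for you (in Databricks AutoML, per run via MLflow), so you would not normally calculate them by hand.

```python
# Illustrative only: what each of the four metrics measures for one model run.
# The data below is invented for the example, not taken from any AutoML run.
from sklearn.metrics import accuracy_score, roc_auc_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                       # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                       # hard class predictions
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]       # predicted P(class = 1)

print("accuracy:", accuracy_score(y_true, y_pred))       # correct / total
print("roc_auc :", roc_auc_score(y_true, y_prob))        # uses probabilities, not labels
print("recall  :", recall_score(y_true, y_pred))         # true positives found
print("f1      :", f1_score(y_true, y_pred))             # harmonic mean of precision/recall
```

In an actual AutoML classification experiment, these values appear as per-run metrics in the experiment tracking UI, where they can be compared across all candidate models.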
