
Answer-first summary for fast verification
Answer: MLflow Tracking Server.
The MLflow Tracking Server is specifically designed to log and query experiments, enabling data scientists to track metrics, parameters, and artifacts (like models) across different runs of machine learning code. It offers a user-friendly web interface for viewing and comparing experiments, making it ideal for collaborative projects.

Why the other options fall short:

- MLlib CrossValidator (Option A) is used for hyperparameter tuning in Spark MLlib; it doesn't directly support experiment tracking.
- The MLflow REST API (Option B) facilitates programmatic access to MLflow features but lacks the visual comparison tools provided by the Tracking Server.
- Databricks Jobs (Option C) are useful for running and scheduling tasks on the Databricks platform but are not tailored for experiment tracking and comparison.

Thus, the MLflow Tracking Server (Option D) is the correct choice for this scenario.
Author: LeetQuiz Editorial Team
In a collaborative project where a team of data scientists aims to track and compare the performance of various machine learning experiments, which MLflow component is most suitable for this purpose?
A. MLlib CrossValidator
B. MLflow REST API
C. Databricks Jobs
D. MLflow Tracking Server