You are conducting multiple machine learning experiments using PySpark on Databricks. How can MLflow be used to manage these experiments effectively, ensuring both reproducibility and tracking of model performance?
A
Implement custom logging within your PySpark scripts to track model performance metrics.
B
Store all models in Azure Blob Storage and manually log experiment results in an Excel sheet.
C
Use MLflow Projects to package the PySpark ML code, track experiments with MLflow Tracking, and register models with MLflow Model Registry.
D
Rely solely on Databricks notebooks' revision history to track changes in ML experiments.