To ensure consistency in model evaluation metrics across different MLflow tracking servers, what is the most effective method?
A. Manually export and import MLflow experiments between servers for cross-comparison.
B. Consolidate all tracking servers into one before running experiments to avoid cross-version evaluation needs.
C. MLflow does not support cross-version or cross-server model evaluation comparisons; rely on external tools.
D. Use the MLflow REST API to fetch model metrics from each server and compare them using custom scripts.
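Option D describes programmatic retrieval of metrics from each tracking server. A minimal sketch of that approach is shown below, using the `MlflowClient` wrapper around the REST API; the server URIs, run IDs, and the `rmse` metric name are hypothetical placeholders, not values from the question.

```python
# Sketch: compare a metric across runs stored on different MLflow tracking servers.
# The URIs and run IDs below are placeholders for illustration only.
from mlflow.tracking import MlflowClient

servers = {
    "server_a": ("http://mlflow-a.example.com:5000", "run_id_on_server_a"),
    "server_b": ("http://mlflow-b.example.com:5000", "run_id_on_server_b"),
}

metrics_by_server = {}
for name, (tracking_uri, run_id) in servers.items():
    client = MlflowClient(tracking_uri=tracking_uri)  # client backed by that server's REST API
    run = client.get_run(run_id)                      # fetch the run's logged parameters and metrics
    metrics_by_server[name] = run.data.metrics        # dict of metric name -> latest logged value

# Compare a metric of interest (here assumed to be "rmse") across servers.
for name, metrics in metrics_by_server.items():
    print(f"{name}: rmse={metrics.get('rmse')}")
```

In practice the comparison logic (tolerances, which runs to pair up, how to report drift) lives in the custom script, which is why this option keeps evaluation consistent without requiring the servers to be merged.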