
Databricks Certified Machine Learning - Associate
In scenarios where efficiency is key, why might a training-validation split be preferred over k-fold cross-validation?
Explanation:
Opting for a training-validation split instead of k-fold cross-validation means training fewer models: each hyperparameter configuration is fit once on a single split rather than k times, once per fold. This is particularly beneficial when time or computational resources are scarce. The trade-off is a higher-variance performance estimate; the split does not inherently remove bias, guarantee reproducibility, or reduce the number of hyperparameter values to test. The choice between the two methods depends on the specific constraints and goals of the project.
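To make the cost difference concrete, here is a minimal sketch using scikit-learn. The dataset (make_classification) and model (RandomForestClassifier) are illustrative assumptions, not part of the exam material; the point is that the hold-out split fits the model once, while 5-fold cross-validation refits it five times.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic data and model chosen purely for illustration
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Training-validation split: one fit per hyperparameter configuration,
# so evaluation cost stays low
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
model.fit(X_train, y_train)
print("Hold-out validation accuracy:", model.score(X_val, y_val))

# 5-fold cross-validation: the model is refit 5 times, multiplying training
# cost by k in exchange for a lower-variance performance estimate
scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV mean accuracy:", scores.mean())
```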