
A data scientist was advised to always use 5-fold cross-validation in their model development process. However, a team member suggests that a simple training-validation split can sometimes be preferable, especially compared with k-fold cross-validation where k equals 2. Which of the following best describes a potential advantage of using a training-validation split over k-fold cross-validation in this scenario?
A
Using a training-validation split ensures that bias can be completely eliminated from the model.
B
A training-validation split requires the construction of fewer models, saving computational resources.
C
The need for a separate holdout set is eliminated when using a training-validation split.
D
Model reproducibility is inherently guaranteed with a training-validation split.
E
Fewer hyperparameter values need to be evaluated when using a training-validation split.
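
The computational trade-off the question is probing can be illustrated with a short sketch (assuming scikit-learn; the dataset and estimator below are arbitrary placeholders, not part of the question): a single training-validation split fits one model, whereas k-fold cross-validation fits k models, so the split is cheaper at the cost of evaluating on only one partition of the data.

```python
# Minimal sketch (assumes scikit-learn): contrasts the number of model fits
# required by a single training-validation split versus 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Training-validation split: one model is fit on the training portion and
# scored once on the held-out validation portion.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
split_score = LogisticRegression(max_iter=1_000).fit(X_train, y_train).score(X_val, y_val)
print(f"Single split: 1 model fit, validation accuracy = {split_score:.3f}")

# 5-fold cross-validation: five separate models are fit, one per fold,
# so the computational cost is roughly five times that of a single split.
cv_scores = cross_val_score(LogisticRegression(max_iter=1_000), X, y, cv=5)
print(f"5-fold CV: {len(cv_scores)} model fits, mean accuracy = {cv_scores.mean():.3f}")
```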