
Answer-first summary for fast verification
Answer: Set the value of max_concurrent_runs of HyperDriveConfig to 4.
The question focuses on improving sampling convergence for Bayesian optimization in Azure ML's HyperDrive. Bayesian sampling uses the results of previous runs to inform each subsequent hyperparameter selection. With max_concurrent_runs currently set to 10, reducing it to 4 (option B) increases the sequential dependency between runs, so more trials benefit from previously completed results and the Bayesian optimizer learns more effectively. This is the behavior noted in the community discussion: fewer concurrent runs means each new run draws on more completed jobs. Options A and C (slack factor adjustments) concern the early termination policy, which does not directly affect Bayesian sampling convergence. Option D (increasing max_concurrent_runs to 20) would worsen convergence by reducing sequential learning. The community consensus (86% for B) and Microsoft documentation support this approach for Bayesian sampling.
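A minimal sketch of the corrected configuration (option B), assuming the Azure ML SDK v1 `azureml.train.hyperdrive` classes. The hyperparameter names, metric name, and the `script_config` variable are illustrative placeholders, not part of the original question:

```python
# Hedged sketch: Bayesian sampling with reduced concurrency (option B).
# `script_config` is a placeholder for an existing ScriptRunConfig;
# the hyperparameter names and metric are illustrative assumptions.
from azureml.train.hyperdrive import (
    BayesianParameterSampling,
    HyperDriveConfig,
    PrimaryMetricGoal,
    uniform,
    choice,
)

param_sampling = BayesianParameterSampling({
    "--learning_rate": uniform(0.01, 0.1),
    "--batch_size": choice(16, 32, 64),
})

hyperdrive_config = HyperDriveConfig(
    run_config=script_config,                 # placeholder ScriptRunConfig
    hyperparameter_sampling=param_sampling,
    primary_metric_name="accuracy",           # illustrative metric
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=40,
    max_concurrent_runs=4,                    # reduced from 10 per option B
)
```

Lowering max_concurrent_runs trades wall-clock parallelism for sample efficiency: with only 4 runs in flight, each new trial is conditioned on a larger set of completed results, which is what Bayesian sampling needs to converge.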
Author: LeetQuiz Editorial Team
You are implementing hyperparameter tuning using Bayesian sampling for model training in an Azure Machine Learning notebook. The workspace uses a compute cluster with 20 nodes. The code uses a Bandit termination policy with a slack factor of 0.2 and a HyperDriveConfig instance with max_concurrent_runs set to 10.
To increase the effectiveness of the tuning process by improving sampling convergence, which sampling convergence method should you select?
A
Set the value of slack factor of early_termination_policy to 0.9.
B
Set the value of max_concurrent_runs of HyperDriveConfig to 4.
C
Set the value of slack factor of early_termination_policy to 0.1.
D
Set the value of max_concurrent_runs of HyperDriveConfig to 20.