
Answer-first summary for fast verification
Answer: Run the script in an experiment based on a HyperDriveConfig object
The question asks for the combination of batch size and learning rate that minimizes validation loss, which is a hyperparameter-tuning problem. HyperDriveConfig is Azure Machine Learning's dedicated configuration for hyperparameter tuning: it wraps an existing training script and systematically explores the parameter search space via grid search, random search, or Bayesian optimization. The community discussion shows 100% consensus on option E, with references to official Microsoft documentation confirming HyperDriveConfig as the correct approach for hyperparameter optimization.

The other options are less suitable:
- AutoMLConfig (A) automates the entire ML pipeline (model and feature selection), not hyperparameter tuning of a custom script.
- PythonScriptStep (B) runs a script as a pipeline step but performs no hyperparameter search.
- The Automated Machine Learning interface (C) is a no-code experience that does not use a custom training script.
- ScriptRunConfig (D) submits a single training run without any hyperparameter optimization.
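The HyperDrive approach described above can be sketched as follows. This is a minimal, non-authoritative example assuming the Azure ML Python SDK v1 (`azureml-core` / `azureml-train-core`), an existing workspace with a `config.json`, and a script that logs a metric named `validation_loss`; the compute target and environment names are hypothetical placeholders. It will not run without a live Azure ML workspace.

```python
# Hedged sketch, assuming Azure ML SDK v1. Names marked "hypothetical" are
# illustrative placeholders, not values from the question.
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
from azureml.train.hyperdrive import (
    HyperDriveConfig,
    GridParameterSampling,
    PrimaryMetricGoal,
    choice,
)

ws = Workspace.from_config()  # assumes a config.json for an existing workspace

# Base run configuration for the training script
src = ScriptRunConfig(
    source_directory=".",
    script="train.py",                # hypothetical script name
    compute_target="gpu-cluster",     # hypothetical compute target
    environment=Environment.get(ws, "my-training-env"),  # hypothetical env
)

# Discrete search space over the values identified for each argument
param_sampling = GridParameterSampling({
    "--batch-size": choice(16, 32, 64),
    "--learning-rate": choice(0.001, 0.01, 0.1),
})

# HyperDrive submits one child run per combination and tracks the metric
hyperdrive_config = HyperDriveConfig(
    run_config=src,
    hyperparameter_sampling=param_sampling,
    primary_metric_name="validation_loss",  # must match what the script logs
    primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
    max_total_runs=9,                       # 3 batch sizes x 3 learning rates
    max_concurrent_runs=3,
)

run = Experiment(ws, "cnn-hyperdrive").submit(hyperdrive_config)
run.wait_for_completion(show_output=True)
best = run.get_best_run_by_primary_metric()
```

Grid sampling is used here because the question states a fixed set of candidate values; `RandomParameterSampling` or `BayesianParameterSampling` would fit continuous or larger search spaces.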
Author: LeetQuiz Editorial Team
You create a script that trains a convolutional neural network model for multiple epochs and logs the validation loss after each epoch. The script includes arguments for batch size and learning rate. You have identified a set of values for batch size and learning rate that you want to test.
You need to use Azure Machine Learning to find the combination of batch size and learning rate that results in the model with the lowest validation loss.
What should you do?
A. Run the script in an experiment based on an AutoMLConfig object
B. Create a PythonScriptStep object for the script and run it in a pipeline
C. Use the Automated Machine Learning interface in Azure Machine Learning studio
D. Run the script in an experiment based on a ScriptRunConfig object
E. Run the script in an experiment based on a HyperDriveConfig object
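For context, the training-script side of this setup typically parses the tuned arguments and logs the metric that HyperDrive optimizes. A minimal sketch, assuming Azure ML SDK v1 (`azureml.core.Run`); the training loop is a placeholder, and `train_one_epoch` is a hypothetical function standing in for real model training:

```python
# Hedged sketch of the script HyperDrive would invoke (SDK v1 assumed).
import argparse

from azureml.core import Run

parser = argparse.ArgumentParser()
parser.add_argument("--batch-size", type=int, default=32)
parser.add_argument("--learning-rate", type=float, default=0.01)
args = parser.parse_args()

run = Run.get_context()  # returns an offline stub when run outside Azure ML

for epoch in range(10):
    # Placeholder: train_one_epoch is hypothetical, not a real API.
    val_loss = train_one_epoch(args.batch_size, args.learning_rate)
    # The metric name must match primary_metric_name in HyperDriveConfig.
    run.log("validation_loss", val_loss)
```

Logging `validation_loss` with `Run.log` after each epoch is what allows HyperDrive (and early-termination policies, if configured) to compare child runs.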