You have recently created a proof-of-concept (POC) deep learning model for a machine learning project. The model architecture meets your expectations, but you need to fine-tune two hyperparameters for optimal performance: the embedding dimension for a categorical feature and the learning rate. To achieve this, you decide to use hyperparameter tuning on Vertex AI with the default Bayesian optimization algorithm. You configure the following hyperparameters:

• Embedding dimension set as an INTEGER with a minValue of 16 and a maxValue of 64.
• Learning rate set as a DOUBLE with a minValue of 10e-05 and a maxValue of 10e-02.

Your primary goal is to maximize model accuracy, and you are not concerned about the time it takes to train the model. How should you set the hyperparameter scaling for each hyperparameter and the maxParallelTrials?
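As a point of reference, the configuration the question describes can be sketched as a Vertex AI study spec. This is a hedged illustration, not the official answer key: the parameter IDs (`embedding_dim`, `learning_rate`) and the trial budget are made-up names for this example, while the field names and scale-type enums follow the Vertex AI REST API. It reflects the guidance in Google's documentation: linear scaling for a bounded integer like an embedding dimension, log scaling for a learning rate that spans orders of magnitude, and few (here, one) parallel trials so Bayesian optimization can condition each new trial on all completed results when training time is not a concern.

```python
# Sketch of a Vertex AI HyperparameterTuningJob spec as a REST-style dict.
# Field names follow the Vertex AI API; ranges mirror the question above.
study_spec = {
    "metrics": [{"metricId": "accuracy", "goal": "MAXIMIZE"}],
    "parameters": [
        {
            "parameterId": "embedding_dim",  # illustrative name
            # int64 bounds are serialized as strings in the REST API
            "integerValueSpec": {"minValue": "16", "maxValue": "64"},
            # The embedding dimension varies over a small linear range,
            # so a linear scale is the natural choice.
            "scaleType": "UNIT_LINEAR_SCALE",
        },
        {
            "parameterId": "learning_rate",  # illustrative name
            "doubleValueSpec": {"minValue": 10e-05, "maxValue": 10e-02},
            # Learning rates span orders of magnitude; log scaling lets
            # the search sample small values as densely as large ones.
            "scaleType": "UNIT_LOG_SCALE",
        },
    ],
    # Leaving the algorithm unspecified selects the default
    # Bayesian optimization.
    "algorithm": "ALGORITHM_UNSPECIFIED",
}

job_spec = {
    "studySpec": study_spec,
    "maxTrialCount": 50,  # illustrative budget, not from the question
    # With no time pressure, sequential trials (parallelTrialCount = 1)
    # give Bayesian optimization the full history before each new trial.
    "parallelTrialCount": 1,
}
```

Note the trade-off this encodes: raising `parallelTrialCount` shortens wall-clock time but means concurrent trials start without the results of their peers, which weakens the Bayesian search.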