
Answer-first summary for fast verification
Answer: Prepare the data in BigQuery and associate the data with a Vertex AI dataset. Create an AutoMLTabularTrainingJob to train a classification model.
Option C is correct. Keeping the data in BigQuery and associating it with a Vertex AI dataset avoids exporting data, which minimizes data-movement cost and setup time. AutoMLTabularTrainingJob then automates feature engineering, model selection, and hyperparameter tuning, so you can iterate quickly without writing custom training code. AutoML tabular models also provide feature attributions, which satisfies the requirement to understand how the model makes predictions. By contrast, options A and D require exporting the data to Cloud Storage first, and options B and D require implementing and tuning a deep neural network by hand, which is slower to iterate on and harder to interpret for a first model.
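The workflow in option C can be sketched with the Vertex AI Python SDK. This is a minimal illustration, not the graded answer's required code: the project ID, BigQuery table path, display names, and the `churned` target column are hypothetical placeholders, and it assumes google-cloud-aiplatform is installed and credentials are configured.

```python
def train_churn_model(project: str, bq_table: str, target: str = "churned"):
    """Sketch of option C: BigQuery table -> Vertex AI dataset -> AutoML training.

    Assumptions (not from the question): region us-central1, a binary
    `churned` label column, and a 1-node-hour budget to keep costs low.
    """
    # Deferred import so the sketch can be inspected without GCP installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location="us-central1")

    # Associate the BigQuery table with a Vertex AI tabular dataset
    # (no export to Cloud Storage needed).
    dataset = aiplatform.TabularDataset.create(
        display_name="churn-dataset",
        bq_source=f"bq://{bq_table}",  # e.g. "my-project.sales.records"
    )

    # AutoML handles feature engineering and model selection.
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="churn-automl",
        optimization_prediction_type="classification",
    )

    model = job.run(
        dataset=dataset,
        target_column=target,
        budget_milli_node_hours=1000,  # 1 node hour: small budget for a first model
    )
    return model
```

After training, the model's feature attributions can be viewed in the Vertex AI console, which addresses the interpretability objective.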
Author: LeetQuiz Editorial Team
You work at an ecommerce startup that wants to proactively reduce customer churn by building a prediction model. Your company's recent sales records are stored in a BigQuery table. Your objectives are to: 1) create an initial customer churn prediction model, 2) understand how the model is making predictions, 3) iterate on the model quickly, and 4) minimize costs. Given these requirements, how should you build your first model?
A
Export the data to a Cloud Storage bucket. Load the data into a pandas DataFrame on Vertex AI Workbench and train a logistic regression model with scikit-learn.
B
Create a tf.data.Dataset by using the TensorFlow BigQueryClient. Implement a deep neural network in TensorFlow.
C
Prepare the data in BigQuery and associate the data with a Vertex AI dataset. Create an AutoMLTabularTrainingJob to train a classification model.
D
Export the data to a Cloud Storage bucket. Create a tf.data.Dataset to read the data from Cloud Storage. Implement a deep neural network in TensorFlow.