
You are developing a recommendation engine for an online clothing store that leverages historical customer transaction data stored in BigQuery and Cloud Storage. Your tasks include performing exploratory data analysis (EDA), preprocessing the data, and training machine learning models. You will need to rerun these steps multiple times as you experiment with different algorithms to find the best performing model. Given that you aim to minimize both cost and development effort during these experiments, how should you configure the environment to achieve this balance?
A
Create a Vertex AI Workbench user-managed notebook using the default VM instance, and use the %%bigquery magic commands in Jupyter to query the tables.
B
Create a Vertex AI Workbench managed notebook to browse and query the tables directly from the JupyterLab interface.
C
Create a Vertex AI Workbench user-managed notebook on a Dataproc Hub, and use the %%bigquery magic commands in Jupyter to query the tables.
D
Create a Vertex AI Workbench managed notebook on a Dataproc cluster, and use the spark-bigquery-connector to access the tables.
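For reference, the %%bigquery cell magic mentioned in options A and C ships with the google-cloud-bigquery library and lets a notebook cell run SQL against BigQuery and return the result as a pandas DataFrame. The sketch below shows the magic alongside the equivalent Python client call; the project, dataset, table, and column names are hypothetical.

```python
# In a Vertex AI Workbench notebook, the BigQuery cell magic is loaded once:
#   %load_ext google.cloud.bigquery
#
# A cell can then run a query and store the result in a DataFrame:
#   %%bigquery transactions_df
#   SELECT customer_id, product_id, purchase_ts
#   FROM `my-project.store.transactions`   -- hypothetical table
#   LIMIT 1000

# Equivalent call with the BigQuery Python client (runnable outside cell magics):
from google.cloud import bigquery

client = bigquery.Client()  # uses the notebook's default credentials
sql = """
    SELECT customer_id, product_id, purchase_ts
    FROM `my-project.store.transactions`  -- hypothetical table
    LIMIT 1000
"""
transactions_df = client.query(sql).to_dataframe()
print(transactions_df.head())
```

Either form queries the tables in place, so no cluster (Dataproc Hub or Dataproc with the spark-bigquery-connector) needs to be provisioned just to explore the data.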