
You work for a rapidly growing social media company that handles a high volume of data, including billions of historical user events and 100,000 categorical features. Currently, your team builds TensorFlow recommender models using an on-premises CPU cluster. As the dataset continues to grow, you observe a significant increase in model training time. To address this, you are considering migrating the models to Google Cloud to leverage its scalable infrastructure and minimize training time. What approach should you adopt to achieve this goal?
A
Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API
B
Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs
C
Deploy a matrix factorization model training job by using BigQuery ML
D
Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API
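
Options A and D both revolve around embedding lookups for the 100,000 categorical features. At its core, an embedding lookup (what `tf.nn.embedding_lookup` computes, and what the TPUEmbedding API accelerates and shards across TPU cores for very large tables) is a row gather from a trainable table. A minimal NumPy sketch of that operation, with hypothetical toy sizes standing in for the question's scale:

```python
import numpy as np

# Hypothetical toy sizes for illustration; the scenario in the question
# involves roughly 100,000 categorical features and far larger tables.
vocab_size = 8   # number of distinct categorical values
embed_dim = 4    # dimensionality of each embedding vector

rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((vocab_size, embed_dim))

# An embedding lookup is a row gather: each integer id selects one row.
ids = np.array([1, 5, 1])
vectors = embedding_table[ids]

print(vectors.shape)  # one embed_dim-length vector per id
```

The practical difference between the options is where this gather runs: a plain lookup keeps the whole table in one device's memory, while the TPUEmbedding API partitions huge tables across the memory of a TPU Pod slice, which is why it is singled out for workloads of this size.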