
Answer-first summary for fast verification
Answer: Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API
The correct answer is A. Tensor Processing Units (TPUs) are hardware accelerators designed by Google specifically for machine learning workloads. TPU VMs with TPUv3 Pod slices scale to very large datasets, making them well suited to recommender models trained on billions of events. The TPUEmbedding API shards large embedding tables across the TPU Pod, so the 100,000 categorical features can be looked up and updated efficiently, significantly reducing training time. The other options fall short: an autoscaling CPU cluster (B) does not address the per-step compute bottleneck, BigQuery ML matrix factorization (C) would require abandoning the existing TensorFlow models, and A100 GPUs with tf.nn.embedding_lookup (D) lack the automatic embedding-table sharding that the TPUEmbedding API provides.
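To see why embedding performance dominates here, note that an embedding lookup maps sparse categorical IDs to dense trainable vectors; the TPUEmbedding API distributes these tables across TPU cores, while the lookup itself is just a row gather. A minimal NumPy sketch of that core operation (table sizes are illustrative, not taken from the question):

```python
import numpy as np

# Hypothetical sizes: a real recommender with 100,000 categorical
# features would hold many large tables; we sketch a single small one.
vocab_size = 8       # number of distinct category values
embedding_dim = 4    # dense vector length per category

rng = np.random.default_rng(0)
# In training this table would be a trainable parameter.
table = rng.normal(size=(vocab_size, embedding_dim))

# A batch of sparse categorical IDs (e.g. item-category indices).
ids = np.array([3, 0, 5])

# Embedding lookup: gather the rows indexed by `ids`.
dense = table[ids]
print(dense.shape)  # one embedding vector per input ID
```

With billions of events and 100,000 features, these gathers and their gradient updates become the bottleneck, which is what the TPUEmbedding API is built to parallelize.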
Author: LeetQuiz Editorial Team
You work for a rapidly growing social media company that handles a high volume of data, including billions of historical user events and 100,000 categorical features. Currently, your team builds TensorFlow recommender models using an on-premises CPU cluster. As the dataset continues to grow, you observe a significant increase in model training time. To address this, you are considering migrating the models to Google Cloud to leverage its scalable infrastructure and minimize training time. What approach should you adopt to achieve this goal?
A
Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API
B
Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs
C
Deploy a matrix factorization model training job by using BigQuery ML
D
Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API