
Google Professional Machine Learning Engineer
You are working as a machine learning engineer and using Kubeflow Pipelines to develop an end-to-end PyTorch-based MLOps pipeline. This pipeline includes stages such as reading data from BigQuery, processing the data, conducting feature engineering, model training, model evaluation, and deploying the model as a binary file to Cloud Storage. To enhance the performance and accuracy of your models, you are experimenting with several different versions of the feature engineering and model training steps, and running each new version in Vertex AI Pipelines. However, each pipeline run currently takes over an hour to complete, significantly slowing down your development process. You are looking for a solution to speed up the pipeline execution without incurring additional costs. What should you do?
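The key lever in this scenario is execution caching: both Kubeflow Pipelines and Vertex AI Pipelines can skip a step and reuse its stored output when the step's definition and inputs are unchanged (in KFP v2 this is controlled via `task.set_caching_options(...)`, and on Vertex AI via the `enable_caching` flag on a pipeline job). Since only the feature engineering and training steps change between experiments, the expensive upstream steps can be served from cache at no extra cost. The following framework-free sketch illustrates the memoization principle; the step names and the `cached_step` helper are hypothetical, not a Vertex AI API:

```python
import hashlib
import json

# Hypothetical in-memory cache keyed by (step name, input fingerprint).
# Pipeline-level execution caching works on the same principle: if a
# step's inputs are unchanged, its cached output is reused.
_cache = {}
run_count = {"calls": 0}

def cached_step(name, fn, *inputs):
    """Run fn(*inputs) unless an identical invocation is already cached."""
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    key = (name, fingerprint)
    if key not in _cache:
        run_count["calls"] += 1       # count actual executions only
        _cache[key] = fn(*inputs)
    return _cache[key]

# Simulated pipeline: only the training step changes between experiments,
# so the upstream steps are served from cache on the second run.
def read_data():        return [1, 2, 3]
def engineer(rows):     return [r * 2 for r in rows]
def train(feats, lr):   return sum(feats) * lr

for lr in (0.1, 0.2):   # two experiment runs, different hyperparameters
    rows = cached_step("read", read_data)
    feats = cached_step("features", engineer, rows)
    model = cached_step("train", train, feats, lr)

print(run_count["calls"])   # read + features execute once; train twice
```

In a real Vertex AI run the same effect comes from leaving caching enabled on the unchanged steps, so each experiment only pays for the feature engineering and training work that actually differs.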