
Google Professional Machine Learning Engineer
You are working on deploying a machine learning workflow from a prototype to production. Your feature engineering code is written in PySpark and currently runs on Dataproc Serverless. For model training, you use a Vertex AI custom training job. Currently, these two steps are disconnected, and the model training step must be initiated manually after the feature engineering step completes. To streamline this process and create a scalable, maintainable production workflow that runs end-to-end and tracks the connections between the steps, what should you do?