
Google Professional Machine Learning Engineer
You are developing a TensorFlow Extended (TFX) pipeline with standard components that includes data preprocessing. The pipeline will be deployed to production and must process up to 100 TB of data from BigQuery. To ensure the data preprocessing steps scale efficiently, publish metrics and parameters to Vertex AI Experiments, and track artifacts using Vertex ML Metadata, how should you configure the pipeline run?
Exam-Like
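For context, the setup this question describes is typically configured by running the TFX pipeline on Vertex AI Pipelines (which integrates with Vertex ML Metadata and Vertex AI Experiments) while passing Beam pipeline arguments that use the DataflowRunner, so data-heavy standard components such as ExampleGen and Transform scale out on Dataflow. The sketch below illustrates that shape; the project, region, and bucket names are hypothetical, and the `experiment` argument to `PipelineJob.submit` assumes a recent `google-cloud-aiplatform` SDK version.

```python
from tfx import v1 as tfx
from google.cloud import aiplatform

# Hypothetical identifiers for illustration only.
PROJECT = "my-project"
REGION = "us-central1"
BUCKET = "gs://my-bucket"

pipeline = tfx.dsl.Pipeline(
    pipeline_name="preprocess-pipeline",
    pipeline_root=f"{BUCKET}/pipeline-root",
    components=[],  # standard TFX components (e.g. BigQueryExampleGen, Transform) go here
    # Run Beam-based components on Dataflow so preprocessing scales to ~100 TB.
    beam_pipeline_args=[
        "--runner=DataflowRunner",
        f"--project={PROJECT}",
        f"--region={REGION}",
        f"--temp_location={BUCKET}/tmp",
    ],
)

# Compile the TFX pipeline into a Vertex AI Pipelines (KFP v2) spec.
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename="pipeline.json",
)
runner.run(pipeline)

# Submit to Vertex AI Pipelines; artifacts are tracked in Vertex ML Metadata,
# and associating the run with an experiment publishes metrics and parameters
# to Vertex AI Experiments.
aiplatform.init(project=PROJECT, location=REGION)
job = aiplatform.PipelineJob(
    display_name="preprocess-pipeline",
    template_path="pipeline.json",
    pipeline_root=f"{BUCKET}/pipeline-root",
)
job.submit(experiment="preprocess-experiment")  # hypothetical experiment name
```

This sketch requires GCP credentials and a populated component list to actually run; it is shown only to make the configuration concrete.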