
As a Machine Learning Engineer at a large financial institution, you are tasked with developing a scalable and efficient training pipeline for a TensorFlow model that predicts credit risk. The pipeline must process several terabytes of structured financial data, run rigorous data quality checks before training, and run comprehensive model quality checks after training but before deployment. Given the institution's strict compliance requirements and the need to minimize both development time and infrastructure maintenance, which of the following approaches best meet these criteria? (Choose two.)
A. Develop the pipeline using Kubeflow Pipelines DSL with custom components for data validation and model evaluation, and orchestrate it on a self-managed Kubernetes cluster to ensure compliance with internal data governance policies.
B. Construct the pipeline using TensorFlow Extended (TFX) with its standard components for data validation, transformation, and model evaluation, and orchestrate it using Vertex AI Pipelines to leverage Google Cloud's managed services and scalability.
C. Build the pipeline using Apache Beam with custom data processing and validation logic, and orchestrate it using Cloud Composer (managed Apache Airflow) for flexibility in scheduling and monitoring.
D. Implement the pipeline using TensorFlow Extended (TFX) for its built-in data and model validation capabilities, and orchestrate it using Kubeflow Pipelines on Google Kubernetes Engine (GKE) to maintain control over the infrastructure.
E. Combine the use of TensorFlow Extended (TFX) for pipeline construction with Vertex AI Pipelines for orchestration, and additionally implement custom monitoring hooks in Vertex AI for real-time compliance tracking.
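For context, the TFX-plus-Vertex-AI-Pipelines approach referenced in options B and E could look roughly like the sketch below. It wires TFX's standard components for pre-training data validation (StatisticsGen, SchemaGen, ExampleValidator) and post-training model validation (Evaluator, which gates the Pusher on a metric threshold), then compiles the pipeline for Vertex AI Pipelines. The bucket paths, pipeline name, label key, AUC threshold, and trainer module file are all hypothetical placeholders, not values from the question.

```python
# Minimal sketch of a TFX pipeline compiled for Vertex AI Pipelines.
# All GCS paths and the label key below are hypothetical placeholders.
import tensorflow_model_analysis as tfma
from tfx import v1 as tfx

PIPELINE_NAME = 'credit-risk-tfx'                   # hypothetical
PIPELINE_ROOT = 'gs://your-bucket/pipeline-root'    # hypothetical
DATA_ROOT = 'gs://your-bucket/credit-data'          # hypothetical
MODULE_FILE = 'gs://your-bucket/trainer_module.py'  # hypothetical user code
SERVING_DIR = 'gs://your-bucket/serving-model'      # hypothetical


def create_pipeline() -> tfx.dsl.Pipeline:
    # Ingest structured financial data from Cloud Storage.
    example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)

    # Data quality checks before training: compute statistics, infer a
    # schema, and surface anomalies (missing values, drift, skew).
    statistics_gen = tfx.components.StatisticsGen(
        examples=example_gen.outputs['examples'])
    schema_gen = tfx.components.SchemaGen(
        statistics=statistics_gen.outputs['statistics'])
    example_validator = tfx.components.ExampleValidator(
        statistics=statistics_gen.outputs['statistics'],
        schema=schema_gen.outputs['schema'])

    # Train the TensorFlow model defined in the user-provided module file.
    trainer = tfx.components.Trainer(
        module_file=MODULE_FILE,
        examples=example_gen.outputs['examples'],
        schema=schema_gen.outputs['schema'],
        train_args=tfx.proto.TrainArgs(num_steps=10000),
        eval_args=tfx.proto.EvalArgs(num_steps=1000))

    # Model quality checks after training: block deployment unless AUC
    # clears a threshold (the 0.8 value is an illustrative choice).
    eval_config = tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key='defaulted')],  # hypothetical
        slicing_specs=[tfma.SlicingSpec()],
        metrics_specs=[tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name='AUC',
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value': 0.8})))])])
    evaluator = tfx.components.Evaluator(
        examples=example_gen.outputs['examples'],
        model=trainer.outputs['model'],
        eval_config=eval_config)

    # Push only models "blessed" by the Evaluator.
    pusher = tfx.components.Pusher(
        model=trainer.outputs['model'],
        model_blessing=evaluator.outputs['blessing'],
        push_destination=tfx.proto.PushDestination(
            filesystem=tfx.proto.PushDestination.Filesystem(
                base_directory=SERVING_DIR)))

    return tfx.dsl.Pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        components=[example_gen, statistics_gen, schema_gen,
                    example_validator, trainer, evaluator, pusher])


# Compile the pipeline into a spec that Vertex AI Pipelines can execute.
tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename='credit_risk_pipeline.json').run(create_pipeline())
```

The emitted credit_risk_pipeline.json could then be submitted as a PipelineJob via the google.cloud.aiplatform client, leaving scheduling, scaling, and infrastructure maintenance to the managed Vertex AI service.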