You are building an ML pipeline to process and analyze both streaming and batch datasets. The pipeline must handle data validation, preprocessing, model training, and model deployment in a consistent and automated way. You need to design an efficient and scalable solution that captures model training metadata and is easily reproducible. You also want to be able to reuse custom components for different parts of your pipeline. What should you do?
A. Use Cloud Composer for distributed processing of batch and streaming data in the pipeline.
B. Use Dataflow for distributed processing of batch and streaming data in the pipeline.
C. Use Cloud Build to build and push Docker images for each pipeline component.
D. Implement an orchestration framework such as Kubeflow Pipelines or Vertex AI Pipelines.
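For context on option D, below is a minimal sketch of how reusable custom components and automated metadata capture look in the Kubeflow Pipelines (KFP v2) SDK, which also compiles to a spec runnable on Vertex AI Pipelines. The component names, bucket path, and pipeline name are illustrative, not from the question.

# Minimal KFP v2 sketch: reusable components wired into a pipeline.
# Each run on Vertex AI Pipelines automatically records lineage and
# training metadata, and components can be reused across pipelines.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def validate_data(input_path: str) -> str:
    """Reusable component: validate raw data before preprocessing."""
    # Placeholder validation logic; real checks would go here.
    print(f"Validating data at {input_path}")
    return input_path


@dsl.component(base_image="python:3.11")
def train_model(data_path: str) -> str:
    """Reusable component: train a model on the validated data."""
    print(f"Training on {data_path}")
    return "gs://example-bucket/model"  # hypothetical artifact URI


@dsl.pipeline(name="ml-training-pipeline")
def training_pipeline(input_path: str):
    validated = validate_data(input_path=input_path)
    train_model(data_path=validated.output)


if __name__ == "__main__":
    # Compile to a pipeline spec that can be submitted to
    # Vertex AI Pipelines or a Kubeflow Pipelines cluster.
    compiler.Compiler().compile(
        pipeline_func=training_pipeline,
        package_path="training_pipeline.json",
    )

Note that this addresses orchestration, metadata, and reusability; distributed processing of the streaming and batch data itself (options A and B) and container builds (option C) are separate concerns that an orchestrated pipeline can invoke as steps.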