
You are running a machine learning model training pipeline on Vertex AI and encounter an out-of-memory error during the evaluation step. The evaluation is performed by the standard TensorFlow Extended (TFX) Evaluator component, which uses TensorFlow Model Analysis (TFMA). Your goal is to stabilize the pipeline without compromising evaluation quality and with minimal infrastructure overhead. What should you do to resolve the out-of-memory error?
A
Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.
B
Move the evaluation step out of your pipeline and run it on custom Compute Engine VMs with sufficient memory.
C
Migrate your pipeline to Kubeflow hosted on Google Kubernetes Engine, and specify the appropriate node parameters for the evaluation step.
D
Add tfma.MetricsSpec() to limit the number of metrics in the evaluation step.
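
Because the TFX Evaluator runs on Apache Beam, option A can be implemented by passing Dataflow runner flags through beam_pipeline_args, which offloads the memory-heavy evaluation to autoscaling Dataflow workers instead of the local pipeline VM. The sketch below is a minimal illustration; the project, region, and bucket names are placeholders, not values from the question.

```python
# Minimal sketch of Beam pipeline args that route TFX component execution
# (including the Evaluator) to Dataflow. All resource names are hypothetical.
beam_pipeline_args = [
    "--runner=DataflowRunner",              # execute Beam steps on Dataflow
    "--project=my-gcp-project",             # placeholder GCP project ID
    "--region=us-central1",                 # placeholder Dataflow region
    "--temp_location=gs://my-bucket/tmp",   # placeholder GCS staging path
]

# These args would typically be passed to the TFX pipeline definition, e.g.:
#   tfx.dsl.Pipeline(..., beam_pipeline_args=beam_pipeline_args)
print(beam_pipeline_args[0])
```

With this configuration, Dataflow provisions workers sized for the evaluation workload, so the pipeline stays intact and no separate VMs or cluster migration are needed.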