
Answer-first summary for fast verification
Answer: Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.
The correct answer is A. Adding --runner=DataflowRunner to beam_pipeline_args routes the Evaluator component's Apache Beam job to Dataflow, Google Cloud's serverless Apache Beam service. Dataflow autoscales workers, so the evaluation no longer runs on the memory-constrained default runner, which resolves the out-of-memory error. Evaluation quality stays intact because all metrics are still computed over the full dataset, and infrastructure overhead is minimal since no VMs or clusters need to be provisioned or managed. The other options either reduce evaluation quality (D drops metrics), move the step outside the managed pipeline (B), or require a significant infrastructure migration (C).
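A minimal sketch of how the flag is typically passed. The project, region, and bucket values are placeholders (assumptions), not values from the question; the list itself is the standard way to hand Beam options to a TFX pipeline via its beam_pipeline_args parameter:

```python
# Beam options for running TFX components (e.g. the Evaluator) on Dataflow.
# All GCP resource names below are hypothetical placeholders.
beam_pipeline_args = [
    "--runner=DataflowRunner",              # send the Beam job to Dataflow
    "--project=my-gcp-project",             # placeholder project ID
    "--region=us-central1",                 # placeholder region
    "--temp_location=gs://my-bucket/tmp",   # placeholder staging bucket
]

# These args would then be passed when constructing the pipeline, e.g.:
#   tfx.orchestration.pipeline.Pipeline(..., beam_pipeline_args=beam_pipeline_args)
```

With DataflowRunner selected, the Evaluator's TFMA computation executes on autoscaled Dataflow workers instead of the pipeline's local process.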
Author: LeetQuiz Editorial Team
You are running a machine learning model training pipeline on Vertex AI and encounter an out-of-memory error during the evaluation step. You are using TensorFlow Model Analysis (TFMA) with a standard Evaluator TensorFlow Extended (TFX) pipeline component for this evaluation. Your goal is to stabilize the pipeline without compromising evaluation quality and to minimize infrastructure overhead. What should you do to resolve the out-of-memory error?
A
Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.
B
Move the evaluation step out of your pipeline and run it on custom Compute Engine VMs with sufficient memory.
C
Migrate your pipeline to Kubeflow hosted on Google Kubernetes Engine, and specify the appropriate node parameters for the evaluation step.
D
Add tfma.MetricsSpec() to limit the number of metrics in the evaluation step.