
Answer-first summary for fast verification
Answer: Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
Option C is the correct choice because it aligns with Google Cloud's recommended MLOps practices. Vertex ML Metadata is purpose-built for tracking ML pipeline metadata, including parameters, and provides lineage tracking and Vertex AI integration that a general-purpose database such as Cloud SQL lacks. Cloud Storage is the natural home for model binaries because of its scalability, low cost, and seamless integration with Vertex AI, while GitHub remains the right place for version-controlled source code. The alternatives are suboptimal: A and B put parameters in Cloud SQL, which offers no ML-specific tracking features, and A and D store model binaries in GitHub, which handles large files poorly and can incur higher costs. The community discussion supports C unanimously, with upvoted reasoning.
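Assuming the Vertex AI SDK (`google-cloud-aiplatform`) and the Cloud Storage client (`google-cloud-storage`) are installed, the option-C split can be sketched roughly as below. The project ID, region, experiment, bucket, and run names are hypothetical placeholders, not values from the question:

```python
# A minimal sketch of the option-C artifact layout: parameters go to
# Vertex ML Metadata (via an experiment run), model binaries go to
# Cloud Storage. All resource names below are hypothetical.

def log_run_params(params: dict) -> None:
    """Record pipeline parameters in Vertex ML Metadata via an experiment run."""
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(
        project="my-project",            # hypothetical project ID
        location="us-central1",
        experiment="retraining-experiments",
    )
    aiplatform.start_run("run-001")
    aiplatform.log_params(params)        # persisted in Vertex ML Metadata
    aiplatform.end_run()


def model_artifact_uri(bucket: str, model_name: str, version: str) -> str:
    """Build the Cloud Storage URI under which a model binary is stored."""
    return f"gs://{bucket}/models/{model_name}/{version}/model.pkl"


def upload_model_binary(local_path: str, bucket: str,
                        model_name: str, version: str) -> str:
    """Upload a trained model binary to Cloud Storage and return its URI."""
    from google.cloud import storage     # pip install google-cloud-storage

    uri = model_artifact_uri(bucket, model_name, version)
    blob_path = uri[len(f"gs://{bucket}/"):]
    storage.Client().bucket(bucket).blob(blob_path).upload_from_filename(local_path)
    return uri
```

The source code itself stays in GitHub as usual; only the parameters and the binaries flow through the two Google Cloud services.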
Author: LeetQuiz Editorial Team
You are building an MLOps platform to automate your company's ML experiments and model retraining. You need to organize the artifacts for dozens of pipelines. How should you store the pipelines' artifacts?
A
Store parameters in Cloud SQL, and store the models’ source code and binaries in GitHub.
B
Store parameters in Cloud SQL, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
C
Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
D
Store parameters in Vertex ML Metadata and store the models’ source code and binaries in GitHub.