
Answer-first summary for fast verification
Answer: Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
The correct answer is C. Vertex ML Metadata is purpose-built for storing and tracking parameters and other metadata across ML pipelines, so it offers the tightest integration with an MLOps workflow on Google Cloud. Storing the models’ source code in GitHub follows standard practice: it is a widely used version control system well suited to collaboration. Storing the models’ binaries in Cloud Storage is scalable and cost-effective, making it the right home for large artifacts such as trained models. Options A and B skip Vertex ML Metadata, which is specifically suited to parameter storage and experiment tracking, while options A and D keep large binaries in GitHub, which is inefficient and costly for files of that size.
Author: LeetQuiz Editorial Team
Your company is developing an MLOps platform to automate machine learning experiments and manage model retraining workflows efficiently. With dozens of ML pipelines to handle, it's crucial to properly organize and store the different artifacts produced. Considering scalability, integration, and management of metadata, how should you store the pipelines' artifacts?
A
Store parameters in Cloud SQL, and store the models’ source code and binaries in GitHub.
B
Store parameters in Cloud SQL, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
C
Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
D
Store parameters in Vertex ML Metadata and store the models’ source code and binaries in GitHub.
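The layout recommended in option C can be sketched in Python. This is an illustrative sketch only: the project, location, bucket, experiment, and run names are hypothetical placeholders, and the `log_experiment_run` function assumes the `google-cloud-aiplatform` and `google-cloud-storage` client libraries plus valid credentials.

```python
# Where each pipeline artifact belongs under option C.
ARTIFACT_STORES = {
    "parameters": "Vertex ML Metadata",  # experiment and lineage tracking
    "source_code": "GitHub",             # version control, collaboration
    "model_binary": "Cloud Storage",     # scalable storage for large files
}


def storage_target(artifact_kind: str) -> str:
    """Return the recommended store for a given pipeline artifact."""
    return ARTIFACT_STORES[artifact_kind]


def log_experiment_run(params: dict, model_path: str) -> None:
    """Sketch: record parameters in Vertex ML Metadata (via Vertex AI
    Experiments) and upload the model binary to Cloud Storage.

    All resource names below are placeholders, not real resources.
    """
    from google.cloud import aiplatform, storage

    # Experiments created here are backed by Vertex ML Metadata.
    aiplatform.init(
        project="my-project",
        location="us-central1",
        experiment="retraining-experiments",
    )
    aiplatform.start_run("run-001")
    aiplatform.log_params(params)  # parameters land in Vertex ML Metadata
    aiplatform.end_run()

    # Model binaries go to a Cloud Storage bucket, not GitHub.
    bucket = storage.Client().bucket("my-model-artifacts")
    bucket.blob("models/run-001/model.pkl").upload_from_filename(model_path)
```

The source code itself never passes through this script; it lives in GitHub and is versioned there as usual.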