
Answer-first summary for fast verification
Answer: D — Compare the results to the evaluation results from a previous run. If the performance improved, deploy the model to a Vertex AI endpoint with training/serving skew threshold model monitoring. When the model monitoring threshold is triggered, redeploy the pipeline.
Option D is correct because it minimizes cost while keeping model performance high. Comparing the current run's results to the evaluation results of a previous run ensures that only models that actually improve on the one in production get deployed; comparing training loss to evaluation loss within a single run (options A and B) only checks for overfitting and says nothing about whether the new model beats the currently deployed one. Training/serving skew monitoring on the endpoint then detects when the distribution of incoming prediction requests diverges from the training data, which signals that the model is going stale. Triggering retraining only when that threshold is crossed keeps the model accurate over time while avoiding the cost of the nightly cron-based retraining in options A and C, which rerun the full pipeline every night whether or not anything in the data has changed.
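The compare-and-gate step can be sketched as a small pipeline component. The metric name (`rmse`), the metric values, and the helper name `should_deploy` below are illustrative assumptions for this price-regression scenario, not part of the question or of any Vertex AI API:

```python
def should_deploy(current_metrics: dict, previous_metrics: dict,
                  metric: str = "rmse", lower_is_better: bool = True) -> bool:
    """Gate deployment: deploy only if the current run improves on the previous run."""
    current = current_metrics[metric]
    previous = previous_metrics[metric]
    return current < previous if lower_is_better else current > previous

# Example: RMSE dropped versus the previous run, so the new model passes the gate.
prev = {"rmse": 1520.0}
curr = {"rmse": 1410.0}
print(should_deploy(curr, prev))  # True
```

In a real Vertex AI pipeline this logic would live in a conditional step that reads the previous run's evaluation artifact and branches on the comparison.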
Author: LeetQuiz Editorial Team
You are working as a machine learning engineer for a car dealership company. Your task is to develop an ML model to predict the cost of used automobiles. The prediction is based on various features such as location, condition, model type, color, and engine/battery efficiency. The data is updated every night, and car dealerships will rely on your model to determine appropriate car prices. You have created a Vertex AI pipeline that reads the data, splits it into training, evaluation, and test sets, performs feature engineering, trains the model using the training dataset, and validates the model using the evaluation dataset. Now, you need to design a retraining workflow that minimizes costs while ensuring model performance remains high. What strategy should you adopt?
A
Compare the training and evaluation losses of the current run. If the losses are similar, deploy the model to a Vertex AI endpoint. Configure a cron job to redeploy the pipeline every night.
B
Compare the training and evaluation losses of the current run. If the losses are similar, deploy the model to a Vertex AI endpoint with training/serving skew threshold model monitoring. When the model monitoring threshold is triggered, redeploy the pipeline.
C
Compare the results to the evaluation results from a previous run. If the performance improved, deploy the model to a Vertex AI endpoint. Configure a cron job to redeploy the pipeline every night.
D
Compare the results to the evaluation results from a previous run. If the performance improved, deploy the model to a Vertex AI endpoint with training/serving skew threshold model monitoring. When the model monitoring threshold is triggered, redeploy the pipeline.
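As a rough illustration of what training/serving skew monitoring does under the hood (Vertex AI's managed monitoring handles this for you), the sketch below compares the training-time distribution of a categorical feature against the distribution seen at serving time and fires when the gap exceeds a threshold. The distance measure (L-infinity between normalized histograms) and the `0.3` threshold are illustrative assumptions:

```python
from collections import Counter

def skew_score(train_values, serving_values):
    """L-infinity distance between two categorical feature distributions."""
    def distribution(values):
        counts = Counter(values)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}
    p, q = distribution(train_values), distribution(serving_values)
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

def threshold_triggered(train_values, serving_values, threshold=0.3):
    """True when skew exceeds the alert threshold, i.e. retraining is due."""
    return skew_score(train_values, serving_values) > threshold

# Serving traffic has shifted from mostly sedans to mostly SUVs.
train = ["sedan"] * 70 + ["suv"] * 30
serving = ["sedan"] * 30 + ["suv"] * 70
print(threshold_triggered(train, serving))  # True
```

In option D, crossing such a threshold is what triggers the pipeline rerun, so retraining happens only when the serving data has actually drifted.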