
Answer-first summary for fast verification
Answer: Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.
Option C, 'Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint,' is the best choice. A custom prediction routine encapsulates both the preprocessing and postprocessing steps inside the serving container alongside the XGBoost model, so the Golang backend simply calls the Vertex AI endpoint over HTTP and needs no model-specific code. Because Vertex AI manages the serving infrastructure, this approach also minimizes infrastructure maintenance and code changes and lets you deploy to production quickly. Custom prediction routines are specifically designed for scenarios where custom pre- and postprocessing is required at serving time.
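To make the idea concrete, here is a minimal, self-contained sketch of the shape a custom prediction routine takes. The real base class lives in the Vertex AI SDK (`google.cloud.aiplatform.prediction`), and its `load`, `preprocess`, `predict`, and `postprocess` hooks are what the serving container invokes; in this sketch the SDK import and the XGBoost booster are replaced by stand-ins so the example runs anywhere, and the scaling, labels, and bucket URI are hypothetical.

```python
class StubXGBoostModel:
    """Stand-in for a loaded XGBoost booster (illustration only)."""
    def predict(self, features):
        # Pretend each prediction score is the sum of the scaled features.
        return [sum(row) for row in features]


class CustomPredictor:
    """Sketch of a Vertex AI custom prediction routine.

    The real implementation would subclass
    google.cloud.aiplatform.prediction.predictor.Predictor and load an
    actual XGBoost model; the hook names and call order shown here match
    that interface.
    """

    def load(self, artifacts_uri: str) -> None:
        # In a real routine: download and deserialize the XGBoost model
        # from the Cloud Storage artifacts_uri.
        self._model = StubXGBoostModel()

    def preprocess(self, prediction_input: dict) -> list:
        # Example preprocessing: scale raw feature values to [0, 1],
        # assuming requests arrive in the {"instances": [...]} envelope.
        instances = prediction_input["instances"]
        return [[value / 100.0 for value in row] for row in instances]

    def predict(self, instances: list) -> list:
        return self._model.predict(instances)

    def postprocess(self, prediction_results: list) -> dict:
        # Example postprocessing: map raw scores to labels and wrap them
        # in the response envelope the Golang backend expects.
        return {
            "predictions": [
                {"score": score, "label": "high" if score > 1.0 else "low"}
                for score in prediction_results
            ]
        }


if __name__ == "__main__":
    predictor = CustomPredictor()
    predictor.load("gs://example-bucket/model/")  # hypothetical URI
    request = {"instances": [[50, 80], [10, 20]]}
    response = predictor.postprocess(
        predictor.predict(predictor.preprocess(request))
    )
    print(response)
```

Once a container built from a routine like this is uploaded to Vertex AI Model Registry and deployed, the Golang service on GKE only ever sends raw instances to the endpoint; every transformation stays server-side with the model.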
Author: LeetQuiz Editorial Team
You have trained a machine learning model using XGBoost in Python intended for online serving. The model prediction service is expected to be invoked by a backend service developed in Golang, which operates on a Google Kubernetes Engine (GKE) cluster. The ML model necessitates certain preprocessing and postprocessing steps to function correctly at serving time. Your objectives include minimizing code changes and infrastructure maintenance, and deploying the model into production swiftly. Given these requirements, what should you do to implement the preprocessing and postprocessing steps and ensure efficient deployment?
A
Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server, and deploy it on your organization’s GKE cluster.
B
Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server. Upload the image to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.
C
Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.
D
Use the XGBoost prebuilt serving container when importing the trained model into Vertex AI. Deploy the model to a Vertex AI endpoint. Work with the backend engineers to implement the pre- and postprocessing steps in the Golang backend service.