
Google Professional Machine Learning Engineer
You recently trained an XGBoost model for predicting customer churn that you plan to deploy to a production environment for real-time inference. Before sending a predict request to your model's binary, you need to perform a data preprocessing step to clean and structure the incoming data. This preprocessing service must expose a REST API that accepts requests within your secure internal VPC Service Controls perimeter and returns the processed data to the model. You want to configure this preprocessing step while minimizing both cost and deployment complexity. What approach should you take?
Explanation:
Option D is the correct answer for this scenario. Building a custom predictor class on top of the XGBoost Predictor from the Vertex AI SDK leverages prebuilt functionality for loading and serving XGBoost models, which reduces development effort and the potential for errors. Packaging the handler in a custom container image derived from a Vertex AI prebuilt container image ensures compatibility and smooth deployment. Storing the pickled model in Cloud Storage separates the model artifact from the prediction logic, so the model can be updated without rebuilding and redeploying the container. Together, this keeps the container image small and costs low while the preprocessing step and model deployment remain simple to manage.
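To make the approach concrete, here is a minimal sketch using the Vertex AI SDK's Custom Prediction Routines: the predictor subclasses XgboostPredictor and overrides only preprocess(), while the inherited load() pulls the pickled model from Cloud Storage at serving time. The names ChurnPredictor, the src/ directory, the Artifact Registry image URI, the bucket path, and the fillna(0) cleaning step are illustrative assumptions, not part of the original question.

```python
import pandas as pd

from google.cloud import aiplatform
from google.cloud.aiplatform.prediction import LocalModel
from google.cloud.aiplatform.prediction.xgboost.predictor import XgboostPredictor


class ChurnPredictor(XgboostPredictor):
    """Adds a preprocessing step in front of the stock XGBoost predictor.

    XgboostPredictor.load() already downloads and unpickles the model
    artifact from Cloud Storage, so only preprocess() is overridden here.
    """

    def preprocess(self, prediction_input: dict) -> pd.DataFrame:
        # Clean and structure the raw request payload before prediction,
        # e.g. impute missing values and shape it into the columns the
        # churn model expects. (Hypothetical cleaning logic.)
        instances = prediction_input["instances"]
        frame = pd.DataFrame(instances)
        return frame.fillna(0)


# Build a custom serving container from the predictor's source directory,
# layered on a Vertex AI prebuilt base image. Image URI is a placeholder.
local_model = LocalModel.build_cpr_model(
    "src/",
    "us-central1-docker.pkg.dev/my-project/my-repo/churn-cpr:latest",
    predictor=ChurnPredictor,
    requirements_path="src/requirements.txt",
)

# Register the model; the pickled model stays in Cloud Storage, separate
# from the container, so it can be updated without rebuilding the image.
model = aiplatform.Model.upload(
    local_model=local_model,
    display_name="churn-xgboost",
    artifact_uri="gs://my-bucket/churn-model/",  # hypothetical artifact path
)
```

Deploying this model to a private endpoint keeps the REST API reachable only inside the VPC Service Controls perimeter, and because preprocessing runs in the same container as the model, no separate preprocessing service needs to be hosted or paid for.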