Google Professional Machine Learning Engineer

You recently trained an XGBoost model on tabular data for use within your organization. You plan to expose the model as an HTTP microservice so that internal teams can make predictions using the model. After deployment, you expect a small number of incoming requests. Given these requirements, you want to productionize the model with the least amount of development effort and ensure low latency for predictions. What should you do?
Explanation:

The best option in this scenario is to import the model into Vertex AI using a prebuilt XGBoost prediction container and deploy it to a Vertex AI endpoint. This minimizes development effort: the prebuilt container supplies the serving logic, so no custom serving code or Dockerfile is needed, and the managed endpoint provides low-latency online predictions with autoscaling that suits light traffic. Building and maintaining a custom container or a Flask-based app would require more development work and ongoing maintenance. Deploying the model to BigQuery ML would not fit the requirement either, since BigQuery is designed for batch, SQL-based inference rather than low-latency HTTP microservice requests.
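
The workflow can be sketched with the Vertex AI Python SDK. This is a minimal illustration, not a definitive implementation: the project, bucket, and display names are placeholders, and the prebuilt container URI should be verified against the current list of Vertex AI prebuilt prediction images for your XGBoost version.

```python
# Hypothetical deployment sketch; project, bucket, and image URI are
# placeholder assumptions and need real values before running.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Upload the saved model artifact (e.g. model.bst in a GCS folder),
# pointing Vertex AI at a prebuilt XGBoost serving image so no custom
# serving code or Dockerfile is required.
model = aiplatform.Model.upload(
    display_name="xgboost-tabular-model",
    artifact_uri="gs://my-bucket/xgboost-model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
    ),
)

# Deploy to a managed endpoint; a single small machine is enough
# for the expected low request volume.
endpoint = model.deploy(machine_type="n1-standard-2")

# Internal teams can then get online predictions over HTTP:
prediction = endpoint.predict(instances=[[0.5, 1.2, 3.4]])
print(prediction.predictions)
```

The endpoint exposes a REST interface, so internal teams can also call it directly over HTTPS without the SDK.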