
You have trained a machine learning model and packaged it in a custom Docker container for serving. After deploying the container to Vertex AI Model Registry, you attempt to submit a batch prediction job. However, the job fails with the following error message: 'Error model server never became ready. Please validate that your model file or container configuration are valid.' No additional errors are found in the logs. What steps would you take to diagnose and potentially resolve this issue?
A. Add a logging configuration to your application to emit logs to Cloud Logging
B. Change the HTTP port in your model's configuration to the default value of 8080
C. Change the healthRoute value in your model's configuration to /healthcheck
D. Pull the Docker image and use the docker run command to launch it locally, then use the docker logs command to inspect the error logs (see the sketch after the options)
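For option D, the idea is to reproduce locally the same readiness check Vertex AI performs before it reports "model server never became ready." Below is a minimal Python sketch of that debugging loop; the image URI, port 8080, and /health route are placeholder assumptions and should be replaced with the values configured on your own Model resource.

```python
import subprocess
import time
import urllib.request

# Hypothetical values -- substitute your own image URI, serving port, and health route.
IMAGE = "us-docker.pkg.dev/my-project/my-repo/my-model:latest"  # placeholder image URI
PORT = 8080               # assumed serving port (Vertex AI defaults to 8080)
HEALTH_ROUTE = "/health"  # assumed route; must match the healthRoute on the Model resource

# Pull the image and start the container locally, mapping the serving port.
subprocess.run(["docker", "pull", IMAGE], check=True)
container_id = subprocess.run(
    ["docker", "run", "-d", "-p", f"{PORT}:{PORT}", IMAGE],
    check=True, capture_output=True, text=True,
).stdout.strip()

try:
    # Poll the health route, roughly mimicking the readiness probe Vertex AI runs.
    for _ in range(30):
        try:
            with urllib.request.urlopen(
                f"http://localhost:{PORT}{HEALTH_ROUTE}", timeout=2
            ) as resp:
                if resp.status == 200:
                    print("Model server became ready.")
                    break
        except OSError:
            time.sleep(2)
    else:
        # If the server never answers, the container logs usually show the real error
        # (missing model file, crash on startup, wrong port, etc.).
        print("Model server never became ready; dumping container logs:")
        logs = subprocess.run(
            ["docker", "logs", container_id], capture_output=True, text=True
        )
        print(logs.stdout + logs.stderr)
finally:
    # Clean up the local container.
    subprocess.run(["docker", "rm", "-f", container_id], stdout=subprocess.DEVNULL)
```

Running the container locally this way surfaces startup errors directly in the Docker logs, which is why option D is the most effective first diagnostic step: it does not require redeploying to Vertex AI or guessing at port and health-route configuration changes.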