
As a junior data scientist working with Python and TensorFlow on Google Cloud's Vertex AI, you have deployed a new model to the test environment for a critical project. After deployment, you encounter unexpected prediction errors that are difficult to diagnose because no logging information is available. Your team lead stresses that these issues must be resolved quickly to meet project deadlines and compliance requirements. Which of the following steps should you take to enable and obtain the necessary logs for debugging? (Choose two.)
A. Enable dynamic logging through the Vertex AI console without the need to redeploy the model, to immediately start capturing logs.
B. Configure access logging to monitor and record all requests and responses to your model, focusing on latency and access patterns.
C. Undeploy the current model version and redeploy it with logging enabled, ensuring that all future predictions are logged for analysis.
D. Implement container logging to capture detailed logs from the containers hosting your model, including system outputs and errors, for comprehensive debugging.
E. Use Cloud Logging to manually search and filter logs related to your model's predictions, without enabling any additional logging features.