
Answer-first summary for fast verification
Answer: Create an instance of the MLClient class.
The correct first step is to create an instance of the MLClient class (Option B). MLClient is the primary entry point for Azure ML SDK v2 operations and is required to connect to the Azure ML workspace. Once the MLClient instance is created, you can access deployment operations via ml_client.online_deployments and call methods such as get_logs() to retrieve the container logs, including inference server console output and print/log statements from the scoring script. Option A (SSH) and Option C (Docker tools) are incorrect because they are not SDK-based approaches and are unnecessary for this task. Option D (creating an instance of OnlineDeploymentOperations directly) is incorrect because, per the community discussion and Microsoft documentation, OnlineDeploymentOperations should not be instantiated directly; it is accessed through the MLClient instance (e.g., ml_client.online_deployments). The community consensus, with upvoted comments and references to official documentation, strongly supports Option B as the correct first action.
Author: LeetQuiz Editorial Team
You have an Azure Machine Learning model deployed to an online endpoint and need to review the container logs, including the inference server console output and print/log statements from the model's scoring script, using the Azure ML Python SDK v2.
What is the first action you should take?
A
Connect by using SSH to the inference server.
B
Create an instance of the MLClient class.
C
Connect by using Docker tools to the inference server.
D
Create an instance of the OnlineDeploymentOperations class.