
Answer-first summary for fast verification
Answer: Create an instance of the MLClient class.
The correct first step is to create an instance of the MLClient class (Option B). MLClient is the primary entry point for Azure ML SDK v2 operations: it authenticates against Azure and connects to the Azure ML workspace, and every subsequent SDK call goes through it. Once the MLClient instance exists, you reach deployment operations via ml_client.online_deployments and call methods such as get_logs() to retrieve the container logs, including inference server console output and print/log statements from the scoring script. Option D (OnlineDeploymentOperations) is incorrect because that class is not meant to be instantiated directly; it is accessed through the MLClient instance. Options A and C (SSH and Docker tools) are not part of the SDK-based approach and are unnecessary for log retrieval via the SDK. The community discussion, with high upvotes and references to official documentation, strongly supports B, emphasizing that creating an MLClient is the foundational step for SDK operations.
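The two steps described above can be sketched as follows. This is a minimal illustration using the azure-ai-ml SDK v2; the subscription, resource group, workspace, endpoint, and deployment names are placeholders you would replace with your own values.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Step 1 (the correct first action, Option B): create the MLClient,
# which authenticates and connects to the Azure ML workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # placeholder
    resource_group_name="<resource-group>",   # placeholder
    workspace_name="<workspace-name>",        # placeholder
)

# Step 2: retrieve container logs through ml_client.online_deployments,
# the accessor for OnlineDeploymentOperations (which is why Option D's
# direct instantiation is unnecessary). The returned string includes the
# inference server console output and scoring script print/log statements.
logs = ml_client.online_deployments.get_logs(
    name="<deployment-name>",          # placeholder deployment name
    endpoint_name="<endpoint-name>",   # placeholder endpoint name
    lines=100,                         # number of trailing log lines
)
print(logs)
```

Note that no SSH or Docker connection to the inference server is involved; the SDK fetches the logs over the workspace connection established by MLClient.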
Author: LeetQuiz Editorial Team
You have an Azure Machine Learning model deployed to an online endpoint and need to review the container logs, including the inference server console output and print/log statements from the model's scoring script, using the Azure ML Python SDK v2.
What is the first action you should take?
A
Connect by using SSH to the inference server.
B
Create an instance of the MLCIient class.
C
Connect by using Docker tools to the inference server.
D
Create an instance of the OnlineDeploymentOperations class.