
You deploy a real-time inference service for a trained model that supports a business-critical application. It is essential to monitor both the data submitted to the web service and the predictions it generates.
You need to implement a monitoring solution for the deployed model with minimal administrative effort.
What should you do?
A. View the explanations for the registered model in Azure Machine Learning studio.
B. Enable Azure Application Insights for the service endpoint and view the logged data in the Azure portal.
C. View the log files generated by the experiment used to train the model.
D. Create an MLflow tracking URI that references the endpoint, and view the data logged by MLflow.