
Answer-first summary for fast verification
Answer: Create a batch inference pipeline from the training pipeline.
The question asks for an endpoint that performs asynchronous predictions on a dataset of input data values, which points directly to batch inference. Batch inference pipelines in Azure ML are designed for asynchronous processing of large datasets: predictions are generated offline and the results are written to storage.

A real-time inference pipeline (option C) is synchronous and intended for immediate, low-latency predictions on individual requests, not for scoring an entire dataset. Cloning the training pipeline (option A) merely copies it and does not create an inference endpoint. Replacing the dataset with an Enter Data Manually module (option D) is for ad-hoc testing, not production deployment.

The community discussion supports option B with 100% consensus, highlighting 'asynchronous' as the key indicator for batch inference, and references official Microsoft documentation confirming that batch inference is used to apply a model to multiple cases asynchronously.
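The batch vs. real-time distinction can be sketched in plain Python. This is a hypothetical illustration of the two scoring patterns, not Azure ML SDK code: `predict` is a stand-in for the trained regression model, and the function names are invented for the example.

```python
# Hypothetical sketch contrasting the two inference patterns.
# `predict` stands in for the trained regression model; in Azure ML
# the deployed pipeline's model would play this role.

def predict(x: float) -> float:
    """Stand-in regression model: y = 2x + 1."""
    return 2.0 * x + 1.0

def realtime_score(x: float) -> float:
    # Real-time endpoint pattern: one request in, one prediction
    # out, returned synchronously to the caller.
    return predict(x)

def batch_score(inputs: list[float], output_path: str) -> None:
    # Batch endpoint pattern: the whole dataset is scored offline
    # and the results are written to storage; the caller does not
    # wait on each individual prediction.
    predictions = [predict(x) for x in inputs]
    with open(output_path, "w") as f:
        for x, y in zip(inputs, predictions):
            f.write(f"{x},{y}\n")
```

The key difference mirrors the exam question: `realtime_score` answers a single request immediately, while `batch_score` consumes a dataset and persists its output, which is why "asynchronous predictions on a dataset" maps to a batch inference pipeline.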
Author: LeetQuiz Editorial Team
You are using Azure Machine Learning designer to create a training pipeline for a regression model. You need to prepare the pipeline to be deployed as an endpoint that performs asynchronous batch predictions on a dataset. What should you do?
A. Clone the training pipeline.
B. Create a batch inference pipeline from the training pipeline.
C. Create a real-time inference pipeline from the training pipeline.
D. Replace the dataset in the training pipeline with an Enter Data Manually module.