
Answer-first summary for fast verification
Answer: Create a batch inference pipeline
The scenario requires nightly batch inference over a large volume of files, which is exactly what Azure Machine Learning batch inference pipelines are designed for: high-throughput, asynchronous processing of large datasets, published as REST endpoints that can be invoked on a schedule. Option A is therefore correct. Option B is necessary but insufficient on its own; setting the compute target to an inference cluster does not by itself produce a pipeline that can be published for batch scoring. Option C (a real-time inference pipeline) targets low-latency, single-request scoring, not large-volume batch processing. Option D (cloning the pipeline) merely copies the training pipeline and does not address publishing it for inference.
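Once a batch inference pipeline is published, it exposes a REST endpoint that a scheduler can call nightly with an Azure AD bearer token. A minimal sketch of building that call, assuming a placeholder endpoint URL and token (the function name and parameter values here are illustrative, but the JSON body shape, with `ExperimentName` and optional `ParameterAssignments`, follows the published-pipeline REST contract):

```python
def build_pipeline_run_request(endpoint_url, aad_token, experiment_name, parameters=None):
    """Assemble the pieces of a POST request that triggers a published
    Azure ML pipeline endpoint. All concrete values are placeholders."""
    headers = {"Authorization": f"Bearer {aad_token}"}
    body = {"ExperimentName": experiment_name}
    if parameters:
        # Optional pipeline parameters, e.g. the input dataset/folder name
        body["ParameterAssignments"] = parameters
    return endpoint_url, headers, body

# Hypothetical usage; the URL and token below are placeholders.
url, headers, body = build_pipeline_run_request(
    "https://<region>.api.azureml.ms/pipelines/v1.0/<pipeline-id>/PipelineRuns",
    "<aad-token>",
    "nightly-batch-scoring",
    {"batch_input_folder": "input-dataset"},
)
# A scheduler (Azure ML Schedule, Logic Apps, or cron) would then POST:
# requests.post(url, headers=headers, json=body)
```

The actual nightly trigger can be an Azure ML `Schedule` with a daily recurrence attached to the published pipeline, or any external scheduler that issues the POST above.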
Author: LeetQuiz Editorial Team
You are using Azure Machine Learning designer to create and run a training pipeline. This pipeline needs to be executed nightly to generate predictions from a large volume of files stored in a folder, which is defined as a dataset.
What should you do to publish the pipeline as a REST service for the nightly inferencing runs?
A. Create a batch inference pipeline
B. Set the compute target for the pipeline to an inference cluster
C. Create a real-time inference pipeline
D. Clone the pipeline