
Answer-first summary for fast verification
Answer: Define a PipelineParameter object for the pipeline and use it to specify the business group-specific input dataset for each pipeline run.
The correct answer is B because a PipelineParameter allows pipeline inputs to be set dynamically at run submission time. Each business group can point the published pipeline at its own input dataset location when submitting a run, with no code changes and no duplicate endpoints. Option A (multiple endpoints) is inefficient and does not scale: every group would need its own endpoint to manage. Option C (OutputFileDatasetConfig) configures where a step writes its output data; it does not parameterize inputs. Option D (local compute) defeats the purpose of publishing a centralized pipeline service and forgoes Azure ML's managed infrastructure.
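The pattern behind answer B can be sketched as follows. This is a minimal illustration using the Azure ML Python SDK v1 (`azureml-core`, `azureml-pipeline`): a `DataPath` default is wrapped in a `PipelineParameter`, bound into a scoring step, and the pipeline is published once for all groups. The workspace config file, the datastore name, the `score.py` script, and the `cpu-cluster` compute target are assumed to exist; their names are placeholders, not prescriptions. This sketch requires a live Azure ML workspace to actually run.

```python
from azureml.core import Workspace, Datastore
from azureml.data.datapath import DataPath, DataPathComputeBinding
from azureml.pipeline.core import Pipeline, PipelineParameter
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # assumes a config.json for the workspace
datastore = Datastore.get(ws, "workspaceblobstore")

# Default input location; each business group overrides this at submit time.
default_path = DataPath(datastore=datastore, path_on_datastore="scoring/default")
data_path_param = PipelineParameter(name="input_data", default_value=default_path)
input_binding = (data_path_param, DataPathComputeBinding(mode="mount"))

score_step = PythonScriptStep(
    name="batch-score",
    script_name="score.py",           # hypothetical scoring script
    arguments=["--input", input_binding],
    inputs=[input_binding],
    compute_target="cpu-cluster",     # hypothetical compute target name
    source_directory="./scripts",
)

pipeline = Pipeline(workspace=ws, steps=[score_step])
published = pipeline.publish(
    name="batch-scoring-pipeline",
    description="Shared batch inference pipeline with a parameterized input path",
)
```

Because the input location is a `PipelineParameter` rather than a hard-coded dataset, one published endpoint serves every business group.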
Author: LeetQuiz Editorial Team
You are using the Azure Machine Learning Python SDK to create a batch inference pipeline. You need to publish this pipeline for use by various business groups within your organization. Each group must be able to specify a distinct input data location for the pipeline to submit to the model for scoring.
What should you do to publish the pipeline?
A
Create multiple endpoints for the published pipeline service and have each business group submit jobs to its own endpoint.
B
Define a PipelineParameter object for the pipeline and use it to specify the business group-specific input dataset for each pipeline run.
C
Define an OutputFileDatasetConfig object for the pipeline and use the object to specify the business group-specific input dataset for each pipeline run.
D
Have each business group run the pipeline on local compute and use a local file for the input data.
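To see why option B satisfies the requirement, here is a hedged sketch of how a business group would submit a run to the already-published pipeline with its own input path. It uses the published pipeline's REST endpoint and the `DataPathAssignments` field of the submission payload; the endpoint URL, the bearer token, the datastore name, and the group's path are placeholders that each group would supply. This requires valid Azure credentials to run.

```python
import requests

# Placeholders: obtained from the published pipeline and the group's Azure AD auth.
rest_endpoint = "https://<region>.api.azureml.ms/pipelines/v1.0/...<pipeline-id>"
aad_token = "<bearer-token>"

# Each business group overrides the "input_data" PipelineParameter
# with its own datastore path when it submits a run.
response = requests.post(
    rest_endpoint,
    headers={"Authorization": f"Bearer {aad_token}"},
    json={
        "ExperimentName": "batch-score-group-a",
        "DataPathAssignments": {
            "input_data": {
                "DataStoreName": "workspaceblobstore",
                "RelativePath": "scoring/group-a",  # group-specific input location
            }
        },
    },
)
run_id = response.json().get("Id")
```

The same endpoint is shared by every group; only the parameter assignment differs per submission, which is exactly what options A and D fail to provide.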