
Answer-first summary for fast verification
Answer: Yes
The solution meets the goal because changing the compute_target parameter alters the fingerprint Azure ML computes for each step's configuration. Step reuse (caching) matches on that fingerprint, so neither step can be matched to a cached result from a previous run: both steps rerun from scratch and fully process the updated content of the data source. Note that this is a workaround rather than the most direct approach, which would be to set allow_reuse=False on each PythonScriptStep. The community discussion shows 100% consensus on answer A, with upvoted comments confirming that this approach works by bypassing step caching.
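To see why this works, here is a minimal self-contained sketch of the mechanism. It assumes, as a simplified model of Azure ML's step-reuse behavior, that the cache key is a hash over the step's configuration fields (the function name step_fingerprint and its fields are illustrative, not the actual SDK internals):

```python
import hashlib
import json

def step_fingerprint(script, arguments, compute_target, source_hash):
    # Hypothetical model of the step-reuse cache key: a hash over the
    # step's configuration. Changing ANY field (including compute_target)
    # yields a different key, so cached outputs from earlier runs can no
    # longer be matched and the step must rerun.
    config = {
        "script": script,
        "arguments": arguments,
        "compute_target": compute_target,
        "source_hash": source_hash,
    }
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

old = step_fingerprint("prep.py", ["--in", "data"], "cpu-cluster", "abc123")
new = step_fingerprint("prep.py", ["--in", "data"], "cpu-cluster-2", "abc123")
print(old != new)  # → True: a changed compute_target invalidates the cache
```

The same logic explains why this is only a workaround: pointing the steps at a new compute target changes the fingerprint as a side effect, whereas allow_reuse=False disables the cache lookup directly.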
Author: LeetQuiz Editorial Team
You create an Azure Machine Learning pipeline named "pipeline1" that has two steps containing Python scripts. The data processed by the first step is passed to the second step.
You update the content of the downstream data source for "pipeline1" and must run the pipeline again. You need to ensure the new run of "pipeline1" fully processes the updated content.
Solution: You change the value of the compute_target parameter of the PythonScriptStep object in the two steps.
Does this solution meet the goal?
A. Yes
B. No