You create an Azure Machine Learning pipeline named "pipeline1" that has two steps containing Python scripts. The data processed by the first step is passed to the second step.
You update the content of the data source for "pipeline1" and must run the pipeline again. You need to ensure that the new run of "pipeline1" fully processes the updated content.
Solution: You change the value of the compute_target parameter of the PythonScriptStep object in both steps.
Does this solution meet the goal?