You are using the Azure Machine Learning Python SDK to define a multi-step pipeline. During execution, you observe that some steps are skipped and cached output from a previous run is reused. You need to ensure that all steps run every time, regardless of whether the parameters or the source directory contents have changed.
Which two distinct methods accomplish this? Each correct answer presents a complete solution.
A. Use a PipelineData object that references a datastore other than the default datastore.
B. Set the regenerate_outputs property of the pipeline to True.
C. Set the allow_reuse property of each step in the pipeline to False.
D. Restart the compute cluster where the pipeline experiment is configured to run.
E. Set the outputs property of each step in the pipeline to True.
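
The two correct approaches (B and C) can be sketched as follows. This is a minimal sketch using the Azure ML SDK v1 pipeline API; the workspace, script, and compute target names are placeholders and are assumptions, not part of the question:

```python
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # assumes a config.json is present

# Method C: disable output caching per step so the step
# always executes, even if nothing has changed.
step = PythonScriptStep(
    name="train",                    # placeholder step name
    script_name="train.py",          # placeholder script
    compute_target="cpu-cluster",    # placeholder compute target
    source_directory="./src",
    allow_reuse=False,               # force this step to rerun every time
)

pipeline = Pipeline(workspace=ws, steps=[step])

# Method B: force regeneration of all step outputs for this
# submission, overriding any cached results at the pipeline level.
experiment = Experiment(ws, "pipeline-no-cache")
run = experiment.submit(pipeline, regenerate_outputs=True)
```

Note the difference in scope: `allow_reuse=False` is set per step and applies to every future run of that step, while `regenerate_outputs=True` is set per submission and disables reuse for all steps in that one run.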