
Answer-first summary for fast verification
Answer: Publish from the collaboration branch.
## Detailed Explanation

In Azure Data Factory (ADF), there are two distinct environments for pipeline execution:

### Development vs. Live Mode

- **Collaboration branch**: the development environment where changes are made and tested
- **Live mode**: the production environment where scheduled triggers actually execute pipelines

### The Problem

When you modified the copy activity's sink to point to the new storage account and merged the change into the collaboration branch, the change existed only in the development environment. The schedule trigger, however, always executes pipelines in **live mode**, which still held the old configuration pointing to the original storage account.

### Why Publishing Is Required

Publishing from the collaboration branch to live mode is necessary because:

- It deploys the updated pipeline configuration (with the new storage account) to the live environment
- The schedule trigger then uses the updated configuration on its next execution
- Without publishing, the live environment remains unchanged and continues writing to the old storage account

### Why the Other Options Are Incorrect

**B: Create a pull request** - Incorrect because the question states the change has already been merged into the collaboration branch, so pull request activity is complete.

**C: Modify the schedule trigger** - Unnecessary because the trigger is functioning correctly; the issue is the pipeline configuration in live mode, not the trigger configuration.

**D: Configure the change feed of the new storage account** - Irrelevant to the core problem. The issue is deploying the pipeline configuration, not storage account monitoring or change-tracking features.

### Best Practice

In ADF, changes made in the collaboration branch must be explicitly published to take effect in the live environment where scheduled triggers operate.
This separation ensures that development work doesn't accidentally impact production pipelines.
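Besides the **Publish** button in ADF Studio, teams that automate deployments can achieve the same result by deploying the ARM templates that ADF generates in its publish branch (`adf_publish` by default). The fragment below is a minimal, hedged sketch using the Azure CLI; the resource group name, factory name, and template file names follow ADF's default generated layout but are placeholders here, not values from the question.

```shell
# Deploy the ARM templates that ADF generates in the adf_publish branch.
# This pushes collaboration-branch changes into live mode, which is what
# the Publish button does in ADF Studio. All names are placeholders.
az deployment group create \
  --resource-group my-rg \
  --template-file ARMTemplateForFactory.json \
  --parameters ARMTemplateParametersForFactory.json \
  --parameters factoryName=my-data-factory
```

Either path (the Studio Publish button or an ARM deployment in a CI/CD pipeline) updates live mode, so the next scheduled trigger run picks up the new sink storage account.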
Author: LeetQuiz Editorial Team
You have an Azure Data Factory pipeline named Pipeline1 that includes a copy activity. The copy activity is configured to write data to an Azure Data Lake Storage Gen2 account, and the pipeline is triggered on a schedule. You modify the copy activity's sink to point to a new storage account and merge this change into the collaboration branch. After the next execution of Pipeline1, you find that the data was not copied to the new storage account. What should you do to ensure the data is copied to the new storage account?
A. Publish from the collaboration branch.
B. Create a pull request.
C. Modify the schedule trigger.
D. Configure the change feed of the new storage account.