
Answer-first summary for fast verification
Answer: Use the checkpointing feature to identify the point of failure and retry the load from that point.
In a production environment, maintaining data integrity and having a mechanism to handle failures is crucial. Option B is the correct approach: checkpointing identifies the point of failure and allows the load to be retried from that point, ensuring data is neither lost nor duplicated. Deleting the failed batch (Option A) could result in data loss; manually inspecting logs (Option C) is time-consuming and does not scale; and ignoring the failure (Option D) compromises data integrity entirely.
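The checkpoint-and-retry pattern described above can be sketched outside of any specific service. This is a minimal, hypothetical illustration (the checkpoint file name, `run_load`, and `load_fn` are made up for this sketch, not Azure Data Factory APIs): progress is persisted after each successful batch, so a retry resumes at the failed batch rather than reprocessing or skipping data.

```python
import json
import os

CHECKPOINT_FILE = "load_checkpoint.json"  # hypothetical checkpoint store

def read_checkpoint():
    """Return the index of the next batch to load (0 if no checkpoint exists)."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["next_batch"]
    return 0

def write_checkpoint(next_batch):
    """Persist progress so a retry resumes from the failed batch."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"next_batch": next_batch}, f)

def run_load(batches, load_fn):
    """Load batches in order, checkpointing after each success.

    On failure the checkpoint still points at the failed batch, so a
    later retry starts there: no batch is lost or loaded twice.
    """
    start = read_checkpoint()
    for i in range(start, len(batches)):
        load_fn(batches[i])       # may raise; checkpoint is not advanced
        write_checkpoint(i + 1)   # advance only after a confirmed success
```

Because the checkpoint is written only after a batch succeeds, the failure window leaves the checkpoint at the failed batch, which is exactly the "retry from that point" behavior the correct answer relies on.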
Author: LeetQuiz Editorial Team
In a scenario where you are tasked with managing a batch load process in Azure Data Factory, and you encounter a failure during the batch load, how would you handle the situation to ensure data integrity and provide a rollback mechanism?
A
Delete the failed batch and start a new one.
B
Use the checkpointing feature to identify the point of failure and retry the load from that point.
C
Manually inspect the logs and fix the issue before re-running the batch.
D
Ignore the failure and continue with the next batch load.