
Answer-first summary for fast verification
Answer: Azure Data Factory activity runs in Azure Monitor
## Detailed Explanation

To examine pipeline failures from the last 180 days in Azure Data Factory, **Azure Data Factory activity runs in Azure Monitor** is the correct solution for several key reasons:

### Why Option D Is Correct

1. **Extended retention period**: Azure Monitor provides long-term retention of diagnostic logs and metrics. While Azure Data Factory's native monitoring interface retains pipeline run data for only 45 days, Azure Monitor can retain data for much longer periods, including the required 180 days.
2. **Comprehensive failure analysis**: Azure Monitor captures detailed activity run information, including failure details, error messages, execution times, and resource consumption metrics. This enables thorough investigation of pipeline failures over the extended timeframe.
3. **Diagnostic settings integration**: By configuring diagnostic settings in Azure Data Factory to send pipeline and activity run logs to a Log Analytics workspace, you can query and analyze failures using Kusto Query Language (KQL) across the full 180-day period.
4. **Advanced querying capabilities**: Azure Monitor Log Analytics provides powerful querying capabilities that let you filter specifically for failed pipeline runs, analyze failure patterns, and create custom alerts for recurring issues.

### Why the Other Options Are Incorrect

- **Option A (Activity log blade)**: The Activity log primarily tracks resource-level operations (create, update, delete) and subscription-level events, not detailed pipeline execution failures over extended periods.
- **Option B (Pipeline runs in the Azure Data Factory user experience)**: While this shows recent pipeline runs, Azure Data Factory's native interface retains pipeline run history for only 45 days, making it insufficient for the 180-day requirement.
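The KQL querying described above can be sketched as follows. This is a minimal example, assuming diagnostic settings send logs in resource-specific mode so that the `ADFPipelineRun` table is populated (in the legacy `AzureDiagnostics` mode, the table and column names differ):

```kusto
// Failed pipeline runs over the last 180 days, grouped by pipeline,
// assuming resource-specific destination tables (ADFPipelineRun).
ADFPipelineRun
| where TimeGenerated > ago(180d)
| where Status == "Failed"
| summarize FailureCount = count(), LastFailure = max(TimeGenerated) by PipelineName
| order by FailureCount desc
```

Note that the query only returns data from the point diagnostic settings were enabled onward, and only for as long as the workspace's retention setting allows, so retention must be configured to at least 180 days.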
- **Option C (Resource health blade)**: This monitors the health status of the Data Factory resource itself (availability, performance), not individual pipeline execution failures or their historical data.

### Best Practice Implementation

To implement this solution effectively:

1. Configure diagnostic settings in your Azure Data Factory to send pipeline runs and activity runs to a Log Analytics workspace.
2. Use Log Analytics queries to filter for failed pipeline runs within the 180-day window.
3. Create custom workbooks or dashboards for ongoing monitoring of pipeline failures.
4. Set up alerts for critical pipeline failures to enable proactive issue resolution.

This approach aligns with Azure data engineering best practices for long-term monitoring and troubleshooting of data pipeline performance and reliability.
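Step 1 above can be sketched with the Azure CLI. This is a non-authoritative example; the subscription, resource group, factory, workspace, and setting names are all placeholders you must replace with your own:

```shell
# Send ADF pipeline and activity run logs to a Log Analytics workspace.
# All <...> values below are placeholders, not real resource names.
az monitor diagnostic-settings create \
  --name adf-to-log-analytics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --export-to-resource-specific true \
  --logs '[{"category": "PipelineRuns", "enabled": true},
           {"category": "ActivityRuns", "enabled": true}]'
```

The `--export-to-resource-specific true` flag routes logs into dedicated tables such as `ADFPipelineRun`, which keeps queries simpler than the shared `AzureDiagnostics` table.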
Author: LeetQuiz Editorial Team
You have an Azure Data Factory and need to review pipeline failures from the past 180 days. What should you use?
A. the Activity log blade for the Data Factory resource
B. Pipeline runs in the Azure Data Factory user experience
C. the Resource health blade for the Data Factory resource
D. Azure Data Factory activity runs in Azure Monitor