## Answer

**D: the Apache Spark applications option on the Monitor tab**
## Detailed Analysis

To review the log query output of Job1 after submission from Synapse Studio, we need to understand where Spark job logs and diagnostic data are stored and how they can be accessed.

### Option Analysis

**Option A: the files in the result subfolder of Container1** - **Incorrect**: Container1 receives the data output of the Spark job, not logs. The result subfolder contains the actual data written by Job1, not the log query output or execution logs.

**Option B: the Spark monitoring URL returned after Job1 is submitted** - **Incorrect**: This URL is useful for live tracking while the job runs, but it is not the primary place to review the complete log query output after the job finishes.

**Option C: a table in Workspace2** - **Incorrect**: Workspace2 (the Log Analytics workspace) does receive diagnostics from Workspace1, but the data lands in Log Analytics tables that must be queried manually with KQL. That is not the direct way to review the log query output from within Synapse Studio.
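For context, inspecting the same diagnostics in Workspace2 would look roughly like the KQL sketch below. The table name `SynapseBigDataPoolApplicationsEnded` comes from Azure Monitor's resource-specific log schema for Synapse, but the exact table and column names depend on your diagnostic settings, so treat this as an assumption to verify in your own workspace:

```kusto
// Hypothetical query against Workspace2 (Log Analytics).
// Table and column names are assumptions based on the Synapse
// resource-specific diagnostic log schema; verify before use.
SynapseBigDataPoolApplicationsEnded
| where ApplicationName == "Job1"
| order by TimeGenerated desc
```

This illustrates why option C is the indirect route: it requires leaving Synapse Studio and hand-writing a query, whereas the Monitor tab surfaces the same application logs with no query authoring.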
**Option D: the Apache Spark applications option on the Monitor tab** - **Correct**: This is the right choice because:

- Synapse Studio's Monitor tab gives direct access to Spark application logs and execution details
- The Apache Spark applications section lists every submitted Spark application along with its logs, metrics, and execution information
- It includes a built-in log query view for reviewing job execution logs, errors, and performance metrics
- It is the native, integrated way to review Spark job logs in Synapse Studio, with no external tools or manual queries required

### Key Considerations

- The question specifically asks about reviewing the log query output **from Synapse Studio**
- Workspace1 is configured to send diagnostics to Workspace2, but the most direct and user-friendly way to review the logs is Synapse Studio's built-in monitoring
- The Monitor tab → Apache Spark applications path provides comprehensive log viewing with query capabilities designed for exactly this purpose
Author: LeetQuiz Editorial Team
You have an Azure subscription containing an Azure Synapse Analytics workspace named Workspace1, a Log Analytics workspace named Workspace2, and an Azure Data Lake Storage Gen2 container named Container1.
Workspace1 contains an Apache Spark job named Job1 that writes data to Container1. Workspace1 is configured to send diagnostics to Workspace2.
After submitting Job1 from Synapse Studio, what should you use to review the log query output of the job?
**A.** the files in the result subfolder of Container1
**B.** the Spark monitoring URL returned after Job1 is submitted
**C.** a table in Workspace2
**D.** the Apache Spark applications option on the Monitor tab