
Answer: **B** - Stage’s detail screen and Executor’s log files
## Explanation

In the Spark UI, the two primary indicators that a partition is spilling to disk are found in:

1. **Stage's detail screen** - shows per-stage metrics, including the spill columns (Spill (Memory) and Spill (Disk) bytes)
2. **Executor's log files** - contain detailed log messages about spill events during task execution

### Why the other options are incorrect:

- **A**: The Query's detail screen provides high-level query information, not partition-level spill metrics, and the Job's detail screen lists stages without spill detail
- **C**: The Driver's log files do not typically contain partition-level spill information; spilling happens at the executor level
- **D**: The Executor's detail screen shows resource usage but not specific spill indicators
- **E**: The Query's detail screen does not show partition-level spill details

### Key Points:

- Spilling occurs when data does not fit in memory and must be written to disk
- The Stage detail screen in the Spark UI provides aggregated spill statistics for each stage
- Executor logs contain detailed messages about when and how much data is spilled during task execution
- Monitoring these indicators helps identify memory pressure and tune Spark job performance
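The same spill columns shown on the Stage detail screen are also exposed through Spark's monitoring REST API (`GET /api/v1/applications/{app-id}/stages`, which includes `memoryBytesSpilled` and `diskBytesSpilled` per stage), so spill can be checked programmatically. A minimal sketch; the sample payload below is invented for illustration, not output from a real application:

```python
# Sketch: flag spilling stages from a Spark monitoring REST API response.
# memoryBytesSpilled / diskBytesSpilled are real fields in the /stages
# endpoint's JSON; the sample data below is hypothetical.

def find_spilling_stages(stages):
    """Return (stageId, name, diskBytesSpilled) for stages that spilled."""
    return [
        (s["stageId"], s["name"], s["diskBytesSpilled"])
        for s in stages
        if s.get("diskBytesSpilled", 0) > 0 or s.get("memoryBytesSpilled", 0) > 0
    ]

# Hypothetical /stages response for illustration:
sample_stages = [
    {"stageId": 0, "name": "scan parquet",
     "memoryBytesSpilled": 0, "diskBytesSpilled": 0},
    {"stageId": 1, "name": "sortMergeJoin",
     "memoryBytesSpilled": 268435456, "diskBytesSpilled": 134217728},
]

for stage_id, name, spilled in find_spilling_stages(sample_stages):
    print(f"Stage {stage_id} ({name}) spilled {spilled} bytes to disk")
```

In a live application the stage list would come from an HTTP request to the Spark UI's REST endpoint rather than a hardcoded sample; the filtering logic is the same either way.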
Author: Keng Suppaseth
Where in the Spark UI are two of the primary indicators that a partition is spilling to disk?
A. Query’s detail screen and Job’s detail screen
B. Stage’s detail screen and Executor’s log files
C. Driver’s and Executor’s log files
D. Executor’s detail screen and Executor’s log files
E. Stage’s detail screen and Query’s detail screen