
Databricks Certified Data Engineer - Professional
In the Spark UI, where can you find two main indicators that a partition is spilling to disk during the execution of wide transformations?
Exam-Like
Explanation:
To diagnose spill in Spark, the two primary indicators are found in the Stage's detail screen and the Executor's log files. The Stage's detail screen provides per-task metrics such as 'Shuffle spill (memory)' and 'Shuffle spill (disk)', which directly indicate spill activity. The Executor's log files may contain explicit messages about data being spilled to disk. The other options are incorrect because spill metrics are reported per task (they do not appear in the Query, Job, or Driver detail views), and the Executor's detail screen does not expose per-partition spill data.
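The same counters the Stage detail screen displays are exposed programmatically through task metrics, so spill can also be surfaced from application code. Below is a minimal sketch (not from the exam material) that registers a custom SparkListener to log the memoryBytesSpilled and diskBytesSpilled values after each task; the class name SpillListener, the app name, and the example workload are illustrative assumptions.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
import org.apache.spark.sql.SparkSession

// Illustrative listener: reports the same spill counters shown as
// "Shuffle spill (memory)" / "Shuffle spill (disk)" in the Stage detail screen.
class SpillListener extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val m = taskEnd.taskMetrics
    if (m != null && (m.memoryBytesSpilled > 0 || m.diskBytesSpilled > 0)) {
      println(
        s"Stage ${taskEnd.stageId}, task ${taskEnd.taskInfo.taskId}: " +
        s"spilled ${m.memoryBytesSpilled} bytes (memory), " +
        s"${m.diskBytesSpilled} bytes (disk)")
    }
  }
}

object SpillCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spill-check")      // assumed local demo session
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    spark.sparkContext.addSparkListener(new SpillListener)

    // A wide transformation (groupBy + count) that may spill to disk
    // when a partition exceeds available execution memory.
    spark.range(0, 10000000L)
      .withColumn("key", ($"id" % 1000).cast("string"))
      .groupBy("key")
      .count()
      .collect()

    spark.stop()
  }
}
```

Whether this example workload actually spills depends on executor memory and partition sizes; the point is that non-zero memoryBytesSpilled or diskBytesSpilled values in the task metrics correspond directly to the spill indicators visible in the Stage detail screen and the executor logs.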