Databricks Certified Data Engineer - Professional Quiz - LeetQuiz

You are troubleshooting a production ETL job on an Azure Databricks all-purpose cluster (autoscaling enabled, Delta Lake tables) that processes terabytes of event data through wide transformations, high-cardinality joins, and aggregations. The job shows inconsistent run durations, occasional timeouts, and elevated DBU costs despite autoscaling, yet no task failures. You analyze the Spark UI: the Jobs tab (Event Timeline), the Stages tab (summary/task metrics and shuffle read/write stats), and the Executors tab. Which option best describes the key insights these views provide and their primary application to optimizing runtime, reducing DBU consumption, improving scalability predictability, and maintaining audit compliance?
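For context on what the Stages tab surfaces: its summary metrics show min/median/max task duration per stage, and a large gap between the max and median task is the classic signature of data skew in a shuffle stage. A minimal sketch of that diagnostic in plain Python (the `skew_ratio` helper and the 3-5x threshold are illustrative assumptions, not a Databricks or Spark API):

```python
from statistics import median

def skew_ratio(task_durations_s):
    """Ratio of the slowest task to the median task duration for a stage.

    A ratio well above ~3-5x typically indicates data skew: one shuffle
    partition (e.g. a hot join key) is doing most of the work while the
    rest of the cluster sits idle, inflating both runtime and DBU cost.
    """
    m = median(task_durations_s)
    return max(task_durations_s) / m if m else float("inf")

# Example: 199 tasks finish in ~10 s, one straggler takes 300 s.
durations = [10.0] * 199 + [300.0]
print(round(skew_ratio(durations), 1))  # → 30.0
```

In practice this is the signal that points toward remedies such as adaptive query execution's skew-join handling or salting the hot key, rather than simply adding more autoscaled workers.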