
Answer-first summary for fast verification
Answer: Implementing Databricks' native support for adaptive query execution to dynamically optimize plans without explicit historical analysis.
Adaptive Query Execution (AQE) in Spark automatically optimizes query plans at runtime based on statistics collected during execution. While AQE doesn't literally "analyze historical logs," it does dynamically adjust execution strategies (e.g., join strategies, shuffle partition sizes) to handle variability in data and execution times. Combined with the Databricks SQL Query History UI and Spark UI metrics, you can iteratively improve performance without building custom ML pipelines. AQE is native, low-maintenance, and designed for exactly this type of problem.

Enable AQE in Spark:

```python
spark.conf.set("spark.sql.adaptive.enabled", "true")
```
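Beyond the master switch, a few related settings control the specific runtime optimizations AQE performs. The sketch below groups the commonly tuned keys (these are real Spark 3.x configuration names; in recent Databricks Runtime versions AQE is already on by default, so treat this as illustrative rather than required). The `apply_aqe_settings` helper is our own naming, not a Databricks API:

```python
# Commonly tuned AQE settings (real Spark 3.x config keys; defaults may
# differ by Databricks Runtime version, where AQE is on by default).
AQE_SETTINGS = {
    # Master switch for adaptive query execution.
    "spark.sql.adaptive.enabled": "true",
    # Coalesce small shuffle partitions at runtime to cut task overhead.
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    # Split skewed shuffle partitions so one hot key can't stall a join.
    "spark.sql.adaptive.skewJoin.enabled": "true",
}


def apply_aqe_settings(spark):
    """Apply the AQE settings above to an active SparkSession."""
    for key, value in AQE_SETTINGS.items():
        spark.conf.set(key, value)
```

In a Databricks notebook, a `SparkSession` named `spark` already exists, so `apply_aqe_settings(spark)` is enough; no cluster restart is needed because these are session-level SQL configs.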
Author: LeetQuiz Editorial Team
You've noticed that SQL queries in Azure Databricks notebooks have inconsistent execution times. What method would you use to automatically improve these queries' performance by analyzing past execution patterns?
A. Setting up an Azure Logic Apps workflow to periodically review query performance logs and suggest indexes or optimizations in a report.

B. Utilizing Azure Monitor to track query performance over time and manually adjusting queries based on observed trends.

C. Implementing Databricks' native support for adaptive query execution to dynamically optimize plans without explicit historical analysis.

D. Developing a machine learning model within Databricks to analyze query execution logs, predict optimal configurations, and apply adjustments.