Databricks Certified Data Engineer - Professional Quiz - LeetQuiz

You are optimizing a large Delta Lake table (~8 TB) that serves frequent **ad-hoc exploratory queries** from analysts in a cost-sensitive environment. The table contains clickstream data with high-cardinality filter columns (user_id, session_id), medium-cardinality columns (event_type, country_code), and a commonly used time-based column (event_date, spanning ~3 years of daily data).

Queries often filter on combinations of event_date + event_type or event_date + country_code, with occasional filters on user_id ranges or session patterns, but almost never equality filters on the high-cardinality IDs alone. Streaming ingestion has left the table with thousands of small files per partition, and analysts frequently complain about slow response times on Databricks SQL warehouses (serverless, Photon-enabled).

You must significantly improve query performance and reduce DBU consumption while preserving full ACID compliance, time travel, schema evolution, and governance controls (including row-level security via Unity Catalog). Which of the following approaches represents the **most effective single action** (or primary strategy) under these constraints?
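For context while weighing the answer choices (this is an illustration, not an endorsement of any particular option), the kinds of Delta Lake table-layout operations a scenario like this contrasts are typically expressed in Databricks SQL as follows. The table name `clickstream.events` is hypothetical; the column names come from the scenario above:

```sql
-- One candidate strategy: compact small files and co-locate data
-- on the commonly filtered columns with Z-ordering.
OPTIMIZE clickstream.events
ZORDER BY (event_date, event_type, country_code);

-- Another candidate strategy: Liquid Clustering, which replaces static
-- partitioning/ZORDER with clustering keys that can evolve later
-- without rewriting the whole table.
ALTER TABLE clickstream.events
CLUSTER BY (event_date, event_type, country_code);
OPTIMIZE clickstream.events;
```

Both operations are ordinary Delta transactions, so ACID guarantees, time travel, schema evolution, and Unity Catalog governance (including row filters) are unaffected by either; the question is which layout strategy best matches the stated query patterns and cost constraints.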