What is the most effective strategy for managing log compaction in a Delta Lake table with extensive transaction logs without losing historical data?
A. Disabling the retention duration check by setting spark.databricks.delta.retentionDurationCheck.enabled to false
B. Organizing log files using the OPTIMIZE command with ZORDER
C. Manually deleting older log files that exceed the retention period
D. Using the VACUUM command with a retention period that optimizes storage while preserving history
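For context, the approach described in option D is a single SQL command. A minimal sketch, assuming a hypothetical Delta table named events and an illustrative 30-day (720-hour) window, neither of which comes from the question itself:

```sql
-- Reclaim storage by removing data files that are no longer referenced
-- by the transaction log AND are older than the retention window,
-- while keeping enough history for time travel within that window.
-- The table name 'events' and the 720-hour window are illustrative.
VACUUM events RETAIN 720 HOURS;

-- Optional safety step: DRY RUN lists the files that would be deleted
-- without actually removing anything.
VACUUM events RETAIN 720 HOURS DRY RUN;
```

By contrast, setting spark.databricks.delta.retentionDurationCheck.enabled to false (option A) only disables the safety check that blocks retention windows shorter than the 7-day default; paired with a short window it can delete files still needed for time travel, which is exactly the historical-data loss the question asks to avoid.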