
A data engineering team is using Databricks to write data to a Delta table named 'sensor_readings'. Their goal is to ensure that any duplicate records, based on a specific key column, are removed before writing to the table. Which of the following code snippets should they use to achieve this efficiently?
A
sensor_readings.distinct().write.format('delta').mode('overwrite').saveAsTable('sensor_readings')
B
spark.sql('SELECT DISTINCT * FROM sensor_readings').write.format('delta').mode('overwrite').saveAsTable('sensor_readings')*
C
sensor_readings.write.format('delta').mode('upsert').option('key_column', 'value').save()
D
sensor_readings.dropDuplicates(['key_column']).write.format('delta').mode('append').saveAsTable('sensor_readings')
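For context on what separates the options: `SELECT DISTINCT *` (option B) removes only rows that are identical across every column, while `dropDuplicates(['key_column'])` (option D) keeps one row per key value even when other columns differ. A minimal pure-Python sketch of the key-based semantics, using hypothetical sample data (the function name and sample rows are illustrative, not part of any API):

```python
def drop_duplicates_by_key(rows, key):
    """Keep the first record seen for each key value,
    approximating PySpark's DataFrame.dropDuplicates([key])."""
    seen = set()
    result = []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            result.append(row)
    return result

readings = [
    {"sensor_id": "s1", "value": 20.1},
    {"sensor_id": "s2", "value": 18.4},
    {"sensor_id": "s1", "value": 20.3},  # same key, different value
]

deduped = drop_duplicates_by_key(readings, "sensor_id")
# One row survives per sensor_id, so two rows remain;
# full-row DISTINCT would have kept all three.
```

Note that in Spark, `dropDuplicates` keeps an arbitrary row per key (the result is not guaranteed to be the first), whereas this sketch deterministically keeps the first occurrence.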