You are working with a Delta Lake table 'transactions' that contains duplicate rows. Write a PySpark code snippet to deduplicate the rows based on all columns and save the result back to the same table.
Explanation:
The correct answer is A: calling `dropDuplicates()` with no column list compares rows on all columns, so exact duplicates are removed. The deduplicated result is then written back to the same Delta Lake table in overwrite mode.