A data engineer wants to incrementally ingest JSON data into a Delta table in near real-time. Which method correctly achieves this?
A
spark.readStream.format('cloudFiles').option('cloudFiles.format', 'json').load(source_path).writeStream.option('checkpointLocation', checkpointPath).start('target_table')
B
spark.readStream.format('autoloader').option('autoloader.format', 'json').load(source_path).writeStream.option('checkpointLocation', checkpointPath).trigger(real-time=True).start('target_table')
C
spark.readStream.format('autoloader').option('autoloader.format', 'json').load(source_path).writeStream.option('checkpointLocation', checkpointPath).start('target_table')
D
spark.readStream.format('cloudFiles').option('cloudFiles.format', 'json').load(source_path).writeStream.trigger(real-time=True).start('target_table')
E
spark.readStream.format('cloudFiles').option('cloudFiles.format', 'json').load(source_path).writeStream.trigger(availableNow=True).start('target_table')
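Option A matches the documented Auto Loader pattern: the source format is named 'cloudFiles' (there is no 'autoloader' format, which rules out B and C), trigger(real-time=True) is not valid syntax (ruling out B and D), and trigger(availableNow=True) in E processes the current backlog and then stops, which is an incremental batch pattern rather than near real-time. A streaming write with a checkpoint location runs continuous micro-batches by default. A minimal sketch of option A, assuming a Databricks/PySpark environment where spark, source_path, and checkpointPath are already defined:

```python
# Auto Loader sketch: incrementally discover and ingest new JSON files.
# Assumes an active SparkSession on Databricks; source_path and
# checkpointPath are placeholders for real storage locations.
(spark.readStream
      .format("cloudFiles")                          # Auto Loader source
      .option("cloudFiles.format", "json")           # format of incoming files
      .load(source_path)                             # directory to monitor
      .writeStream
      .option("checkpointLocation", checkpointPath)  # tracks ingestion progress
      .start("target_table"))                        # default trigger: continuous micro-batches
```

Because no trigger is specified, the stream processes new files as they arrive in repeated micro-batches, which is what makes this option near real-time.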