
Answer-first summary for fast verification
Answer: `trigger(processingTime="2 minutes")`
In Spark Structured Streaming, the `trigger` method's `processingTime` argument controls how often a micro-batch is executed; it accepts a duration string such as `"2 minutes"`. The correct call is therefore `trigger(processingTime="2 minutes")`. Reference: [Structured Streaming Triggers](https://docs.databricks.com/structured-streaming/triggers.html#configure-structured-streaming-trigger-intervals).
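Plugged into the question's query, the answer looks like the sketch below. This assumes an active `spark` session, an existing `orders` table, and a `checkpointPath` variable, all taken from the question itself; it will not run standalone without a Spark cluster.

```python
# Sketch only: `spark`, the "orders" table, and `checkpointPath`
# are assumed to exist, as in the question.
from pyspark.sql.functions import col

(spark.table("orders")
    .withColumn("total_after_tax", col("total") + col("tax"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .trigger(processingTime="2 minutes")  # run one micro-batch every 2 minutes
    .table("new_orders"))
```

If `trigger` is omitted entirely (option E), Spark falls back to its default behavior of starting the next micro-batch as soon as the previous one finishes, rather than on a fixed 2-minute schedule.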
Author: LeetQuiz Editorial Team
Given the following Structured Streaming query:
spark.table("orders")
.withColumn("total_after_tax", col("total") + col("tax"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.______________
.table("new_orders")
Fill in the blank to make the query execute a micro-batch to process data every 2 minutes.
A
trigger(once="2 minutes")
B
trigger(processingTime="2 minutes")
C
processingTime("2 minutes")
D
trigger("2 minutes")
E
trigger()