A data engineer has set up a Structured Streaming job to read from a table, aggregate the data, and then perform a streaming write into a new table. The code block used is as follows:
(spark.readStream.table("sales")
    .groupBy("store")
    .agg(sum("sales").alias("sum_sales"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("complete")
    .______
    .toTable("aggregatedSales")
)
If the goal is to execute only a single micro-batch to process all available data, which line of code should fill in the blank?
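For reference, Structured Streaming controls micro-batch scheduling through DataStreamWriter.trigger(). Below is a minimal, self-contained sketch of the pipeline from the question with a trigger filled in; the sales source table, checkpointPath, and aggregatedSales target come from the question, while the Spark session setup and checkpoint directory are illustrative placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import sum as sum_  # alias to avoid shadowing Python's built-in sum

    spark = SparkSession.builder.appName("aggregated-sales").getOrCreate()

    # Placeholder checkpoint directory; any durable path works in practice.
    checkpointPath = "/tmp/checkpoints/aggregated_sales"

    query = (
        spark.readStream.table("sales")                  # streaming read from the source table
        .groupBy("store")
        .agg(sum_("sales").alias("sum_sales"))
        .writeStream
        .option("checkpointLocation", checkpointPath)
        .outputMode("complete")                          # rewrite the full aggregate on each batch
        .trigger(once=True)                              # run exactly one micro-batch over all available data
        .toTable("aggregatedSales")
    )

    query.awaitTermination()  # returns once the single micro-batch completes

Note that trigger(once=True) runs all available data in a single micro-batch, whereas trigger(availableNow=True), added in Spark 3.3, also processes all available data and then stops but may split the work across multiple micro-batches.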