A data engineer has set up a Structured Streaming job to read from a table, process the data, and then write it into a new table in a streaming fashion. The code snippet used is as follows:
from pyspark.sql.functions import col

# Stream from the source table, derive avg_price, and continuously write to new_sales.
(spark.readStream
    .table("sales")
    .withColumn("avg_price", col("sales") / col("units"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("complete")
    .toTable("new_sales"))
If no trigger method is specified in the code, what default processingTime interval will the system use before processing the next micro-batch of data?
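
For context, the trigger is configured on the DataStreamWriter via .trigger(). Below is a minimal, self-contained sketch (the table names, column names, and checkpoint path are hypothetical) showing an explicit processingTime trigger alongside the implicit default: per the Spark documentation, omitting .trigger() causes each micro-batch to start as soon as the previous one completes, and the Databricks documentation describes this default as a 500 ms trigger interval.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
checkpointPath = "/tmp/checkpoints/new_sales"  # hypothetical checkpoint directory

# Streaming read from the source table, deriving the average price per unit.
streaming_df = (spark.readStream
    .table("sales")
    .withColumn("avg_price", col("sales") / col("units")))

# Explicit trigger: start a new micro-batch every 5 seconds. Omitting the
# .trigger(...) line falls back to the default, which starts each micro-batch
# as soon as the previous one finishes (documented on Databricks as 500 ms).
query = (streaming_df.writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")  # append mode here; complete mode requires an aggregation
    .trigger(processingTime="5 seconds")
    .toTable("new_sales"))

Removing the .trigger(...) line reproduces the scenario described in the question.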