As part of a batch ingestion pipeline, a data engineer is using the following code block to read from a Delta table named `transactions`. Given that this code block currently works for batch data, what modification is necessary so that it also functions when the `transactions` table is used as a streaming source?
transactions_df = (spark.read.schema(schema)
    .format("delta")
    .table("transactions")
)