A production Structured Streaming job is incurring higher-than-expected cloud storage costs. Each microbatch completes in under 3 seconds during normal execution, yet at least 12 microbatches per minute contain zero records. The streaming write uses the default trigger settings. The job runs in a workspace where instance pools are provisioned to minimize startup time for batch jobs, alongside many other Databricks jobs.
Assuming all other variables remain constant and that records must be processed within 10 minutes of arrival, which configuration change meets this latency requirement while addressing the cost issue?
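For context, the kind of adjustment this question probes can be sketched as follows. With the default trigger, a new microbatch starts as soon as the previous one finishes, so an idle stream still fires frequent empty batches, each issuing cloud-storage list and checkpoint writes. A fixed processing-time trigger lengthens the interval between batches while staying well inside the 10-minute latency bound. The DataFrame name and paths below are hypothetical, and the snippet is a configuration sketch, not a complete job:

```python
# Hypothetical sketch: `events_df`, the checkpoint path, and the output path
# are assumptions, not values from the question.
(
    events_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
    # Fire a microbatch at most once every 5 minutes instead of continuously;
    # this sharply reduces empty-batch storage operations while keeping
    # end-to-end latency under the 10-minute requirement.
    .trigger(processingTime="5 minutes")
    .start("/tmp/output/events")  # assumed output path
)
```

Any interval up to the latency bound would reduce empty microbatches; the trade-off is simply between cost (fewer, larger batches) and freshness (more frequent batches).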