Consider a scenario where you are processing time series data from smart home devices using Spark Structured Streaming. The data consists of energy consumption readings from various devices. How would you structure your Spark job to process this data efficiently, including how you would handle data both across partitions and within a single partition? Also describe how you would manage potential schema changes in the incoming data.