
You are designing a scalable, cost-effective data processing solution on Google Cloud that must handle large volumes of both historical batch data and real-time streaming data. The solution must minimize operational complexity and infrastructure costs while remaining highly available and scalable. Given these requirements, how does Google Cloud Dataflow best support both batch and streaming data processing? Choose the best option. (An illustrative pipeline sketch follows the options.)
A. By requiring separate infrastructure setups for batch and streaming processing, thus allowing specialized optimization for each type.
B. By offering a unified programming model through Apache Beam that seamlessly handles both batch and streaming data within the same pipeline.
C. By providing distinct APIs for batch and streaming processing, enabling developers to choose the most suitable approach for their specific needs.
D. By leveraging Cloud Functions for streaming data and Cloud Run for batch processing, thereby utilizing serverless technologies for cost efficiency.
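
For reference, the unified model described in option B can be illustrated with a short Apache Beam (Python SDK) sketch. This is a minimal illustration, not a production pipeline: the Pub/Sub topic, Cloud Storage path, and the count_words helper below are hypothetical placeholders. The key point is that the same transform chain runs unchanged over a bounded (batch) source or an unbounded (streaming) source, and Dataflow can execute either variant on the same managed service:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows


def count_words(lines):
    """Core logic shared by both the batch and streaming variants."""
    return (
        lines
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "Window" >> beam.WindowInto(FixedWindows(60))  # 60-second fixed windows
        | "Count" >> beam.CombinePerKey(sum)
    )


def run(streaming: bool):
    # The same pipeline code serves both modes; only the source differs.
    options = PipelineOptions(streaming=streaming)
    with beam.Pipeline(options=options) as p:
        if streaming:
            # Unbounded source: real-time messages from Pub/Sub (placeholder topic).
            lines = (
                p
                | "ReadPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/my-project/topics/my-topic")
                | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
            )
        else:
            # Bounded source: historical files in Cloud Storage (placeholder path).
            lines = p | "ReadGCS" >> beam.io.ReadFromText("gs://my-bucket/history/*.txt")

        counts = count_words(lines)
        counts | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}") | "Print" >> beam.Map(print)

Because both variants share one programming model and one managed runner, there is no need for the separate infrastructure or distinct APIs suggested by the other options.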