
In a big data processing environment, you are tasked with optimizing the performance of a data pipeline that processes a large volume of data with varying data types. How would you approach this task to ensure efficient processing and resource utilization?
A. Implement a single, generic data processing function that can handle all data types.
B. Create separate processing functions for different data types and dynamically select the appropriate function based on the data type.
C. Use a fixed schema for all data, regardless of its original structure.
D. Leverage a schema-on-read approach to handle varying data types efficiently.
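For readers weighing option B, here is a minimal, illustrative Python sketch of the dispatch pattern it describes: one specialized function per data type, selected at runtime. The registry, handler names, and record layout are hypothetical choices for this example, not part of the question.

```python
from typing import Any, Callable, Dict

# Hypothetical registry mapping a record's declared type to its handler.
_HANDLERS: Dict[str, Callable[[dict], Any]] = {}

def handler(data_type: str):
    """Register a processing function for a given data type."""
    def decorator(fn: Callable[[dict], Any]) -> Callable[[dict], Any]:
        _HANDLERS[data_type] = fn
        return fn
    return decorator

@handler("json")
def process_json(record: dict) -> dict:
    # Specialized logic for semi-structured records: drop null fields.
    return {k: v for k, v in record["payload"].items() if v is not None}

@handler("csv")
def process_csv(record: dict) -> list:
    # Specialized logic for delimited text records: split into fields.
    return record["payload"].split(",")

def process(record: dict) -> Any:
    """Dynamically select the appropriate handler based on the data type."""
    fn = _HANDLERS.get(record["type"])
    if fn is None:
        raise ValueError(f"No handler registered for type {record['type']!r}")
    return fn(record)

if __name__ == "__main__":
    print(process({"type": "csv", "payload": "a,b,c"}))              # ['a', 'b', 'c']
    print(process({"type": "json", "payload": {"x": 1, "y": None}})) # {'x': 1}
```

Option D, by contrast, defers structure entirely: a schema-on-read engine (Spark, Hive, Presto, and similar tools) stores data in its raw form and interprets or infers the schema only at query time, which is why it is often favored when incoming data types vary.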