
A junior data engineer is implementing logic for a Lakehouse table called silver_device_recordings. The source data consists of 100 unique fields in a deeply nested JSON structure.
The silver_device_recordings table will serve downstream applications, including multiple production monitoring dashboards and a production model. Currently, 45 out of the 100 fields are utilized in at least one of these applications.
Given the highly nested schema and large number of fields, the data engineer is evaluating the optimal approach for schema declaration.
Which of the following statements about Delta Lake and Databricks is relevant to this decision?
A. The Tungsten encoding used by Databricks is optimized for storing string data; newly added native support for querying JSON strings means that string types are always most efficient.
B. Because Delta Lake uses Parquet for data storage, data types can be easily evolved simply by modifying file footer information in place.
C. Schema inference and evolution on Databricks ensure that inferred types will always accurately match the data types used by downstream systems.
D. Because Databricks will infer schema using types that allow all observed data to be processed, setting types manually provides greater assurance of data quality enforcement.
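The manual schema declaration described in option D can be sketched as follows. This is a minimal illustration, assuming a Spark environment; the field names (`device_id`, `recorded_at`, `readings`) are hypothetical and not taken from the question. PySpark's `DataFrameReader.schema()` accepts a DDL-formatted string, so the schema itself can be defined with no Spark dependency:

```python
# Hypothetical explicit DDL schema for silver_device_recordings.
# Field names are illustrative only; a real table would declare
# all 100 source fields (or at least the 45 consumed downstream).
silver_schema_ddl = """
    device_id STRING,
    recorded_at TIMESTAMP,
    readings STRUCT<
        temperature: DOUBLE,
        humidity: DOUBLE
    >
"""

# In a pipeline this string would be passed to the reader, e.g.:
# df = (spark.read
#         .schema(silver_schema_ddl)
#         .json("/mnt/raw/device_recordings/"))
#
# With an explicit schema, records that do not conform surface
# immediately (as nulls or parse failures, depending on reader
# mode), rather than silently widening an inferred type such as
# INT to STRING.

# Sanity check on the declared schema string.
assert "STRUCT" in silver_schema_ddl
```

The design point behind option D: inference chooses the most permissive type that accommodates everything it has observed so far, so a single malformed value can widen a column for every consumer; a declared schema pins the contract that the 45 downstream fields depend on.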