
You are implementing a batch dataset in Parquet format. Data files will be produced using Azure Data Factory and stored in Azure Data Lake Storage Gen2. The files will be consumed by an Azure Synapse Analytics serverless SQL pool. You need to minimize storage costs for the solution. What should you do?
A. Use Snappy compression for the files.
B. Use OPENROWSET to query the Parquet files.
C. Create an external table that contains a subset of columns from the Parquet files.
D. Store all data as string in the Parquet files.