
A company is performing an initial data load of multiple terabytes into Snowflake as part of a migration. They have control over the number and size of the source CSV extract files.
What is Snowflake's recommended approach for maximizing the performance of this data load?
A. Use auto-ingest Snowpipe to load large files in a serverless model.
B. Produce the largest files possible, reducing the overall number of files to process.
C. Produce a larger number of smaller files and process the ingestion with size Small virtual warehouses.
D. Use an external tool to issue batched row-by-row INSERT statements within BEGIN TRANSACTION and COMMIT commands.