
A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files must be left in place as-is and will accumulate in the directory. The data engineer needs to identify which files are new since the pipeline's previous run and configure the pipeline to ingest only those new files on each run.
Which of the following tools can the data engineer use to solve this problem?
A. Databricks SQL
B. Delta Lake
C. Unity Catalog
D. Data Explorer
E. Auto Loader
Explanation:
Correct answer: E. Auto Loader is built for exactly this scenario: incremental ingestion from a directory where files accumulate and must be left in place. It records which files have already been processed (in its checkpoint state) and ingests only files that are new since the previous run, so no files need to be moved, renamed, or deleted.
The other options do not address the requirement: Databricks SQL is a query and dashboarding service, Delta Lake is a storage layer and table format, Unity Catalog provides data governance, and Data Explorer is a UI for browsing catalog objects. None of them track and ingest only new files from a directory.
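As a rough sketch of what such a pipeline can look like: the function below configures an Auto Loader (`cloudFiles`) stream that ingests only new files and leaves the directory untouched. It assumes a Databricks runtime that supplies a `SparkSession`; the directory paths, file format, and table name are hypothetical placeholders, not values from the question.

```python
# Hedged sketch of an Auto Loader ingestion job, assuming a Databricks
# runtime. All paths and names below are hypothetical placeholders.

def start_autoloader(spark,
                     source_dir="/mnt/shared/source-dir",        # hypothetical
                     schema_dir="/mnt/pipeline/schema",          # hypothetical
                     checkpoint_dir="/mnt/pipeline/checkpoint",  # hypothetical
                     target_table="bronze.ingested_files"):      # hypothetical
    """Ingest only files not yet seen, leaving the source directory as-is."""
    stream = (
        spark.readStream
        .format("cloudFiles")                            # Auto Loader source
        .option("cloudFiles.format", "csv")              # format of source files
        .option("cloudFiles.schemaLocation", schema_dir) # where inferred schema is stored
        .load(source_dir)
    )
    return (
        stream.writeStream
        .option("checkpointLocation", checkpoint_dir)    # tracks already-processed files
        .trigger(availableNow=True)                      # process all new files, then stop
        .toTable(target_table)
    )
```

The checkpoint location is what makes the pipeline incremental: Auto Loader persists the set of discovered files there, so each `availableNow` run picks up only files that have arrived since the last run.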