
Answer-first summary for fast verification
Answer: E. Auto Loader
## Explanation

Auto Loader is designed specifically for this problem. Here's why:

### Auto Loader Features

- **Incremental file processing**: Auto Loader automatically tracks which files have been processed and loads only new files on subsequent runs.
- **File tracking**: It maintains state about processed files using checkpointing.
- **Efficient discovery**: It uses efficient file listing and directory scanning to identify new files.
- **Exactly-once processing**: Files are processed only once, even if the pipeline restarts.

### Why the Other Options Don't Fit

- **A. Databricks SQL**: a query service, not a file-ingestion tool.
- **B. Delta Lake**: handles incremental data processing as a storage layer, but does not by itself solve the problem of discovering new files.
- **C. Unity Catalog**: for data governance and cataloging, not file ingestion.
- **D. Data Explorer**: for exploring and visualizing data, not incremental file processing.

### Auto Loader Implementation

```python
# Example Auto Loader code for incremental file ingestion
df = (spark.readStream.format("cloudFiles")
      .option("cloudFiles.format", "json")
      .load("path/to/shared/directory"))

# The checkpoint location is where Auto Loader records which files
# have already been ingested, so each run picks up only new files.
(df.writeStream
   .option("checkpointLocation", "path/to/checkpoint")
   .start("path/to/target/delta/table"))
```

Auto Loader's ability to track processed files and ingest only new ones makes it the right fit for this scenario, where files must be left in place and accumulate in a shared directory.
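The file-tracking idea behind Auto Loader can be illustrated with a toy sketch in plain Python. This is not Auto Loader's actual implementation, just an illustration of the concept: persist the set of already-seen filenames in a checkpoint file, and on each run ingest only the difference.

```python
import json
import os

def discover_new_files(directory: str, checkpoint_path: str) -> list[str]:
    """Return files in `directory` not seen by previous runs, then update the checkpoint."""
    # Load the set of files recorded by earlier runs (empty on the first run).
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            seen = set(json.load(f))
    else:
        seen = set()

    current = set(os.listdir(directory))
    new_files = sorted(current - seen)  # only files that appeared since the last run

    # Record everything seen so far, so the next run skips these files.
    with open(checkpoint_path, "w") as f:
        json.dump(sorted(current | seen), f)

    return new_files
```

Auto Loader does this at scale with a transactionally maintained checkpoint (plus optional cloud file-notification services), which is how it achieves exactly-once processing across pipeline restarts.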
Author: LeetQuiz
### Question 27

A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run of the pipeline, and set up the pipeline to ingest only those new files with each run.

Which of the following tools can the data engineer use to solve this problem?
- A. Databricks SQL
- B. Delta Lake
- C. Unity Catalog
- D. Data Explorer
- E. Auto Loader