You have an Azure Data Lake Storage account with a staging zone. You need to design a daily process to ingest incremental data from this zone, transform it using an R script, and then load the transformed data into an Azure Synapse Analytics data warehouse.
Proposed Solution: Use an Azure Data Factory schedule trigger to run a pipeline that executes an Azure Databricks notebook and then inserts the data into the data warehouse.
Does this solution meet the goal?
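For context only (not the answer), below is a minimal sketch of what the transform-and-load step of the proposed solution might look like inside the Azure Databricks notebook. Python is used for illustration even though the scenario's transformation is an R script, and the storage paths, Synapse endpoint, and table name are hypothetical placeholders.

```python
# Sketch of a Databricks notebook cell for the proposed solution.
# `spark` is the SparkSession that Databricks predefines in every notebook.
# All paths, hostnames, and table names below are assumed placeholders.

staging_path = "abfss://staging@<storage-account>.dfs.core.windows.net/incremental/"

# Read the day's incremental files from the staging zone of the data lake.
df = spark.read.format("parquet").load(staging_path)

# ... transformation logic would run here (the scenario performs this in R) ...

# Load the transformed data into the Azure Synapse Analytics data warehouse
# using the Azure Synapse connector, which stages data through a temp
# directory in the lake before performing the bulk insert.
(df.write
   .format("com.databricks.spark.sqldw")
   .option("url", "jdbc:sqlserver://<synapse-server>.sql.azuresynapse.net:1433;database=<dw>")
   .option("tempDir", "abfss://staging@<storage-account>.dfs.core.windows.net/tempdir/")
   .option("forwardSparkAzureStorageCredentials", "true")
   .option("dbTable", "dbo.StagedIncremental")
   .mode("append")
   .save())
```

In the proposed solution, an Azure Data Factory schedule trigger would invoke a pipeline containing a Databricks Notebook activity that runs a notebook along these lines once per day.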