You are designing an Azure Data Factory pipeline to ingest and transform data from an on-premises SQL Server to Azure Blob Storage. The data includes a large table of customer transactions that needs to be partitioned by date. Which activities and connectors would you use in the pipeline to achieve this, and how would you configure the pipeline to handle the partitioning?
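One way the scenario above is commonly addressed is with a self-hosted integration runtime to reach the on-premises SQL Server, a Copy activity using the SQL Server connector as the source and Blob Storage as the sink, and a parameterized date window to drive the partition folder layout. The sketch below illustrates the shape of such a Copy activity; the dataset names (`SqlServerTransactions`, `BlobTransactionsPartitioned`), table name (`dbo.Transactions`), column name (`TransactionDate`), and pipeline parameters (`windowStart`, `windowEnd`) are all hypothetical placeholders, not part of the original question.

```json
{
  "name": "CopyTransactionsByDate",
  "type": "Copy",
  "inputs": [
    { "referenceName": "SqlServerTransactions", "type": "DatasetReference" }
  ],
  "outputs": [
    {
      "referenceName": "BlobTransactionsPartitioned",
      "type": "DatasetReference",
      "parameters": {
        "partitionPath": "@formatDateTime(pipeline().parameters.windowStart, 'yyyy/MM/dd')"
      }
    }
  ],
  "typeProperties": {
    "source": {
      "type": "SqlServerSource",
      "sqlReaderQuery": "SELECT * FROM dbo.Transactions WHERE TransactionDate >= '@{pipeline().parameters.windowStart}' AND TransactionDate < '@{pipeline().parameters.windowEnd}'"
    },
    "sink": { "type": "DelimitedTextSink" }
  }
}
```

In this sketch the sink dataset would use its `partitionPath` parameter in the blob folder path (e.g. `transactions/yyyy/MM/dd/`), and a tumbling window trigger could supply `windowStart`/`windowEnd` so each run writes exactly one date partition.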