
You are designing an Azure Data Factory data flow to ingest data from a CSV file with columns username, comment, and date. The data flow includes a source, a Derived Column transformation for type casting, and a sink to an Azure Synapse Analytics dedicated SQL pool.
You need to ensure that all valid rows are written to the destination, that truncation errors on the comment column are proactively avoided, and that rows with comment values that would cause truncation are written to a file in blob storage. Which two actions should you take?
A
To the data flow, add a Sink transformation to write the rows to a file in blob storage.
B
To the data flow, add a Conditional Split transformation to separate the rows that would cause truncation errors.
C
To the data flow, add a Filter transformation to filter out rows that would cause truncation errors.
D
Add a Select transformation to select only the rows that would cause truncation errors.
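The correct pattern (options A and B) can be sketched outside of Data Factory as plain row-splitting logic. This is a minimal illustration, not data flow script: the 25-character limit is a hypothetical column width for the sink's comment column, and the print statements stand in for the two sinks.

```python
# Hypothetical width of the comment column in the dedicated SQL pool,
# e.g. VARCHAR(25); not specified in the question.
MAX_COMMENT_LEN = 25

rows = [
    {"username": "alice", "comment": "looks good", "date": "2024-01-01"},
    {"username": "bob", "comment": "x" * 40, "date": "2024-01-02"},
]

# Conditional Split: separate rows that would truncate from rows that fit.
fits = [r for r in rows if len(r["comment"]) <= MAX_COMMENT_LEN]
would_truncate = [r for r in rows if len(r["comment"]) > MAX_COMMENT_LEN]

# Sink 1: valid rows go to the dedicated SQL pool (print as a stand-in).
print("to SQL pool:", [r["username"] for r in fits])

# Sink 2 (the added sink from option A): oversized rows go to blob storage.
print("to blob storage:", [r["username"] for r in would_truncate])
```

A Filter (option C) would silently discard the oversized rows instead of routing them to blob storage, and a Select (option D) chooses columns, not rows, which is why the Conditional Split plus an additional Sink is the required combination.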