You are designing an Azure Data Factory data flow to ingest data from a CSV file with columns username, comment, and date. The data flow includes a source, a derived column transformation for type casting, and a sink to an Azure Synapse Analytics dedicated SQL pool.
You need to meet the following requirements:

- All valid rows are written to the destination.
- Truncation errors on the comment column are proactively avoided.
- Rows with comment values that would cause truncation are written to a file in Azure Blob Storage.

Which two actions should you take?
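The crux of the scenario is separating rows by whether the comment value fits the width of the target column in the dedicated SQL pool. The following is a minimal pandas sketch of that split logic, outside of Data Factory, purely to illustrate the idea; the 4,000-character limit, file names, and column width are assumptions, not part of the question.

```python
import pandas as pd

# Assumed maximum width of the comment column in the Synapse table (hypothetical).
MAX_COMMENT_LENGTH = 4000

# Source CSV with the columns described in the question.
df = pd.read_csv("input.csv")  # columns: username, comment, date

# Rows whose comment fits the target column are kept for the warehouse load;
# rows that would be truncated are diverted to a separate file instead.
fits = df["comment"].fillna("").str.len() <= MAX_COMMENT_LENGTH
valid_rows = df[fits]
oversized_rows = df[~fits]

valid_rows.to_csv("to_synapse.csv", index=False)        # would be loaded into the SQL pool
oversized_rows.to_csv("truncation_rows.csv", index=False)  # would be written to blob storage
```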