An external customer sends you a daily dump of data from their database as comma-separated values (CSV) files, which are uploaded to Google Cloud Storage (GCS). Your objective is to analyze this data in Google BigQuery, but some rows in the CSV files may be malformed or corrupted. How would you build an efficient data pipeline that preserves data integrity while loading the files into BigQuery?
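Two common patterns fit this scenario: tolerate a bounded number of bad rows directly in a BigQuery load job, or validate each row in a Dataflow (Apache Beam) pipeline and route rejects to a dead-letter destination before anything reaches BigQuery. Below is a minimal sketch of the first pattern using the google-cloud-bigquery Python client; the project, dataset, table, and bucket names (my-project, my_dataset, raw_table, gs://customer-dumps/*.csv) are hypothetical placeholders, and the bad-row threshold is an assumed value.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project ID

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,     # skip the CSV header row
        autodetect=True,         # infer the schema from the files
        max_bad_records=100,     # tolerate up to 100 malformed rows per load (assumed threshold)
    )

    load_job = client.load_table_from_uri(
        "gs://customer-dumps/*.csv",          # hypothetical source bucket and path
        "my-project.my_dataset.raw_table",    # hypothetical destination table
        job_config=job_config,
    )
    load_job.result()  # wait for completion; raises if bad rows exceed max_bad_records
    print(f"Loaded {load_job.output_rows} rows; "
          f"{len(load_job.errors or [])} row-level errors reported.")

The max_bad_records setting lets the load succeed despite a limited number of malformed rows, with the skipped rows surfaced in the job's error stream. When every rejected row must be retained for inspection or reprocessing, the Dataflow dead-letter pattern is the stronger choice: valid rows go to BigQuery while invalid rows are written to a separate table or GCS bucket.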