
As a Microsoft Fabric Analytics Engineer, you are planning a migration of a large dataset from a Fabric data source to a lakehouse. The dataset includes both structured and unstructured data, and the migration must be completed with minimal downtime while ensuring cost-effectiveness and scalability. Considering these requirements, which of the following approaches would you choose? (Choose the best option.)
A. Utilize a single data pipeline with a batch processing approach to migrate all data types simultaneously, prioritizing simplicity over processing efficiency.
B. Separate the migration by using dataflows for structured data and notebooks for unstructured data, keeping the two processes unintegrated.
C. Employ Fast Copy for the entire dataset to expedite the migration, disregarding any need for data transformation or processing during the move.
D. Implement a comprehensive solution that combines data pipelines for bulk data transfer, dataflows for structured data processing, and notebooks for unstructured data transformation, ensuring efficiency and minimal downtime.
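The hybrid approach described in option D can be sketched in outline. The code below is a hypothetical illustration only: `bulk_copy`, `dataflow_transform`, and `notebook_transform` are stand-ins for a Fabric data pipeline, a Dataflow Gen2, and a Spark notebook respectively, not real Fabric APIs.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One dataset to migrate; `structured` routes it to the right stage."""
    name: str
    structured: bool

def bulk_copy(items):
    # Stage 1 (data pipeline stand-in): land everything in the
    # lakehouse Files area for minimal-downtime bulk transfer.
    return [f"Files/raw/{i.name}" for i in items]

def dataflow_transform(items):
    # Stage 2 (dataflow stand-in): shape structured data into tables.
    return [f"Tables/{i.name}" for i in items if i.structured]

def notebook_transform(items):
    # Stage 3 (notebook stand-in): parse unstructured data into tables.
    return [f"Tables/parsed_{i.name}" for i in items if not i.structured]

def migrate(items):
    # Integrated orchestration: one bulk transfer, then type-specific
    # processing, so each data type gets an appropriate tool.
    staged = bulk_copy(items)
    tables = dataflow_transform(items) + notebook_transform(items)
    return {"staged": staged, "tables": tables}

if __name__ == "__main__":
    dataset = [Item("sales", True), Item("logs", False)]
    print(migrate(dataset))
```

The point of the sketch is the routing: bulk movement is a single integrated step, while transformation is split by data type, unlike options A–C, which either ignore the type distinction or leave the two paths uncoordinated.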