
You are designing a data pipeline for a financial services company that requires processing large volumes of transactional data with complex dependencies. The pipeline must extract data from multiple sources including Amazon S3, transform it using AWS Glue, and load it into Amazon Redshift. How would you configure AWS services to handle these dependencies and ensure data integrity?
A. Use AWS Data Pipeline to define each step and its dependencies manually.
B. Use AWS Step Functions to create a state machine that defines the workflow with explicit dependencies between steps, ensuring data integrity and the correct execution order. (A sketch of this approach follows the options.)
C. Set up a series of AWS Lambda functions, each triggered by the completion of the previous function.
D. Use Amazon CloudWatch Events to trigger each step of the pipeline without considering dependencies.
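For illustration, here is a minimal sketch of the approach described in option B: a Step Functions state machine that runs a Glue transform job and then a Glue load job, in that order. The job names, state machine name, and role ARN are hypothetical placeholders, not values from the question.

```python
import json
import boto3

# State machine definition in Amazon States Language. The ".sync"
# integration makes each Glue task wait until its job finishes, so the
# load step cannot start before the transform has succeeded.
definition = {
    "Comment": "S3 -> Glue transform -> Redshift load with explicit dependencies",
    "StartAt": "TransformWithGlue",
    "States": {
        "TransformWithGlue": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "transform-transactions"},  # hypothetical Glue job
            "Next": "LoadIntoRedshift",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "PipelineFailed"}],
        },
        "LoadIntoRedshift": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "load-into-redshift"},  # hypothetical Glue job
            "End": True,
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "PipelineFailed"}],
        },
        "PipelineFailed": {
            "Type": "Fail",
            "Error": "PipelineError",
            "Cause": "A step failed; downstream steps were not run.",
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="transaction-etl-pipeline",  # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-etl-role",  # hypothetical role
)
```

The `.sync` service integration is what enforces the dependency: the workflow blocks until each Glue job reports success, and the Catch rules route any failure to a Fail state, so a partially transformed batch is never loaded into Redshift.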