
You are designing a data pipeline for a financial services company that needs to process and analyze large volumes of transaction data daily. The pipeline must ingest data from various sources, arriving both in batches and in real time. Which AWS services would you use to build this pipeline, and how would you configure them to meet the requirements?
A
Use Amazon Kinesis for real-time data streaming, AWS Glue for ETL processing, and Amazon Redshift for data warehousing. Configure Kinesis to capture real-time data, Glue to process both batch and real-time data, and Redshift to store and analyze the processed data.
B
Use AWS Data Pipeline for scheduling and workflow management, AWS Lambda for serverless computing, and Amazon DynamoDB for data storage. Configure Data Pipeline to schedule Lambda functions based on dependencies, and Lambda to process data stored in DynamoDB.
C
Use Amazon S3 for data storage, AWS Step Functions for workflow management, and AWS Glue for ETL processing. Configure Step Functions to manage the workflow, Glue to process data stored in S3, and trigger the workflow based on a schedule.
D
Use Amazon Kinesis for real-time data streaming, AWS Lambda for serverless computing, and Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) for data analysis. Configure Kinesis to capture real-time data, Lambda to process the data, and OpenSearch Service to store and analyze the processed data.
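For context on the real-time ingestion step that options A and D share, here is a minimal Python sketch of how a producer might shape a transaction record for Kinesis. The stream name, field names, and partition-key choice are illustrative assumptions, not part of the question; the actual send would go through boto3, which is noted in a comment rather than called here.

```python
import json
import uuid


def build_kinesis_record(transaction: dict) -> dict:
    """Build the argument dict for a Kinesis Data Streams put_record call.

    The stream name and partition-key choice below are hypothetical,
    chosen only to illustrate the pattern.
    """
    return {
        "StreamName": "transactions-stream",  # hypothetical stream name
        # Kinesis expects the payload as bytes; JSON is a common encoding.
        "Data": json.dumps(transaction).encode("utf-8"),
        # Partitioning by account ID spreads load across shards while
        # keeping one account's events ordered within a single shard.
        "PartitionKey": str(transaction.get("account_id", uuid.uuid4())),
    }


# With boto3 installed and AWS credentials configured, the record would
# be sent with: boto3.client("kinesis").put_record(**record)
record = build_kinesis_record({"account_id": "acct-42", "amount": 19.99})
```

Choosing a meaningful partition key matters in this scenario: a constant key would funnel every transaction through one shard and cap throughput, while a high-cardinality key such as the account ID lets Kinesis scale across shards.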