
You are designing a data pipeline for a financial services company that needs to process and analyze trade execution data from multiple sources, including stock exchanges and trading platforms. The pipeline must sustain high throughput while keeping latency low. Which AWS services would you use to build this pipeline, and how would you configure them to meet these requirements?
A
Use Amazon Kinesis for real-time data streaming, AWS Lambda for serverless computing, and Amazon DynamoDB for data storage. Configure Kinesis to capture trade execution data in real time, Lambda to process the data, and DynamoDB to store the processed data.
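If option A were chosen, the processing step would typically be a Lambda function attached to the stream through an event source mapping. The following is only a minimal sketch: the table name ProcessedTrades and the trade payload fields are hypothetical, not part of the question.

```python
# Sketch of option A's processing step: Lambda consumes Kinesis records
# and writes the processed items to DynamoDB. Names are placeholders.
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ProcessedTrades")  # assumed table name

def handler(event, context):
    """Lambda handler invoked by a Kinesis event source mapping."""
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded; parse floats as
        # Decimal because DynamoDB does not accept Python floats.
        payload = json.loads(
            base64.b64decode(record["kinesis"]["data"]),
            parse_float=Decimal,
        )
        # Illustrative enrichment; real logic depends on the trade schema.
        payload["processed"] = True
        table.put_item(Item=payload)
```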
B
Use Amazon S3 for data storage, AWS Glue for ETL processing, and Amazon Redshift for data warehousing. Configure Glue to process the trade execution data as it arrives, and Redshift to store and analyze the processed data.
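For option B, the ETL step would normally be an AWS Glue Spark job. The sketch below is hedged: the Data Catalog database trades_db, table raw_trades, the Glue connection redshift-conn, the target table, and the S3 staging bucket are all assumed placeholder names.

```python
# Sketch of option B's Glue ETL job: read raw trade files catalogued from S3
# and load them into Redshift over a Glue JDBC connection.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Raw trade data that landed in S3, registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="trades_db", table_name="raw_trades"
)

# Load into Redshift; Glue stages the data in S3 before the COPY.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=raw,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "public.trades", "database": "analytics"},
    redshift_tmp_dir="s3://example-temp-bucket/redshift-staging/",  # assumed bucket
)
job.commit()
```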
C
Use Amazon Kinesis Data Streams for real-time data streaming, AWS Glue for ETL processing, and Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) for data analysis. Configure Data Streams to capture the trade execution data in real time, Glue to process the data, and OpenSearch to store and analyze the processed data.
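For option C, trading systems would publish execution events directly to the Kinesis data stream. A minimal producer sketch follows; the stream name trade-executions and the example trade record are hypothetical. Partitioning by instrument symbol keeps each symbol's events ordered within a shard.

```python
# Sketch of the ingestion side of option C: publish trade events to a
# Kinesis data stream. Stream name and payload are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_trade(trade: dict) -> None:
    """Write one trade execution event to the stream."""
    kinesis.put_record(
        StreamName="trade-executions",          # assumed stream name
        Data=json.dumps(trade).encode("utf-8"),
        PartitionKey=trade["symbol"],           # keeps per-symbol ordering
    )

publish_trade({"symbol": "ACME", "price": 101.25, "quantity": 500})
```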
D
Use Amazon Kinesis Data Firehose for real-time data ingestion, AWS Lambda for serverless computing, and Amazon Redshift for data warehousing. Configure Firehose to capture the trade execution data in real time, Lambda to process the data, and Redshift to store and analyze the processed data.
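For option D, Firehose can invoke a Lambda function to transform records in flight before delivering them to Redshift. The sketch below follows the record contract Firehose uses for data-transformation functions; the normalization applied to each trade is purely illustrative.

```python
# Sketch of option D's transformation step: a Lambda function that Firehose
# calls with a batch of records and that returns them transformed.
import base64
import json

def handler(event, context):
    """Transform records before Firehose delivers them to Redshift."""
    output = []
    for record in event["records"]:
        trade = json.loads(base64.b64decode(record["data"]))
        # Illustrative normalization; real logic depends on the trade schema.
        trade["venue"] = trade.get("venue", "UNKNOWN").upper()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(trade) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```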