
You are tasked with creating a data pipeline for a social media platform that needs to analyze user interactions. The pipeline must extract data from Amazon Kinesis Data Streams, transform it using AWS Lambda, and load it into Amazon Redshift. How would you design this pipeline to handle variable data volumes and ensure reliable processing?
A. Use Amazon S3 as intermediary storage: dump data from Kinesis to S3, trigger Lambda functions from S3 events, and then load the transformed data from S3 into Redshift.
B. Directly stream data from Kinesis to Lambda and then to Redshift, without any intermediary storage.
C. Manually run AWS Glue jobs to extract data from Kinesis, transform it, and load it into Redshift.
D. Use Amazon SQS to queue Kinesis events and have Lambda functions poll the queue for processing.
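
To make the S3-intermediary pattern described in option A concrete, here is a minimal sketch of the transform step: a Lambda function triggered by an S3 ObjectCreated event on a raw-data prefix (the data having been dumped from Kinesis, for example via Kinesis Data Firehose), which rewrites each batch as newline-delimited JSON for a later Redshift COPY. The bucket layout, prefixes, field names, and table name are illustrative assumptions, not part of the question.

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical output prefix; substitute your own staging layout.
TRANSFORMED_PREFIX = "transformed/"


def handler(event, context):
    """Triggered by an S3 ObjectCreated event on the raw-data prefix.

    Reads the raw object dumped from Kinesis, applies a per-record
    transform, and writes newline-delimited JSON to a prefix that a
    later Redshift COPY can load.
    """
    for notification in event["Records"]:
        bucket = notification["s3"]["bucket"]["name"]
        key = notification["s3"]["object"]["key"]

        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        rows = []
        for line in raw.decode("utf-8").splitlines():
            record = json.loads(line)
            # Example transform: keep only the fields the analysis
            # needs (assumed field names).
            rows.append({
                "user_id": record.get("user_id"),
                "action": record.get("action"),
                "event_time": record.get("timestamp"),
            })

        out_key = TRANSFORMED_PREFIX + key.split("/")[-1]
        s3.put_object(
            Bucket=bucket,
            Key=out_key,
            Body="\n".join(json.dumps(r) for r in rows).encode("utf-8"),
        )
        # A scheduled job (or Redshift auto-copy) then runs something like:
        #   COPY interactions FROM 's3://<bucket>/transformed/'
        #   IAM_ROLE '<role-arn>' FORMAT AS JSON 'auto';
```

This decoupling is what lets the design absorb volume spikes and recover from failures: S3 buffers bursts durably, Lambda scales per object, Redshift ingests on its own schedule via COPY, and a failed invocation can simply be retried against the same immutable S3 object.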