
You are tasked with creating a data pipeline for a healthcare provider that needs to analyze patient data. The pipeline must extract data from Amazon DynamoDB, transform it using AWS Lambda, and load it into Amazon Redshift. How would you design this pipeline to handle sensitive data and ensure compliance with healthcare regulations such as HIPAA?
A. Use DynamoDB Streams to capture changes in the table, trigger AWS Lambda functions for transformation, and load the transformed data directly into Redshift, ensuring all data is encrypted in transit and at rest (see the sketch after the options).
B. Manually run AWS Glue jobs periodically to extract data from DynamoDB, transform it, and load it into Redshift.
C. Use Amazon S3 as intermediary storage: export data from DynamoDB to S3, trigger AWS Glue jobs from S3 events, and then load the data from S3 into Redshift.
D. Set up a cron job on an EC2 instance to periodically poll DynamoDB for new entries and then trigger Lambda functions to transform and load the data.
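To make option A concrete, here is a minimal sketch of a Lambda handler wired to a DynamoDB Streams event source mapping, loading rows through the Redshift Data API (which is TLS-encrypted in transit and authenticates via IAM and Secrets Manager rather than embedded credentials). The table name patient_events, the column names, and the environment variable names are hypothetical, not part of the original question.

```python
import os
import boto3

# Redshift Data API client; permissions come from the Lambda execution role.
redshift = boto3.client("redshift-data")

# Hypothetical configuration, supplied via Lambda environment variables.
CLUSTER_ID = os.environ["REDSHIFT_CLUSTER_ID"]
DATABASE = os.environ["REDSHIFT_DATABASE"]
SECRET_ARN = os.environ["REDSHIFT_SECRET_ARN"]  # DB credentials in Secrets Manager


def handler(event, context):
    """Invoked by the DynamoDB Streams event source mapping."""
    for record in event.get("Records", []):
        # Only propagate new or updated items downstream.
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        image = record["dynamodb"]["NewImage"]

        # Transform step: flatten DynamoDB's attribute-value format and keep
        # only the fields the warehouse needs (illustrative field names).
        patient_id = image["patient_id"]["S"]
        diagnosis_code = image["diagnosis_code"]["S"]

        # Load step: parameterized SQL via the Redshift Data API.
        redshift.execute_statement(
            ClusterIdentifier=CLUSTER_ID,
            Database=DATABASE,
            SecretArn=SECRET_ARN,
            Sql="INSERT INTO patient_events (patient_id, diagnosis_code) "
                "VALUES (:pid, :dx)",
            Parameters=[
                {"name": "pid", "value": patient_id},
                {"name": "dx", "value": diagnosis_code},
            ],
        )
```

At scale, row-by-row INSERTs are inefficient; a production variant would batch stream records (the event source mapping supports batching) or stage them in S3 and COPY into Redshift, while keeping the same encryption and IAM controls.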