
You are designing a data pipeline for a logistics company that needs to track shipments in real-time. The pipeline must extract data from Amazon DynamoDB, transform it using AWS Glue, and load it into Amazon Elasticsearch Service for real-time analytics. How would you configure this pipeline to ensure real-time processing and data consistency?
A
Use DynamoDB Streams to capture changes in the database, trigger AWS Lambda functions for transformation, and load the transformed data directly into Elasticsearch.
B
Run AWS Glue jobs manually on a periodic schedule to extract data from DynamoDB, transform it, and then load it into Elasticsearch.
C
Use Amazon S3 as intermediary storage: dump data from DynamoDB to S3, trigger Glue jobs from S3 events, and then load the data from S3 into Elasticsearch.
D
Set up a cron job on an EC2 instance to periodically check DynamoDB for new entries and then trigger Lambda functions.
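
The event-driven flow in option A can be sketched as a Lambda handler that consumes a DynamoDB Streams batch, flattens each record's typed `NewImage` into a plain document, and builds the bulk-index actions to send to Elasticsearch. This is a minimal sketch: the `shipments` index name and the `shipmentId` key are assumptions for illustration, and the final HTTP call to the cluster (e.g. via an Elasticsearch client's bulk API) is left out.

```python
import json


def transform_record(record):
    """Flatten a DynamoDB Streams NewImage (typed attributes like
    {"status": {"S": "IN_TRANSIT"}}) into a plain dict for indexing."""
    image = record["dynamodb"]["NewImage"]
    doc = {}
    for key, typed_value in image.items():
        # Each attribute is a single {type: value} pair, e.g. {"N": "12.5"}.
        (dynamo_type, value), = typed_value.items()
        if dynamo_type == "N":          # DynamoDB numbers arrive as strings
            value = float(value)
        doc[key] = value
    return doc


def lambda_handler(event, context):
    """Triggered by a DynamoDB Stream. Returns the bulk-index action list;
    in a real deployment these would be POSTed to the Elasticsearch
    _bulk endpoint (client call omitted here)."""
    actions = []
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            doc = transform_record(record)
            actions.append({"index": {
                "_index": "shipments",  # assumed index name
                "_id": record["dynamodb"]["Keys"]["shipmentId"]["S"],
            }})
            actions.append(doc)
    return actions
```

Because the stream delivers changes in commit order per item and Lambda retries failed batches, this design gives the near-real-time, consistent propagation the question asks for, without polling.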