
Answer-first summary for fast verification
Answer: The put data calls will be rejected with a ProvisionedThroughputExceeded exception
Overall explanation

Correct option:

The put data calls will be rejected with a ProvisionedThroughputExceeded exception

The capacity limits of an Amazon Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded either by data throughput or by the number of PUT records per second. While the capacity limits are exceeded, put data calls are rejected with a ProvisionedThroughputExceeded exception. If this is due to a temporary rise in the data stream's input data rate, retries by the data producer will eventually lead to completion of the requests. If it is due to a sustained rise in the input data rate, you should increase the number of shards within your data stream to provide enough capacity for the put data calls to consistently succeed.

Incorrect options:

The put data calls will be rejected with an AccessDeniedException exception once the limit is reached - An access-denied error is thrown when the caller lacks the required permissions. Since data was being ingested into the stream before the capacity limit was reached, this error is not possible here.

Data is lost unless the partition key of the data records is changed in order to write data to a different shard in the stream - A partition key is used to segregate and route records to the different shards of a data stream, and is specified by the data producer while adding data to the stream. The use case provisions only one shard, so changing the partition key cannot route records to another shard; more shards cannot be created this way. Hence, this choice is incorrect.

Contact AWS support to request an increase in the number of shards - This is a made-up option that acts as a distractor; you can increase the shard count yourself by resharding the stream.
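The retry behavior described above can be sketched in Python. This is a minimal illustration, not production code: the stub client, its failure count, and the helper names (`StubKinesisClient`, `put_with_backoff`) are hypothetical stand-ins that mimic a shard throttling the first few writes; a real producer would call `put_record` on a boto3 Kinesis client and catch its `ProvisionedThroughputExceededException`.

```python
import time


class ProvisionedThroughputExceededException(Exception):
    """Stand-in for the error Kinesis raises when shard limits are exceeded."""


class StubKinesisClient:
    """Hypothetical stub: rejects the first two put_record calls, then
    succeeds, mimicking a temporary spike above a single shard's capacity."""

    def __init__(self, failures=2):
        self.failures = failures
        self.calls = 0

    def put_record(self, StreamName, Data, PartitionKey):
        self.calls += 1
        if self.calls <= self.failures:
            raise ProvisionedThroughputExceededException("Rate exceeded for shard")
        return {"ShardId": "shardId-000000000000", "SequenceNumber": str(self.calls)}


def put_with_backoff(client, stream, data, key, max_retries=5, base_delay=0.01):
    """Retry put_record with exponential backoff on throughput errors."""
    for attempt in range(max_retries):
        try:
            return client.put_record(StreamName=stream, Data=data, PartitionKey=key)
        except ProvisionedThroughputExceededException:
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise RuntimeError("put_record still throttled after retries")


client = StubKinesisClient(failures=2)
result = put_with_backoff(client, "appliance-stream", b"reading", "appliance-1")
print(result["ShardId"])  # the record lands once the temporary throttle clears
```

For a temporary spike, the backoff loop absorbs the rejections and the write eventually succeeds, as in the sketch; for a sustained rise, no amount of retrying helps, and the stream needs more shards.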
Author: LeetQuiz Editorial Team
A development team is setting up Kinesis Data Streams to collect real-time data from various appliances. For testing purposes, the team has initially set the configuration with a single shard, which imposes specific capacity limits on throughput.
What is the expected outcome if the data producer attempts to add more data to the data stream than what the configured shard capacity can handle?
A
Contact AWS support to request an increase in the number of shards
B
The put data calls will be rejected with an AccessDeniedException exception once the limit is reached
C
Data is lost unless the partition key of the data records is changed in order to write data to a different shard in the stream
D
The put data calls will be rejected with a ProvisionedThroughputExceeded exception