
Answer-first summary for fast verification
Answer: Use an error retry and exponential backoff mechanism; Decrease the frequency or size of your requests
You can use the PutRecords API call to write multiple data records into a Kinesis data stream in a single call. Each PutRecords request can support up to 500 records. Each record in the request can be as large as 1 MiB, up to a limit of 5 MiB for the entire request, including partition keys. Each shard can support writes of up to 1,000 records per second, up to a maximum data write of 1 MiB per second.

The response Records array includes both successfully and unsuccessfully processed records. Kinesis Data Streams attempts to process all records in each PutRecords request; a single record failure does not stop the processing of subsequent records. As a result, PutRecords does not guarantee the ordering of records. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error and can be one of the following values: ProvisionedThroughputExceededException or InternalFailure. ProvisionedThroughputExceededException indicates that the request rate for the stream is too high, or the requested data is too large for the available throughput. Reduce the frequency or size of your requests.

To address the given use case, you can apply these best practices:
- Reshard your stream to increase the number of shards in the stream.
- Reduce the frequency or size of your requests.
- Distribute read and write operations as evenly as possible across all of the shards in the stream.
- Use an error retry and exponential backoff mechanism.

Incorrect options:

Merge the shards to decrease the number of shards in the stream
Increase the frequency or size of your requests
These two options contradict the explanation provided above, so they are incorrect.

Decrease the number of KCL consumers - This option has been added as a distractor. The number of KCL consumers is irrelevant for the given use case, since the ProvisionedThroughputExceededException is caused by the PutRecords API call used by the producers.
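The retry-and-backoff recommendation above can be sketched as follows. This is a minimal illustration, not a definitive implementation: `put_records_fn` stands in for a Kinesis client's PutRecords call (for example, boto3's `Kinesis.Client.put_records`), and the delay constants are illustrative assumptions. The key detail is that only the entries whose result carries an ErrorCode are resent, since PutRecords reports partial failures per record.

```python
import random
import time


def put_records_with_retries(put_records_fn, stream_name, records,
                             max_attempts=5, base_delay=0.1):
    """Resend only the failed entries of a PutRecords-style response,
    backing off exponentially (with jitter) between attempts.

    Returns the list of records that still failed after all attempts
    (empty on full success).
    """
    pending = list(records)
    for attempt in range(max_attempts):
        response = put_records_fn(StreamName=stream_name, Records=pending)
        if response.get("FailedRecordCount", 0) == 0:
            return []  # every record was accepted
        # The response Records array is positionally aligned with the
        # request; keep only the entries whose result has an ErrorCode.
        pending = [rec for rec, result in zip(pending, response["Records"])
                   if "ErrorCode" in result]
        # Exponential backoff: base * 2^attempt, plus a little jitter
        # so many producers do not retry in lockstep.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return pending
```

In a real producer you would also distinguish ProvisionedThroughputExceededException (retryable by backing off) from persistent failures, and cap the total retry budget so spikes do not stall the pipeline indefinitely.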
Author: LeetQuiz Editorial Team
A data analytics company leverages Amazon Kinesis to process data from Internet-of-Things (IoT) devices. The development team has observed that the IoT data ingested by Kinesis undergoes periodic spikes. During these spikes, the PutRecords API call sometimes fails, and the logs reveal the following response for the failed call:
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Date: <Date>
{
    "FailedRecordCount": 2,
    "Records": [
        {
            "SequenceNumber": "49543463076548007577105092703039560359975228518395012686",
            "ShardId": "shardId-000000000000"
        },
        {
            "ErrorCode": "ProvisionedThroughputExceededException",
            "ErrorMessage": "Rate exceeded for shard shardId-000000000001 in stream exampleStreamName under account 111111111111."
        },
        {
            "ErrorCode": "InternalFailure",
            "ErrorMessage": "Internal service failure."
        }
    ]
}

As an AWS Certified Developer Associate, which of the following strategies would you recommend to mitigate this issue? (Select two)
A. Increase the frequency or size of your requests
B. Decrease the number of KCL consumers
C. Merge the shards to decrease the number of shards in the stream
D. Use an error retry and exponential backoff mechanism
E. Decrease the frequency or size of your requests