
Explanation:
The best approach is to add read capacity without disrupting graph traversals. Add Neptune read replicas and use the reader endpoint so read traffic is load-balanced across the replicas while a single writer handles the moderate write volume. This matches a read-heavy workload with spiky demand and preserves the consistency of graph traversals.

Why the other options are wrong:
"Scale up the writer instance size" only increases vertical capacity; it provides neither horizontal read scaling nor load distribution for spikes.
"Configure Amazon Neptune Global Database" targets cross-Region replication and disaster recovery, adding latency and complexity without improving same-Region read throughput.
"Partition the graph across multiple Neptune clusters by key" breaks graph traversals, since cross-cluster queries are unsupported, making it impractical for low-latency graph queries.

Exam tips: For read-heavy Neptune workloads, think read replicas + reader endpoint. Remember that Neptune has a single writer per cluster and that cross-Region features are not for same-Region scaling. Sharding graphs across clusters is typically a red flag because of traversal complexity.
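The routing pattern described above, writes to the single writer via the cluster endpoint and reads to the load-balanced reader endpoint, can be sketched as a small helper. The endpoint strings below are hypothetical placeholders, not real cluster names; in practice you would read them from configuration and pass them to your Gremlin or SPARQL client.

```python
# Hypothetical Neptune endpoints for illustration only.
# The cluster endpoint always points at the current writer;
# the reader endpoint DNS-balances connections across read replicas.
WRITER_ENDPOINT = "my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com"
READER_ENDPOINT = "my-cluster.cluster-ro-abc123.us-east-1.neptune.amazonaws.com"


def endpoint_for(read_only: bool) -> str:
    """Route read-only traversals to the reader endpoint so replicas
    absorb the spiky read load; send mutations to the single writer."""
    return READER_ENDPOINT if read_only else WRITER_ENDPOINT


# Example: a read-heavy workload sends most queries to the reader endpoint.
read_host = endpoint_for(read_only=True)    # load-balanced across replicas
write_host = endpoint_for(read_only=False)  # single writer instance
```

New connections to the reader endpoint are distributed across replicas via DNS, which is why adding replicas directly increases aggregate read throughput without touching the writer.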
How should a Neptune cluster be scaled to support low-latency, read-heavy graph queries spiking to about 25,000 requests per second while writes remain moderate?
A
Scale up the writer instance size
B
Configure Amazon Neptune Global Database
C
Add Neptune read replicas and use the reader endpoint
D
Partition the graph across multiple Neptune clusters by key