
Answer-first summary for fast verification
Answer: Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
Option B is the most cost-effective solution for the company's requirements. Here's why:

1. **Reducing data nodes**: Cutting the cluster from 10 data nodes to 2 lowers the cost of the expensive, high-performance hot tier while keeping enough capacity for ingestion.
2. **Adding UltraWarm nodes**: UltraWarm nodes are purpose-built for read-only data, which is exactly how the company uses the data during its 1-month retention in the cluster, and they store data at a much lower cost per GB than hot data nodes.
3. **Index transition to UltraWarm**: Configuring the indexes to transition to UltraWarm as soon as OpenSearch Service ingests the data ensures the data spends its month in the cluster in the cheaper storage tier rather than on hot nodes.
4. **S3 Lifecycle policy**: Transitioning the input data to S3 Glacier Deep Archive after 1 month moves it to one of the lowest-cost storage classes in AWS, which is suitable for long-term archiving of data that is infrequently accessed.
5. **Compliance retention**: Because the S3 objects are transitioned rather than deleted, the company still retains a copy of all input data for compliance purposes, even after it deletes the index from the OpenSearch Service cluster.

This combination of strategies optimizes cost by using a small number of high-performance data nodes for ingestion, leveraging UltraWarm for the read-only analysis period, and relying on S3 Glacier Deep Archive for long-term retention and compliance.
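The two configuration pieces above can be sketched as follows. First, an Index State Management (ISM) policy that migrates new indexes to UltraWarm shortly after ingestion and deletes them after 1 month; the index pattern (`logs-*`) and the timing thresholds are illustrative assumptions, not values from the question:

```json
{
  "policy": {
    "description": "Sketch: migrate to UltraWarm after ingest, delete after 1 month",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "warm", "conditions": { "min_index_age": "1d" } }
        ]
      },
      {
        "name": "warm",
        "actions": [ { "warm_migration": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": { "index_patterns": ["logs-*"] }
  }
}
```

Second, an S3 Lifecycle configuration that moves the input objects to S3 Glacier Deep Archive after 30 days; the rule ID and prefix are hypothetical:

```json
{
  "Rules": [
    {
      "ID": "archive-input-data",
      "Filter": { "Prefix": "input/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

The ISM policy would be created through the OpenSearch ISM plugin API, and the Lifecycle configuration applied to the source bucket (for example, via `aws s3api put-bucket-lifecycle-configuration`).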
Author: LeetQuiz Editorial Team
A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data. The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution. Which solution will meet these requirements MOST cost-effectively?
A
Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
B
Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
C
Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
D
Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.