
## Answer
Answer: Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
## Explanation

**Correct Answer: A**

**Why Option A is correct:**

1. **700 TB of data** is a massive amount (700 TB = 700,000 GB).
2. **500 Mbps of bandwidth** would take far too long to transfer 700 TB:
   - 500 Mbps = 62.5 MB/s (500 ÷ 8)
   - 700 TB = 700,000 GB = 700,000,000 MB
   - Time required: 700,000,000 MB ÷ 62.5 MB/s = 11,200,000 seconds ≈ 129.6 days
   - This far exceeds the **1-month (30-day) requirement**.
3. **AWS Snowball** is designed for large-scale data transfers where internet bandwidth is insufficient. Snowball devices are shipped to your location, you load the data onto them, and AWS handles the physical transfer into AWS.
4. **Amazon S3 Glacier Deep Archive** is the lowest-cost storage class in AWS, ideal for long-term retention (7 years) with infrequent access (regulatory requests).
5. A **lifecycle policy** automatically transitions the data from S3 Standard to Glacier Deep Archive after the initial upload.

**Why the other options are incorrect:**

**Option B:**
- A VPN over the public internet would still use the 500 Mbps connection, which would take ~130 days (exceeding the 1-month deadline).
- Uploading directly to S3 Glacier via the AWS CLI would be slow and inefficient for 700 TB.

**Option C:**
- A 500 Mbps AWS Direct Connect connection has the same bandwidth limitation as the internet connection.
- Direct Connect has setup costs and monthly fees that make it more expensive than Snowball for a one-time migration.
- It would still take ~130 days to transfer the data.

**Option D:**
- AWS DataSync uses the available network bandwidth.
- At 500 Mbps, it would still take ~130 days.
- DataSync is better suited to ongoing synchronization, not a one-time bulk migration of 700 TB.

**Key Considerations:**
- **Time constraint:** 1 month vs. ~130 days needed for a network transfer.
- **Cost optimization:** Snowball has a flat fee per device, while a network transfer would incur data transfer fees and potentially Direct Connect setup fees.
- **Storage optimization:** Glacier Deep Archive is the most cost-effective option for 7-year retention with infrequent access.
- **Practicality:** Snowball is designed for exactly this scenario: migrating large datasets to AWS when network bandwidth is insufficient.
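The transfer-time arithmetic above can be checked with a short script (decimal units assumed: 1 TB = 1,000 GB and 8 bits per byte):

```python
# Estimate how long a 700 TB transfer takes over a 500 Mbps link.
# Decimal units assumed throughout: 1 TB = 1,000 GB = 1,000,000 MB.

data_tb = 700
link_mbps = 500                      # megabits per second

data_mb = data_tb * 1_000_000        # 700,000,000 MB
throughput_mb_s = link_mbps / 8      # 62.5 MB/s

seconds = data_mb / throughput_mb_s  # 11,200,000 s
days = seconds / 86_400              # 86,400 seconds per day

print(f"{seconds:,.0f} s = {days:.1f} days")  # 11,200,000 s = 129.6 days
```

In practice sustained throughput is usually below the nominal line rate, so ~130 days is a best case, which only strengthens the argument for Snowball.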
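The lifecycle policy the answer describes can be sketched as the rule structure S3 expects. This is an illustrative configuration, not the official answer's exact policy: the rule ID is hypothetical and the day values are one reasonable choice for "transition immediately, retain 7 years":

```python
# Sketch of an S3 lifecycle configuration: transition objects to
# Glacier Deep Archive on day 0, then expire them after ~7 years.
# Rule ID is hypothetical; day values are illustrative assumptions.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "deep-archive-7-year-retention",  # hypothetical name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},               # apply to every object
            "Transitions": [
                # Day 0: move straight to the lowest-cost storage class.
                {"Days": 0, "StorageClass": "DEEP_ARCHIVE"}
            ],
            # 7 years ≈ 7 × 365 = 2,555 days of retention, then delete.
            "Expiration": {"Days": 2555},
        }
    ]
}
```

A configuration of this shape could be applied with boto3's `put_bucket_lifecycle_configuration` (passed as `LifecycleConfiguration=`) or the equivalent AWS CLI command.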
Author: LeetQuiz Editorial Team
A company has 700 TB of backup data stored in network-attached storage (NAS) in its data center. This backup data needs to be accessible for infrequent regulatory requests and must be retained for 7 years. The company has decided to migrate this backup data from its data center to AWS. The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available for data transfer.
What should a solutions architect do to migrate and store the data at the LOWEST cost?
A
Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B
Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier.
C
Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
D
Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.