
A company operates an on-premises application that processes and stores image data. The application handles millions of new image files daily, each averaging 1 MB in size. It processes these files in 1 GB batches, zipping each batch into a single file that is archived on an on-premises NFS server for long-term storage. The company has a Microsoft Hyper-V environment with available compute resources but lacks sufficient storage capacity. It wants to archive these images in AWS and must be able to retrieve archived data within one week of a request. The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS, and it needs to limit bandwidth usage and schedule data transfers to AWS during off-peak hours. What is the most cost-effective solution that meets these requirements?
A
Deploy an AWS DataSync agent on a new GPU-based Amazon EC2 instance. Configure the DataSync agent to transfer the batch files from the on-premises NFS server to Amazon S3 Glacier Instant Retrieval. Subsequently, delete the data from the on-premises storage.
B
Deploy an AWS DataSync agent as a Hyper-V VM on premises. Configure the DataSync agent to transfer the batch files from the on-premises NFS server to Amazon S3 Glacier Deep Archive. After the transfer, delete the data from the on-premises storage.
C
Deploy an AWS DataSync agent on a new general purpose Amazon EC2 instance. Configure the DataSync agent to transfer the batch files from the on-premises NFS server to Amazon S3 Standard. After the transfer, delete the data from the on-premises storage. Implement an S3 Lifecycle rule to transition objects from S3 Standard to S3 Glacier Deep Archive after one day.
D
Deploy an AWS Storage Gateway Tape Gateway on premises in the Hyper-V environment. Connect the Tape Gateway to AWS and use automatic tape creation, specifying an Amazon S3 Glacier Deep Archive pool. Eject the tape once the batch of images is copied.
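For reference, DataSync natively supports both the bandwidth limiting and the transfer scheduling that the question calls for, which is relevant to options A, B, and C. Below is a minimal boto3 sketch of such a task; the account ID, location ARNs, task name, throughput cap, and cron window are all hypothetical. The Glacier Deep Archive target in option B would be set on the S3 location itself (create_location_s3 with S3StorageClass='DEEP_ARCHIVE'), not on the task.

```python
import boto3

# A minimal sketch, assuming hypothetical location ARNs already exist for
# the on-premises NFS server (reachable through the DataSync agent) and
# the S3 destination.
datasync = boto3.client("datasync", region_name="us-east-1")

response = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source-nfs",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest-s3",
    Name="nightly-image-archive",
    Options={
        # Cap throughput (here ~500 MB/s, an assumed value) so the
        # 10 Gbps Direct Connect link is not saturated.
        "BytesPerSecond": 500 * 1024 * 1024,
    },
    Schedule={
        # Run daily at 02:00 UTC, i.e., during off-peak hours.
        "ScheduleExpression": "cron(0 2 * * ? *)",
    },
)
print(response["TaskArn"])
```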
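Option C additionally relies on an S3 Lifecycle rule to move objects from S3 Standard to S3 Glacier Deep Archive after one day. A sketch of what that rule could look like in boto3, with a hypothetical bucket name:

```python
import boto3

# A minimal sketch of the lifecycle rule described in option C, assuming a
# hypothetical bucket name. Objects transition to Deep Archive one day
# after creation.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-image-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-one-day",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 1, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```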