A company requires a cost-effective solution for running large batch-processing jobs on data stored in an Amazon S3 bucket. The jobs run simulations whose results are not time-sensitive, so the process can tolerate interruptions. Each job processes between 15 GB and 20 GB of data from the S3 bucket and writes its output to a separate S3 bucket for further analysis. Which solution is the most cost-effective way to meet these requirements?