A company has fine-tuned a custom model using an existing large language model (LLM) from Amazon Bedrock. The company needs to deploy this model to production to serve a consistent, steady rate of requests per minute.
What is the most cost-effective solution to meet these requirements?
A. Deploy the model by using an Amazon EC2 compute optimized instance.
B. Use the model with on-demand throughput on Amazon Bedrock.
C. Store the model in Amazon S3 and host the model by using AWS Lambda.
D. Purchase Provisioned Throughput for the model on Amazon Bedrock.