Which solution will meet these requirements?
A. Deploy a large, complex reasoning model on Amazon Bedrock. Purchase provisioned throughput and optimize for batch processing.
B. Deploy a low-latency, real-time optimized model on Amazon Bedrock. Purchase provisioned throughput and set up automatic scaling policies.
C. Deploy a large language model (LLM) on an Amazon SageMaker real-time endpoint that uses dedicated GPU instances.
D. Deploy a mid-sized language model on an Amazon SageMaker serverless endpoint that is optimized for batch processing.