
A machine learning team wants to integrate LLM capabilities into an existing app quickly. Their requirements are: no model training, no GPU resource management, a choice of multiple foundation models, and the option to fine-tune in the future. Which approach should they use?
A
Deploy and manage their own models on ECS with autoscaling
B
Use Amazon SageMaker JumpStart for deployment
C
Use Amazon Bedrock with FM access and fine-tuning support
D
Use Amazon Kendra for text generation
Explanation:
Amazon Bedrock is the correct choice because:
No model training required - Bedrock provides access to pre-trained foundation models from leading AI companies (Anthropic, AI21 Labs, Cohere, Meta, etc.)
No GPU resource management - AWS manages all the underlying infrastructure, including GPU resources
Multiple foundation models available - Bedrock offers a choice of various foundation models through a single API
Optional fine-tuning support - Bedrock supports fine-tuning capabilities for customization when needed
Quick integration - Bedrock provides APIs that can be easily integrated into existing applications
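To make the "quick integration" point concrete, here is a minimal sketch of calling a Bedrock foundation model through the unified Converse API in boto3. The model ID, region, and prompt are illustrative placeholders; the live call is shown commented out since it requires AWS credentials and model access enabled in the account.

```python
import json

def build_converse_request(prompt: str) -> dict:
    """Build the request body for the bedrock-runtime Converse API.

    The same request shape works across Bedrock's foundation models;
    switching providers is just a change of modelId.
    """
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.2},
    }

request = build_converse_request("Summarize our return policy in one sentence.")

# With credentials configured, invoking the model is a single API call -
# no endpoints to provision, no GPUs to manage:
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])

print(json.dumps(request, indent=2))
```

Because every model sits behind the same API, the app can swap or A/B-test foundation models without changing its integration code.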
Why the other options are incorrect:
A (ECS with autoscaling): Requires deploying and managing your own models and GPU infrastructure, which directly contradicts the stated requirements
B (SageMaker JumpStart): While it provides pre-built models, each model is deployed to a SageMaker endpoint whose instances you must select, scale, and pay for, so it involves more infrastructure setup and management than Bedrock's serverless API
D (Amazon Kendra): Kendra is an intelligent enterprise search service, not a text generation service, and does not provide LLM capabilities