
A machine learning team wants to integrate LLM capabilities into an existing app quickly. Their requirements: no model training, no GPU resource management, multiple foundation models available, and optional fine-tuning in the future. Which approach should they use?
A. Deploy their own models in ECS with autoscaling
B. Use Amazon SageMaker JumpStart for deployment
C. Use Amazon Bedrock with FM access and fine-tuning support
D. Use Amazon Kendra for text generation
Explanation:
Amazon Bedrock is the correct choice because:
No model training required - Bedrock provides access to pre-trained foundation models from leading AI companies (Anthropic, AI21 Labs, Cohere, Stability AI, etc.)
No GPU resource management - Bedrock is a fully managed service, so AWS handles all infrastructure, scaling, and GPU management
Multiple foundation models available - Bedrock offers access to various foundation models through a single API
Optional fine-tuning in the future - Bedrock supports fine-tuning capabilities for customizing models to specific use cases
Quick integration - Bedrock exposes simple APIs that an existing application can call through the AWS SDK (see the sketch after this list)
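To make that last point concrete, here is a minimal sketch of calling a Bedrock model from Python with boto3. The region, model ID, and prompt are placeholder assumptions; use whichever model is enabled in your account.

```python
import json
import boto3

# Bedrock is fully managed: no endpoints, instances, or GPUs to provision.
# Region and model ID are placeholders -- pick a model enabled in your account.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic Claude models on Bedrock accept the Messages request format.
request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize this support ticket in two sentences: ..."}
    ],
}

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps(request_body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Switching to a model from a different provider is largely a matter of changing the modelId and the request body format, which is what "multiple foundation models through a single API" means in practice.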
Why other options are incorrect:
A. Deploy their own models in ECS with autoscaling: This requires managing infrastructure, GPU resources, and model training
B. Use Amazon SageMaker JumpStart for deployment: JumpStart provides pre-built models, but it deploys them to SageMaker endpoints, so the team still chooses instance types and manages endpoint capacity; it is oriented toward the broader ML lifecycle rather than a fully managed foundation model API
D. Use Amazon Kendra for text generation: Kendra is an intelligent search service, not a foundation model service for text generation
Amazon Bedrock is specifically designed for scenarios where teams want to quickly integrate foundation model capabilities without the overhead of infrastructure management or model training.
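For the "optional fine-tuning in the future" requirement, Bedrock model customization also runs as a managed job rather than on self-managed training infrastructure. A hedged sketch, assuming a prepared JSONL training set in S3 and an IAM role that Bedrock can assume; all names, ARNs, and URIs below are placeholders:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# All identifiers below are placeholders for illustration only.
bedrock.create_model_customization_job(
    jobName="support-summarizer-ft-001",
    customModelName="support-summarizer-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```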