
Answer: Use Amazon Bedrock with FM access and fine-tuning support
## Explanation

Amazon Bedrock is the correct choice because:

1. **No model training required** - Bedrock provides access to pre-trained foundation models from leading AI companies (Anthropic, AI21 Labs, Cohere, Stability AI, and others).
2. **No GPU resource management** - Bedrock is a fully managed service; AWS handles all infrastructure, scaling, and GPU management.
3. **Multiple foundation models available** - Bedrock offers access to a variety of foundation models through a single API.
4. **Optional fine-tuning in the future** - Bedrock supports fine-tuning capabilities for customizing models to specific use cases.
5. **Quick integration** - Bedrock provides APIs that can be easily integrated into existing applications.

**Why the other options are incorrect:**

- **A. Deploy their own models in ECS with autoscaling**: This requires managing infrastructure, GPU resources, and model training.
- **B. Use Amazon SageMaker JumpStart for deployment**: While JumpStart provides pre-built models, it still involves some infrastructure management and targets the full ML lifecycle.
- **D. Use Amazon Kendra for text generation**: Kendra is an intelligent search service, not a foundation model service for text generation.

Amazon Bedrock is specifically designed for scenarios where teams want to quickly integrate foundation model capabilities without the overhead of infrastructure management or model training.
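To make the "quick integration" point concrete, here is a minimal sketch of what a Bedrock call looks like through the `bedrock-runtime` Converse API. The model ID is an illustrative assumption (any model the account has access to would work), and the boto3 call itself is shown in comments since it requires AWS credentials:

```python
import json

def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the keyword arguments for bedrock_runtime.converse(**kwargs).

    The model ID below is an example; substitute any foundation model
    your account has been granted access to in the Bedrock console.
    """
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# With boto3 installed and credentials configured, the call would be roughly:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**build_converse_request("Summarize this ticket"))
#   print(response["output"]["message"]["content"][0]["text"])

req = build_converse_request("Hello")
print(json.dumps(req, indent=2))
```

Because the same `messages` shape works across the models Bedrock exposes, switching providers is a matter of changing `modelId`, which is exactly the "multiple foundation models through a single API" property the question is testing.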
Author: Ritesh Yadav
A machine learning team wants to integrate LLM capabilities into an existing app quickly. Their requirements: no model training, no GPU resource management, multiple foundation models available, and the option to fine-tune in the future. Which approach should they use?
A
Deploy their own models in ECS with autoscaling
B
Use Amazon SageMaker JumpStart for deployment
C
Use Amazon Bedrock with FM access and fine-tuning support
D
Use Amazon Kendra for text generation