
Answer-first summary for fast verification
Answer: C - Use Amazon Bedrock with FM access and fine-tuning support
**Amazon Bedrock** is the correct choice because:

1. **No model training required** - Bedrock provides access to pre-trained foundation models from leading AI companies (Anthropic, AI21 Labs, Cohere, Meta, and others)
2. **No GPU resource management** - AWS manages all the underlying infrastructure, including GPU resources
3. **Multiple foundation models available** - Bedrock offers a choice of foundation models through a single API
4. **Optional fine-tuning support** - Bedrock supports fine-tuning for customization when needed
5. **Quick integration** - Bedrock provides APIs that can be integrated into existing applications with minimal setup

**Why the other options are incorrect:**

- **A (ECS with autoscaling)**: Requires deploying and managing your own models and GPU resources, which contradicts the requirements
- **B (SageMaker JumpStart)**: Provides pre-built models, but still requires more deployment setup and infrastructure management than Bedrock
- **D (Amazon Kendra)**: Kendra is an intelligent search service, not a text generation service, and doesn't provide LLM capabilities
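The "quick integration through a single API" point can be illustrated with a minimal sketch using boto3's `bedrock-runtime` client and the Anthropic Claude messages format on Bedrock. The model ID, region, and prompt below are illustrative assumptions; substitute whichever model your account has access enabled for.

```python
import json


# Example model ID - replace with a model enabled in your Bedrock account.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"


def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the Anthropic messages-format body that Bedrock's InvokeModel expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def generate(prompt: str, region: str = "us-east-1") -> str:
    """Call Bedrock to generate text.

    Requires AWS credentials and model access granted in the account;
    no model training or GPU management is involved on the caller's side.
    """
    import boto3  # imported here so the payload helper works without the SDK

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Because every model sits behind the same `InvokeModel` API, switching foundation models later is mostly a matter of changing the model ID and request body shape, which is what makes Bedrock a good fit for the "multiple foundation models" requirement.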
Author: Ritesh Yadav
A machine learning team wants to integrate LLM capabilities into an existing app quickly. Their requirements: no model training, no GPU resource management, multiple foundation models available, and the option to fine-tune in the future. Which approach should they use?
A. Deploy their own models on ECS with autoscaling
B. Use Amazon SageMaker JumpStart for deployment
C. Use Amazon Bedrock with FM access and fine-tuning support
D. Use Amazon Kendra for text generation