
Answer-first summary for fast verification
Answer: B) Fine-tuning
**Explanation:** Fine-tuning is the most appropriate approach in this scenario because:

1. **Large labeled dataset**: The company has "thousands of labeled examples," which is sufficient for fine-tuning a foundation model.
2. **Specific domain adaptation**: The goal is to tailor the model to generate product descriptions with a specific brand tone, which requires learning from domain-specific examples.
3. **Amazon Bedrock support**: Amazon Bedrock supports fine-tuning of foundation models, allowing customization for specific use cases.

**Why the other options are less suitable:**

- **A) Reinforcement learning**: Typically used to optimize model behavior through reward signals, not for initial domain adaptation with labeled data.
- **C) Few-shot prompting**: Puts a few examples in the prompt, but with thousands of examples available, fine-tuning is more effective for a consistent brand tone.
- **D) Zero-shot inference**: Provides no examples to the model, so it would not help achieve the specific brand tone requirement.

Fine-tuning lets the foundation model learn the specific patterns, terminology, and style from the company's labeled examples, resulting in better-aligned product descriptions.
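As a concrete sketch of step 1 above: Bedrock model customization jobs typically consume training data as JSON Lines, one prompt/completion record per line (the exact schema varies by base model, so check the model provider's documentation). The product attributes and descriptions below are invented for illustration.

```python
import json

# Hypothetical labeled examples: product attributes paired with
# on-brand descriptions written by the company's copywriters.
labeled_examples = [
    ("insulated steel water bottle, 750 ml",
     "Stay refreshed, stay bold. Our 750 ml insulated bottle keeps "
     "drinks icy for 24 hours, adventure-ready, just like you."),
    ("organic cotton tote bag",
     "Carry more, waste less. This organic cotton tote pairs everyday "
     "durability with our signature laid-back style."),
]

# Serialize to JSON Lines in the prompt/completion shape commonly used
# for Bedrock fine-tuning data (the schema may differ per base model).
records = [
    {"prompt": f"Write a product description for: {attrs}",
     "completion": desc}
    for attrs, desc in labeled_examples
]
training_jsonl = "\n".join(json.dumps(r) for r in records)

print(training_jsonl.splitlines()[0])
```

In practice the resulting file would be uploaded to Amazon S3 and referenced when starting a customization job (for example, via the boto3 `bedrock` client's `create_model_customization_job` with `customizationType="FINE_TUNING"`). With only a handful of records, few-shot prompting would be the better fit; thousands of examples are what justify a fine-tuning job.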
Author: Jin H
A company wants to tailor an existing foundation model on Amazon Bedrock to generate product descriptions aligned with its brand tone. They already have thousands of labeled examples. Which approach is best suited?
A) Reinforcement learning
B) Fine-tuning
C) Few-shot prompting
D) Zero-shot inference