
A company wants to tailor an existing foundation model on Amazon Bedrock to generate product descriptions aligned with its brand tone. They already have thousands of labeled examples. Which approach is best suited?
A
Reinforcement learning
B
Fine-tuning
C
Few-shot prompting
D
Zero-shot inference
Explanation:
Fine-tuning (Option B) is the correct approach because:
Large labeled dataset: The company has "thousands of labeled examples," which is ideal for fine-tuning. Fine-tuning adjusts the model's weights and needs substantial labeled data to learn task-specific patterns.
Tailoring to a specific style: The company wants product descriptions aligned with its brand tone, which requires the model to learn stylistic patterns that fine-tuning can capture.
Amazon Bedrock support: Amazon Bedrock supports fine-tuning of foundation models, allowing customers to customize models with their own data.
Why other options are not suitable:
Reinforcement learning (A): Typically used for training models through reward-based feedback, not for adapting a foundation model to a specific writing style with labeled examples.
Few-shot prompting (C): Uses a small number of examples embedded in the prompt itself. While useful, it is less effective than fine-tuning when thousands of labeled examples are available, since only a handful of examples fit in each prompt and the model's weights never change.
Zero-shot inference (D): Uses the model without any examples, which wouldn't capture the specific brand tone the company wants.
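To make the contrast concrete, few-shot prompting means embedding a handful of labeled examples directly in the request rather than updating any weights. A minimal sketch follows; the prompt wording, the example products, and the model ID are illustrative assumptions, not part of the question:

```python
def build_few_shot_prompt(examples, product_name):
    """Build a few-shot prompt: a handful of (name, description) pairs
    shown inline, followed by the new product to describe.

    Unlike fine-tuning, only a few examples fit here -- the rest of the
    company's thousands of labeled examples go unused.
    """
    lines = ["Write a product description that matches the brand tone "
             "of these examples.", ""]
    for name, desc in examples:
        lines += [f"Product: {name}", f"Description: {desc}", ""]
    lines += [f"Product: {product_name}", "Description:"]
    return "\n".join(lines)


def generate_description(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send the prompt through Bedrock's Converse API.

    Requires AWS credentials and boto3; the model ID above is just an
    illustrative choice.
    """
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

With this sketch, `build_few_shot_prompt([("Trail Mug", "Built for mud and rain.")], "Camp Lantern")` yields a prompt ending in an open `Description:` for the model to complete.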
Fine-tuning allows the foundation model to learn from the company's specific examples and produce outputs that match their brand voice consistently, making it the most effective approach given their resources.
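In practice, fine-tuning on Amazon Bedrock is run as a model customization job: the labeled examples are uploaded to S3 as JSONL, and a job is created against a base model. A minimal boto3 sketch, assuming placeholder bucket paths, role ARN, base model ID, and hyperparameter values:

```python
def make_fine_tuning_job_params(job_name, custom_model_name, role_arn,
                                training_s3_uri, output_s3_uri):
    """Assemble parameters for a Bedrock fine-tuning job.

    The training data at `training_s3_uri` is a JSONL file of
    {"prompt": ..., "completion": ...} records -- the company's labeled
    product/description pairs. Base model and hyperparameter values
    below are placeholders.
    """
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": "amazon.titan-text-express-v1",
        "customizationType": "FINE_TUNING",
        "trainingDataConfig": {"s3Uri": training_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {
            "epochCount": "2",
            "learningRate": "0.00001",
            "batchSize": "1",
        },
    }


def start_fine_tuning_job(params):
    """Submit the job to Bedrock (requires AWS credentials and boto3)."""
    import boto3

    bedrock = boto3.client("bedrock")
    return bedrock.create_model_customization_job(**params)
```

Once the job completes, the resulting custom model can be invoked like any other Bedrock model, so every generated description reflects the brand tone learned from the labeled data.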