
Answer-first summary for fast verification
Answer: Add a role description to the prompt context that instructs the model of the age range that the response should target.
## Detailed Explanation

### Question Analysis

The question asks for the solution with the **LEAST implementation effort** to automatically adjust a generative AI model's response style based on the user's age range. The key constraints are:

- The response style (complexity, tone, vocabulary) must adapt to different age groups.
- The age range information is provided to the model.
- Minimal implementation effort is the primary criterion.

### Evaluation of Options

**A: Fine-tune the model using additional training data**

- **Why it's less suitable**: Fine-tuning requires significant effort, including:
  - Curating large datasets for each age group
  - Setting up training infrastructure and computational resources
  - Model retraining and validation cycles
  - Ongoing maintenance as age groups or requirements change
- This approach is resource-intensive and contradicts the "least effort" requirement.

**B: Add a role description to the prompt context**

- **Why it's optimal**: This leverages prompt engineering, which:
  - Requires minimal changes to existing infrastructure
  - Can be implemented immediately, without model retraining
  - Allows dynamic adjustment by simply modifying the prompt
  - Is cost-effective, with no additional training costs
- Example prompt: "Explain this concept to a 10-year-old child" or "Provide a detailed explanation suitable for a university student"
- This approach directly uses the provided age range information and guides the model's response style through contextual instructions.
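A minimal sketch of option B in plain Python: select a role description by age range and prepend it to the user's question. The age buckets and role wording below are illustrative assumptions, not part of the original question.

```python
# Map each supported age range to a role description that steers
# the model's tone, vocabulary, and complexity.
# These buckets and phrasings are illustrative assumptions.
ROLE_DESCRIPTIONS = {
    "child": (
        "You are a friendly teacher speaking to a 10-year-old. "
        "Use short sentences, simple words, and everyday analogies."
    ),
    "teen": (
        "You are a tutor speaking to a high-school student. "
        "Use clear explanations with moderate technical vocabulary."
    ),
    "adult": (
        "You are an expert speaking to a university-educated adult. "
        "Provide a precise, detailed explanation."
    ),
}


def build_prompt(question: str, age_range: str) -> str:
    """Prepend the role description for the user's age range to the question."""
    # Fall back to the adult style if the age range is unrecognized.
    role = ROLE_DESCRIPTIONS.get(age_range, ROLE_DESCRIPTIONS["adult"])
    return f"{role}\n\nQuestion: {question}"


print(build_prompt("Why is the sky blue?", "child"))
```

No model changes are needed: the same deployed model receives a different prompt per request, which is exactly why this option carries the least implementation effort.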
**C: Use chain-of-thought reasoning**

- **Why it's less suitable**: While chain-of-thought prompting can help with reasoning, it:
  - Adds complexity to prompt design
  - May require multiple inference steps
  - Doesn't directly address style adaptation based on age
  - Requires more implementation effort than simple prompt engineering

**D: Summarize the response text depending on user age**

- **Why it's less suitable**: This approach:
  - Requires post-processing logic after model generation
  - May oversimplify or lose important content
  - Changes only the response length, not the model's tone or vocabulary
  - Adds complexity with summarization algorithms

### Best Practice Justification

In AWS AI/ML solutions, prompt engineering is recognized as a low-effort, high-impact approach for adapting generative AI models. Amazon Bedrock and related services emphasize prompt engineering as a first-line technique for customizing model behavior without retraining. Option B aligns with AWS best practices for minimizing implementation complexity while achieving the desired outcome.

### Conclusion

Option B is the most efficient solution because it uses the model's existing capabilities through strategic prompt design, requires no model modifications or additional training, and can be implemented immediately with minimal development effort.
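If the application runs on Amazon Bedrock, the role description fits naturally into the `system` field of the Converse API. The sketch below only assembles the request payload; the model ID and wording are placeholder assumptions, and actually invoking the model requires boto3 and AWS credentials.

```python
# Sketch: pass the age-targeted role description as a system prompt
# via Amazon Bedrock's Converse API. The model ID is a placeholder.
def build_converse_request(question: str, role_description: str) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        "system": [{"text": role_description}],
        "messages": [
            {"role": "user", "content": [{"text": question}]},
        ],
    }


# Usage (requires boto3 and AWS credentials; not executed here):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_converse_request(
#     "Why is the sky blue?",
#     "Explain for a 10-year-old using simple words and a short analogy.",
# ))
# print(response["output"]["message"]["content"][0]["text"])
```

Because the role description is just another request parameter, switching age ranges is a per-request string change rather than a deployment or training change.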
Author: LeetQuiz Editorial Team
An education provider is developing a Q&A application that uses a generative AI model to explain complex concepts. They need to automatically adjust the response style based on the age range of the user asking the question. Which solution meets these requirements with the LEAST implementation effort?
**A.** Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.

**B.** Add a role description to the prompt context that instructs the model of the age range that the response should target.

**C.** Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.

**D.** Summarize the response text depending on the age of the user so that younger users receive shorter responses.