
Answer-first summary for fast verification
Answer: Chain-of-thought prompting
## Explanation

**Chain-of-thought prompting** is the correct technique for encouraging LLMs to show intermediate reasoning steps when solving complex problems.

### Key Differences

**Chain-of-thought prompting**:
- Encourages the model to break down complex problems into intermediate steps
- Shows the reasoning process step-by-step before arriving at the final answer
- Particularly effective for mathematical problems, logical reasoning, and multi-step tasks
- Helps improve accuracy and transparency in problem-solving

**Output refinement prompting**:
- Focuses on improving or polishing the final output
- Typically involves asking the model to revise or enhance its response
- Does not specifically encourage showing intermediate reasoning steps

### Why chain-of-thought is correct for this scenario

1. The research team needs the LLM to **explain its reasoning steps**
2. They are working with **complex mathematical problems** that require step-by-step analysis
3. Chain-of-thought prompting is specifically designed to make the model's thinking process transparent
4. This technique helps in debugging, understanding model behavior, and improving problem-solving accuracy

### Example of chain-of-thought prompting

Instead of asking:

> "What is 15% of 200?"

You would ask:

> "Let's think step by step. What is 15% of 200?"

The model would then respond with something like:

> "First, 15% means 15/100 = 0.15. Then, 0.15 × 200 = 30. So 15% of 200 is 30."

This approach is particularly valuable in Amazon Bedrock for research and development purposes where understanding the model's reasoning process is crucial.
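The example above can be sketched in code. The snippet below is a minimal illustration, not a definitive implementation: it prepends a zero-shot chain-of-thought cue ("Let's think step by step.") to a question and packages it in the message shape used by Bedrock's Converse API. The helper `build_cot_request` and the model ID are hypothetical, and the actual `boto3` call is left commented out since it requires AWS credentials and model access.

```python
# Sketch of chain-of-thought prompting for Amazon Bedrock.
# build_cot_request is a hypothetical helper; the model ID is illustrative.

COT_CUE = "Let's think step by step."

def build_cot_request(question: str,
                      model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> dict:
    """Wrap a question with a zero-shot chain-of-thought cue and
    package it as keyword arguments for Bedrock's Converse API."""
    prompt = f"{COT_CUE} {question}"
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
    }

request = build_cot_request("What is 15% of 200?")
print(request["messages"][0]["content"][0]["text"])

# To actually invoke the model (requires AWS credentials and model access):
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

Keeping the cue in one place makes it easy to compare a plain prompt against its chain-of-thought variant when evaluating reasoning quality.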
Author: Ritesh Yadav
A research team needs an LLM on Amazon Bedrock to explain its reasoning steps when solving complex mathematical problems. Which prompting technique encourages the model to show intermediate reasoning?
A. Output refinement prompting
B. Chain-of-thought prompting