
A research team needs an LLM on Amazon Bedrock to explain its reasoning steps when solving complex mathematical problems. Which prompting technique encourages the model to show intermediate reasoning?
A. Output refinement prompting
B. Chain-of-thought prompting
Explanation:
Chain-of-thought prompting is the correct technique for encouraging LLMs to show intermediate reasoning steps when solving complex problems.
Chain-of-thought prompting:
- Encourages the model to break complex problems down into intermediate steps
- Shows the reasoning process step by step before arriving at the final answer
- Is particularly effective for mathematical problems, logical reasoning, and multi-step tasks
- Helps improve accuracy and transparency in problem-solving
Output refinement prompting:
- Focuses on improving or polishing the final output
- Typically involves asking the model to revise or enhance its response
- Doesn't specifically encourage showing intermediate reasoning steps
Why chain-of-thought prompting fits this scenario:
- The research team needs the LLM to explain its reasoning steps
- They're working with complex mathematical problems that require step-by-step analysis
- Chain-of-thought prompting is specifically designed to make the model's thinking process transparent
- This technique helps in debugging, understanding model behavior, and improving problem-solving accuracy
Example:
Instead of asking: "What is 15% of 200?"
You would ask: "Let's think step by step. What is 15% of 200?"
The model would then respond with something like:
"First, 15% means 15/100 = 0.15.
Then, 0.15 × 200 = 30.
So 15% of 200 is 30."
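
Beyond the zero-shot trigger phrase above, chain-of-thought can also be elicited with a few-shot exemplar that demonstrates the desired reasoning format. Here is a minimal Python sketch; the exemplar text is illustrative, not taken from any particular dataset:

```python
# Few-shot chain-of-thought: one worked exemplar shows the model the
# step-by-step format, then the real question follows in the same shape.
exemplar = (
    "Q: What is 20% of 50?\n"
    "A: 20% means 20/100 = 0.20. Then 0.20 * 50 = 10. The answer is 10.\n\n"
)
question = "Q: What is 15% of 200?\nA:"

few_shot_cot_prompt = exemplar + question
```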
This approach is particularly valuable in Amazon Bedrock for research and development purposes where understanding the model's reasoning process is crucial.
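
As a concrete illustration, here is a minimal sketch of sending the chain-of-thought prompt from the example to a model on Amazon Bedrock using the boto3 Converse API. The region and model ID are placeholders; substitute a model that is enabled in your account:

```python
import boto3

# Placeholder region and model ID -- use a model enabled in your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            # "Let's think step by step" is the zero-shot
            # chain-of-thought trigger from the example above.
            "content": [{"text": "Let's think step by step. What is 15% of 200?"}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0},
)

# The reply should contain the intermediate steps, e.g.
# "15% means 15/100 = 0.15; 0.15 x 200 = 30; so the answer is 30."
print(response["output"]["message"]["content"][0]["text"])
```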