
A research team needs an LLM on Amazon Bedrock to explain its reasoning steps when solving complex mathematical problems. Which prompting technique encourages the model to show intermediate reasoning?
A
Output refinement prompting
B
Chain-of-thought prompting
Explanation:
Chain-of-thought prompting is the correct technique for encouraging LLMs to show intermediate reasoning steps when solving complex problems.
Chain-of-thought prompting: instructs the model to work through intermediate reasoning steps before giving its final answer, making the solution process visible.
Output refinement prompting: focuses on iteratively improving or polishing the final output, not on exposing the reasoning that produced it.
Instead of asking "What is 15% of 200?", you would ask: "Let's think step by step. What is 15% of 200?"
The model would then respond with something like: "First, 15% means 15/100 = 0.15. Then, 0.15 × 200 = 30. So 15% of 200 is 30."
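The pattern above can be sketched in code. This is a minimal illustration, not a definitive implementation: the helper names (`build_cot_prompt`, `build_converse_request`) and the model ID are assumptions, and the request shape follows Amazon Bedrock's Converse API.

```python
import json

def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step cue so the model shows intermediate reasoning."""
    return f"Let's think step by step. {question}"

def build_converse_request(question: str) -> dict:
    """Build a request body in the shape expected by Bedrock's Converse API."""
    return {
        "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        "messages": [
            {"role": "user", "content": [{"text": build_cot_prompt(question)}]}
        ],
    }

request = build_converse_request("What is 15% of 200?")
print(json.dumps(request, indent=2))

# To actually send it (requires boto3 and AWS credentials):
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
```

The only change from a plain prompt is the "Let's think step by step" cue, which is what elicits the intermediate arithmetic shown above.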
This approach is particularly valuable in Amazon Bedrock for research and development purposes where understanding the model's reasoning process is crucial.