
Answer-first summary for fast verification
Answer: Chain-of-thought prompting
The AI practitioner is using **Chain-of-Thought (CoT) prompting**. This technique structures prompts to encourage large language models (LLMs) to break complex problems into intermediate reasoning steps before arriving at a final answer. In this scenario, the practitioner explicitly asks the model to "show its work by explaining its reasoning step by step" when solving numerical reasoning challenges. This matches CoT prompting, which improves the model's handling of multi-step problems by eliciting human-like reasoning traces.

**Why Chain-of-Thought Prompting is Correct:**

- **Step-by-Step Reasoning**: CoT prompting explicitly requests that the model articulate intermediate steps, which is exactly what the appended phrase does.
- **Improved Accuracy**: For numerical or logical problems, this technique helps reduce errors by making the reasoning process transparent and verifiable.
- **Best Practice in AI**: CoT is a well-established prompt engineering method, particularly useful for tasks requiring structured problem-solving, such as mathematical challenges.

**Why Other Options Are Incorrect:**

- **B: Prompt Injection**: This refers to malicious attempts to manipulate a model's output by inserting unauthorized instructions; it is not relevant here because the practitioner is legitimately guiding the model.
- **C: Few-Shot Prompting**: This involves providing worked examples within the prompt to demonstrate desired outputs, but the scenario does not mention any examples being included.
- **D: Prompt Templating**: This involves creating reusable prompt structures with placeholders, but the focus here is on the specific instruction for step-by-step reasoning, not on reusable formatting.

Thus, based on AWS AI best practices and the definitions of these prompt engineering techniques, Chain-of-Thought prompting is the correct choice.
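The technique described above can be sketched in a few lines of code. This is a minimal, illustrative example of constructing a chain-of-thought prompt by appending a step-by-step instruction to a base question; the helper name and instruction wording are assumptions for illustration, not part of any AWS or Amazon Bedrock API. The resulting string would then be sent to the hosted Titan model as the prompt text.

```python
# Illustrative sketch of chain-of-thought prompting: append an explicit
# step-by-step reasoning instruction to the end of a base prompt.
# COT_SUFFIX and with_chain_of_thought are hypothetical names, not AWS APIs.

COT_SUFFIX = "Show your work by explaining your reasoning step by step."


def with_chain_of_thought(prompt: str) -> str:
    """Return the prompt with a chain-of-thought instruction appended."""
    return f"{prompt.rstrip()}\n\n{COT_SUFFIX}"


base = "A train travels 120 km in 90 minutes. What is its average speed in km/h?"
cot_prompt = with_chain_of_thought(base)
print(cot_prompt)
```

Note how the base question is unchanged; only the trailing instruction differs. That is what distinguishes this from few-shot prompting (which would prepend worked examples) and prompt templating (which would define a reusable structure with placeholders).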
Author: LeetQuiz Editorial Team
An AI practitioner is using Amazon Bedrock to host an Amazon Titan model for solving numerical reasoning challenges. They append the phrase “Ask the model to show its work by explaining its reasoning step by step” to the end of their prompt.
Which prompt engineering technique is being applied?
A. Chain-of-thought prompting
B. Prompt injection
C. Few-shot prompting
D. Prompt templating