
Answer-first summary for fast verification
Answer: Provide recent secure-code examples and ask the LLM to generalize (few-shot)
## Explanation

The correct answer is **A** because:

1. **Few-shot learning with recent examples** gives the LLM concrete, up-to-date patterns of secure coding practice that it can learn from and generalize.
2. **Zero-shot outputs contain outdated security patterns**, which indicates the model's base knowledge is not current; supplying recent examples directly addresses this gap.
3. **Chain-of-thought (B)** helps with reasoning but does not supply the model with updated security knowledge.
4. **Role-playing (C)** without examples relies on the model's existing knowledge, which is already outdated.
5. **Higher temperature (D)** increases creativity/randomness but does not ensure modern security practices are followed.

**Key takeaway**: For domain-specific, time-sensitive knowledge such as security best practices, providing recent examples (few-shot prompting) is the most effective way to steer the LLM toward current standards.
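To make the few-shot idea concrete, a prompt of this shape can be assembled programmatically. The snippets, wording, and helper below are illustrative assumptions, not an official corpus or API; each example pairs an outdated pattern with its modern replacement before asking the model to generalize:

```python
# Hypothetical "recent examples" corpus: insecure pattern -> secure replacement.
RECENT_EXAMPLES = [
    {
        "insecure": "password_hash = hashlib.md5(password.encode()).hexdigest()",
        "secure": "password_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())",
        "note": "MD5 is broken for password storage; use an adaptive hash like bcrypt.",
    },
    {
        "insecure": 'cursor.execute("SELECT * FROM users WHERE id = " + user_id)',
        "secure": 'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
        "note": "String concatenation enables SQL injection; use parameterized queries.",
    },
]

def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: show each insecure/secure pair with a
    rationale, then ask the model to generalize to guidelines."""
    parts = ["You are drafting secure coding guidelines. Study these recent examples:\n"]
    for i, ex in enumerate(examples, 1):
        parts.append(
            f"Example {i}\n"
            f"Insecure: {ex['insecure']}\n"
            f"Secure:   {ex['secure']}\n"
            f"Why: {ex['note']}\n"
        )
    parts.append(f"Task: {task}")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    RECENT_EXAMPLES,
    "Generalize these patterns into secure coding guidelines for internal developers.",
)
print(prompt)
```

The resulting string is sent as the user message to whichever LLM API the team uses; the key design choice is that current knowledge lives in the examples, not in the model's stale base knowledge.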
Author: Ritesh Yadav
A product team wants an LLM to generate secure coding guidelines for internal developers. Zero-shot outputs contain outdated security patterns. Which prompting approach best ensures the model reflects modern best practices?
A. Provide recent secure-code examples and ask the LLM to generalize (few-shot)
B. Ask the LLM to "think step-by-step" using chain-of-thought
C. Role-play as a "Security Auditor" without examples
D. Use sampling temperature of 1.2 for creativity