
A product team wants an LLM to generate secure coding guidelines for internal developers. Zero-shot outputs contain outdated security patterns. Which prompting approach best ensures the model reflects modern best practices?
A. Provide recent secure-code examples and ask the LLM to generalize (few-shot)
B. Ask the LLM to "think step-by-step" using chain-of-thought
C. Role-play as a "Security Auditor" without examples
D. Use a sampling temperature of 1.2 for creativity
Explanation:
The correct answer is A because:
Few-shot prompting with recent examples gives the LLM concrete, up-to-date patterns of secure coding practice that it can generalize from.
The fact that zero-shot outputs contain outdated security patterns indicates the model's base knowledge is not current, so supplying recent examples directly addresses that gap.
Chain-of-thought (B) helps with reasoning but does not supply the model with updated security knowledge.
Role-playing (C) without examples still relies on the model's existing knowledge, which is exactly what is outdated.
A higher temperature (D) only increases randomness and does nothing to ensure modern security practices are followed.
Key takeaway: for domain-specific, time-sensitive knowledge such as security best practices, providing recent examples (few-shot prompting) is the most effective way to steer the LLM toward current standards; a minimal sketch follows below.
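
To make the few-shot idea concrete, here is a minimal Python sketch of how recent, vetted secure-code examples might be packed into the prompt before the request is sent. The example snippets, the build_few_shot_prompt helper, and the call_llm placeholder are illustrative assumptions, not a specific vendor's API or the only way to structure such a prompt.

```python
# Minimal sketch of few-shot prompting for secure coding guidelines.
# RECENT_EXAMPLES, build_few_shot_prompt, and call_llm are hypothetical
# placeholders; substitute your own vetted examples and provider client.

RECENT_EXAMPLES = [
    {
        "title": "Password hashing (current practice)",
        "code": "from argon2 import PasswordHasher\n"
                "ph = PasswordHasher()\n"
                "hash_ = ph.hash(user_password)",
        "note": "Use a memory-hard KDF such as Argon2id instead of MD5/SHA-1.",
    },
    {
        "title": "SQL access (current practice)",
        "code": 'cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
        "note": "Use parameterized queries; never build SQL by string concatenation.",
    },
]


def build_few_shot_prompt(task: str) -> str:
    """Prepend recent secure-code examples so the model generalizes from
    current patterns instead of relying on possibly stale base knowledge."""
    parts = [
        "You are drafting secure coding guidelines for internal developers.",
        "Base your guidance on the up-to-date examples below.",
        "",
    ]
    for i, ex in enumerate(RECENT_EXAMPLES, start=1):
        parts.append(f"Example {i}: {ex['title']}")
        parts.append(ex["code"])
        parts.append(f"Why: {ex['note']}")
        parts.append("")
    parts.append(f"Task: {task}")
    return "\n".join(parts)


def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's chat/completion call here,
    # typically with a low temperature (e.g. 0.2) for consistent guidance.
    raise NotImplementedError


if __name__ == "__main__":
    print(build_few_shot_prompt(
        "Write guidelines for handling user-supplied file uploads."
    ))
```

The key design choice is that the current, reviewed examples travel inside the prompt on every request, so the guidance reflects them even when the model's training data predates them; this is the gap that chain-of-thought, role-play, or a higher temperature alone cannot close.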