
A product team wants an LLM to generate secure coding guidelines for internal developers. Zero-shot outputs contain outdated security patterns. Which prompting approach best ensures the model reflects modern best practices?
A
Provide recent secure-code examples and ask the LLM to generalize (few-shot)
B
Ask the LLM to "think step-by-step" using chain-of-thought
C
Role-play as a "Security Auditor" without examples
D
Use a sampling temperature of 1.2 for creativity
Explanation:
When zero-shot prompting (providing no examples) results in outdated security patterns, the best approach is few-shot prompting (option A). Here's why:
Few-shot prompting provides concrete examples: by including recent, up-to-date secure code examples in the prompt, you directly show the model what modern best practices look like. It can then generalize from those specific examples to produce guidelines that reflect current security standards, rather than relying solely on patterns from its training data.
Chain-of-thought (option B) helps with reasoning but doesn't necessarily provide the model with updated information about modern security practices.
Role-playing (option C) might help with perspective but still lacks the concrete examples needed to update the model's knowledge about current security patterns.
Adjusting temperature (option D) affects creativity/randomness but doesn't address the core issue of outdated information.
Key takeaway: When dealing with domain-specific knowledge that requires current best practices (like security), providing recent examples (few-shot) is more effective than relying on the model's pre-existing knowledge, which may be outdated.
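The idea above can be sketched as a simple prompt-assembly function. This is a minimal illustration, not from any specific library: the function name, the example insecure/secure pairs, and the prompt wording are all hypothetical, chosen only to show how recent examples are embedded in a few-shot prompt before it is sent to an LLM.

```python
# Minimal sketch of few-shot prompt assembly for secure-coding guidelines.
# All names and example snippets below are illustrative assumptions.

def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt from (insecure, secure) example pairs."""
    parts = [
        "You are writing secure coding guidelines. Generalize from the "
        "recent examples below; they reflect current best practice.\n"
    ]
    for i, (insecure, secure) in enumerate(examples, 1):
        parts.append(f"Example {i}:")
        parts.append(f"  Insecure: {insecure}")
        parts.append(f"  Secure:   {secure}")
    parts.append(f"\nTask: {task}")
    return "\n".join(parts)

# Hypothetical up-to-date examples (weak hashing -> scrypt,
# string-built SQL -> parameterized query):
examples = [
    ("hashlib.md5(pw.encode())",
     "hashlib.scrypt(pw.encode(), salt=salt, n=2**14, r=8, p=1)"),
    ('cursor.execute(f"SELECT * FROM users WHERE id = {uid}")',
     'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))'),
]

prompt = build_few_shot_prompt(examples, "Write password-storage guidelines.")
print(prompt)
```

The resulting string would then be sent to the LLM; the examples, not the model's pre-existing knowledge, anchor what "secure" means.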