A legal firm wants to ensure their AI assistant never produces rude, harmful, or hallucinated responses. They want a way to explicitly specify what the model should avoid. What should they use?
A. Zero-shot prompting
B. Chain-of-thought prompting
C. Negative prompting
D. Persona prompting
Explanation:
Negative prompting is the correct answer because it lets users explicitly specify what the AI model should avoid generating. The technique involves stating what NOT to include in responses, which is exactly what the legal firm needs to prevent rude, harmful, or hallucinated content.
Why other options are incorrect:
A. Zero-shot prompting means asking the model to perform a task without providing examples; it says nothing about what the model should avoid.
B. Chain-of-thought prompting elicits step-by-step reasoning to improve answer quality on complex problems; it does not restrict output content.
D. Persona prompting assigns the model a role or character, which shapes tone and perspective but does not explicitly forbid specific kinds of content.
Key takeaway: Negative prompting is particularly useful in professional settings like legal firms where controlling the output quality and preventing inappropriate content is critical. It provides a direct mechanism to set boundaries on AI behavior.
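To make the idea concrete, here is a minimal sketch of how a negative prompt might be assembled. The `build_system_prompt` helper and the constraint list are illustrative assumptions, not any specific vendor's API; the resulting string would be passed as the system prompt to whatever model the firm uses.

```python
# Negative prompting sketch: the system prompt explicitly lists
# behaviours the model must avoid. The constraint list and helper
# below are hypothetical, for illustration only.

AVOID = [
    "rude or dismissive language",
    "harmful or dangerous advice",
    "fabricated case law, citations, or statutes (hallucinations)",
]

def build_system_prompt(role: str, avoid: list[str]) -> str:
    """Combine a role description with explicit negative constraints."""
    constraints = "\n".join(f"- Do NOT produce {item}." for item in avoid)
    return (
        f"You are {role}.\n"
        "Follow these restrictions strictly:\n"
        f"{constraints}\n"
        "If you are unsure of a fact, say so instead of guessing."
    )

prompt = build_system_prompt("a legal research assistant", AVOID)
print(prompt)
```

Note that each constraint is phrased as an explicit prohibition ("Do NOT produce ...") rather than a positive instruction, which is the defining trait of negative prompting.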