
A law firm wants to ensure its AI assistant never produces rude, harmful, or hallucinated responses. The firm needs a way to explicitly specify what the model should avoid. What should it use?
A. Zero-shot prompting
B. Chain-of-thought prompting
C. Negative prompting
D. Persona prompting
Explanation:
Negative prompting is the correct answer because it allows users to explicitly specify what the AI model should avoid generating. The technique involves stating what NOT to include in a response, which is exactly the mechanism the firm needs to prevent rude, harmful, or hallucinated content.
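As a concrete illustration, here is a minimal sketch of a negative prompt wired into a chat-style message list. The system/user message format, the assistant wording, and the case name are illustrative assumptions, not details from the question:

```python
# A minimal sketch of negative prompting: the system message explicitly
# lists behaviors the model must avoid. The system/user message split
# follows the common chat-API convention as an illustrative assumption;
# any chat interface with a system prompt would work the same way.

NEGATIVE_PROMPT = (
    "You are a legal research assistant.\n"
    "Do NOT use rude, dismissive, or sarcastic language.\n"
    "Do NOT provide harmful, unethical, or dangerous advice.\n"
    "Do NOT invent case law, statutes, or citations; if you are unsure, "
    "say so explicitly instead of guessing."
)

def build_messages(user_question: str) -> list[dict]:
    """Pair the negative prompt (as a system message) with the user's question."""
    return [
        {"role": "system", "content": NEGATIVE_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    # "Smith v. Jones" is a made-up case name used purely for illustration.
    for msg in build_messages("Summarize the key holdings in Smith v. Jones."):
        print(f"[{msg['role']}] {msg['content']}\n")
```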
Why the other options are incorrect:
A. Zero-shot prompting: This involves giving the model a task without any examples. It doesn't specifically address what to avoid.
B. Chain-of-thought prompting: This technique encourages the model to show its reasoning step-by-step, which helps with complex problem-solving but doesn't directly address content filtering.
D. Persona prompting: This assigns the AI a specific character or role to play, which can influence the style of responses but doesn't explicitly define what content to avoid. The sketch after this list contrasts all three techniques.
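For contrast, a minimal sketch of the three incorrect techniques as prompt templates; the task text and the exact template wording are illustrative assumptions:

```python
# Minimal sketches of the three incorrect options, shown side by side to
# make the contrast with negative prompting concrete. None of these
# templates tells the model what content to avoid.

TASK = "Explain the enforceability of a non-compete clause."

# A. Zero-shot: the bare task, with no examples and no avoidance rules.
zero_shot = TASK

# B. Chain-of-thought: asks for step-by-step reasoning, not content limits.
chain_of_thought = f"{TASK}\nThink through the relevant factors step by step before answering."

# D. Persona: sets a role/voice, which shapes style but not forbidden content.
persona = f"You are a seasoned contracts attorney. {TASK}"

for name, prompt in [("zero-shot", zero_shot),
                     ("chain-of-thought", chain_of_thought),
                     ("persona", persona)]:
    print(f"--- {name} ---\n{prompt}\n")
```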
Key takeaway: Negative prompting is particularly useful in professional settings such as law firms, where controlling output quality and preventing inappropriate content is critical. It provides a direct mechanism for setting boundaries on AI behavior.