
Answer-first summary for fast verification
Answer: Safety Guardrail
The correct answer is A (Safety Guardrail) because safety guardrails enforce ethical or policy-based boundaries, blocking prohibited topic categories such as politics and returning a standard refusal message, which matches the requirement exactly. Security guardrails (B) protect the system against threats such as prompt injection or data leakage rather than restricting conversation topics. Contextual guardrails (C) keep responses relevant to a domain but do not enforce hard bans on specific subjects. Compliance guardrails (D) address legal or regulatory adherence, which is not the primary concern here.
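To make the idea concrete, below is a minimal sketch of a safety guardrail applied before the LLM is called. It assumes a hypothetical keyword-based is_political() check and a placeholder llm_generate() function; neither is part of any specific guardrail library, and a production system would typically use a trained topic classifier or a dedicated guardrail framework instead.

```python
# Minimal safety-guardrail sketch (assumptions: keyword check stands in for a
# real topic classifier, llm_generate() stands in for the actual LLM call).

REFUSAL_MESSAGE = (
    "Sorry, I cannot answer that. I am a chatbot that can only "
    "answer questions around insurance."
)

# Rough keyword list used only for illustration of the prohibited category.
POLITICAL_KEYWORDS = {"election", "politician", "political party", "vote"}


def is_political(question: str) -> bool:
    """Hypothetical stand-in for a topic classifier."""
    text = question.lower()
    return any(keyword in text for keyword in POLITICAL_KEYWORDS)


def llm_generate(question: str) -> str:
    """Placeholder for the real LLM call."""
    return f"[LLM answer to insurance question: {question}]"


def answer_with_guardrail(question: str) -> str:
    # Safety guardrail: block the prohibited category and return the
    # standard refusal message instead of invoking the LLM.
    if is_political(question):
        return REFUSAL_MESSAGE
    return llm_generate(question)


if __name__ == "__main__":
    print(answer_with_guardrail("Which politician should win the election?"))
    print(answer_with_guardrail("Does my policy cover flood damage?"))
```

The key design point is that the guardrail intercepts the request before generation, so the prohibited category never reaches the model at all.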
Author: LeetQuiz Editorial Team
A Generative AI Engineer is building a conversational chatbot for insurance queries using a large language model (LLM). To ensure the chatbot remains focused and adheres to company policy, it must refuse to answer any questions about politics and instead reply with a standard message: "Sorry, I cannot answer that. I am a chatbot that can only answer questions around insurance." Which type of framework should be used to implement this?
A. Safety Guardrail
B. Security Guardrail
C. Contextual Guardrail
D. Compliance Guardrail