
Answer-first summary for fast verification
Answer: Use only approved social media and news accounts to prevent unexpected toxic data from getting to the LLM.
The question asks for a guardrail to prevent toxic outputs from a system that sources information from social media and articles. Option B directly addresses the root cause by restricting data sources to approved social media and news accounts, preventing toxic content from entering the system in the first place. This is a proactive and effective approach, a conclusion supported by the community discussion, where B holds 75% consensus and the highest-upvoted comment. Option A (reducing context items) limits the amount of information considered but does not specifically target toxicity. Option C (monthly batch toxicity analysis) is reactive, detecting problems only after potentially harmful outputs have been served. Option D (rate limiting) controls usage volume but does nothing to address content toxicity.
Author: LeetQuiz Editorial Team
A Generative AI Engineer is developing a system to answer questions about current news events by sourcing information from articles and social media. They are concerned that toxic social media content could lead to toxic system outputs.
Which guardrail can be implemented to prevent toxic outputs?
A
Reduce the amount of context items the system will include in consideration for its response.
B
Use only approved social media and news accounts to prevent unexpected toxic data from getting to the LLM.
C
Log all LLM system responses and perform a batch toxicity analysis monthly.
D
Implement rate limiting.
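The allowlist guardrail described in option B can be sketched in code. The following is a minimal illustration, not a production implementation: the account names, item structure, and helper function are all hypothetical assumptions used only to show the idea of filtering retrieved content to approved sources before it reaches the LLM context.

```python
# Hypothetical sketch of option B's guardrail: keep only content from
# approved accounts before it is passed to the LLM as context.
# The allowlist entries and item fields below are illustrative assumptions.

APPROVED_ACCOUNTS = {"reuters", "apnews", "bbcnews"}  # hypothetical allowlist

def filter_context(items):
    """Return only items whose source account is on the allowlist."""
    return [
        item for item in items
        if item["account"].lower() in APPROVED_ACCOUNTS
    ]

retrieved = [
    {"account": "Reuters", "text": "Central bank holds rates steady."},
    {"account": "random_troll_42", "text": "(potentially toxic post)"},
]

safe_context = filter_context(retrieved)
# Only the item from the approved account remains in the context.
```

Because the filter runs before generation, unapproved content never enters the prompt, which is what makes this guardrail proactive rather than reactive.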