
**Answer:** Implement moderation APIs.
## Detailed Explanation

To prevent a chatbot from returning inappropriate or unwanted images, the most effective and direct solution is to implement moderation APIs. Here's the reasoning:

### Why Option A (Implement moderation APIs) Is Correct

1. **Real-time Content Filtering**: Moderation APIs (such as Amazon Rekognition Content Moderation) analyze images in real time before they are returned to users, detecting explicit, offensive, or harmful content across predefined categories (e.g., violence, nudity, hate symbols).
2. **AWS Best Practice**: For AI applications that generate or handle user content, AWS recommends using built-in moderation services to ensure safety and compliance, in line with the AWS Well-Architected Framework's security and reliability principles.
3. **Scalability and Efficiency**: Automated moderation scales with the chatbot's usage, unlike manual review, and reduces operational overhead while maintaining consistent filtering standards.
4. **Proactive Prevention**: Because moderation runs at the response-generation stage, inappropriate images are blocked before they reach users, minimizing exposure risk and potential reputational damage.

### Why the Other Options Are Less Suitable

- **Option B (Retrain the model with a general public dataset)**: Retraining might improve overall model behavior, but it cannot guarantee that inappropriate outputs are prevented in every case; generative models can still produce undesirable content due to biases or edge cases in the training data. This approach is also reactive, unreliable for real-time filtering, and resource-intensive compared with moderation APIs.
- **Option C (Perform model validation)**: Validation assesses model performance during development but does not actively filter content in production. It is a pre-deployment quality check, not ongoing protection against inappropriate outputs during user interactions.
- **Option D (Automate user feedback integration)**: This relies on post-hoc user reports, meaning inappropriate images have already been exposed to users before any action is taken. It is reactive rather than preventive and can cause harm before mitigation.

### Conclusion

Implementing moderation APIs provides a robust, automated, real-time way to screen image content, ensuring the chatbot adheres to safety guidelines without compromising user experience. This approach is widely adopted as an industry best practice for content moderation in AI-driven applications.
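As a minimal sketch of how such a moderation gate might work, the snippet below assumes the response shape returned by Amazon Rekognition's `DetectModerationLabels` API; the `should_block` helper is a hypothetical function introduced here for illustration, and the actual API call is shown commented out since it requires AWS credentials:

```python
# Sketch of a pre-response moderation gate. Assumes a Rekognition-style
# DetectModerationLabels response; should_block() is a hypothetical helper.

def should_block(moderation_response, min_confidence=80.0):
    """Return True if any moderation label meets the confidence threshold."""
    return any(
        label["Confidence"] >= min_confidence
        for label in moderation_response.get("ModerationLabels", [])
    )

# In production, the response would come from Rekognition, e.g.:
# import boto3
# rekognition = boto3.client("rekognition")
# response = rekognition.detect_moderation_labels(
#     Image={"Bytes": image_bytes}, MinConfidence=60
# )

# Example responses: one flagged image, one clean image.
flagged = {"ModerationLabels": [{"Name": "Violence", "Confidence": 95.2}]}
clean = {"ModerationLabels": []}

print(should_block(flagged))  # True: the image is withheld from the user
print(should_block(clean))    # False: the image is returned
```

The key design point is that the check runs *before* the image reaches the user, which is what makes this preventive rather than reactive.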
Author: LeetQuiz Editorial Team
A company has developed a chatbot that generates image responses to natural language queries. They need to prevent the chatbot from outputting offensive or undesirable images. What solution fulfills this requirement?
A. Implement moderation APIs.
B. Retrain the model with a general public dataset.
C. Perform model validation.
D. Automate user feedback integration.