
Answer-first summary for fast verification
Answer: Hate, Violence
Amazon Bedrock Guardrails helps organizations implement safeguards for their generative AI applications by filtering harmful content in both prompts and model responses. Its content filters cover a fixed set of predefined categories, and of the options listed, two are supported:

- **A. Hate** - Guardrails can detect and filter content that promotes hatred, discrimination, or hostility based on protected characteristics such as race, ethnicity, religion, gender, or other attributes.
- **C. Violence** - Guardrails can identify and block content that depicts, glorifies, or incites violence, including physical harm, threats, or graphic descriptions of violent acts.

The remaining options (Politics, Gambling, Religion) are not standard filter categories in Bedrock Guardrails; while they may be relevant in specific applications, they are not part of the service's predefined content filters. Selecting Hate and Violence aligns with common AI safety frameworks that prioritize blocking content likely to cause real-world harm or violate ethical guidelines.
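As a concrete illustration, the sketch below assembles a request payload enabling the Hate and Violence filters in the shape used by the Bedrock `create_guardrail` API (boto3 `bedrock` client). The guardrail name, description, and blocked-content messages are illustrative assumptions; the filter `type` and strength values follow the documented API enums.

```python
# Sketch: a create_guardrail payload enabling the HATE and VIOLENCE
# content filters. Name and messages here are made-up placeholders.

def build_guardrail_request(name: str) -> dict:
    """Assemble a payload for bedrock.create_guardrail (boto3 'bedrock' client)."""
    return {
        "name": name,
        "description": "Blocks hateful and violent content",
        "contentPolicyConfig": {
            "filtersConfig": [
                # Each filter sets an independent strength for user input
                # and model output: NONE, LOW, MEDIUM, or HIGH.
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        # Messages returned to the user when content is blocked.
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't provide that response.",
    }

request = build_guardrail_request("safety-demo-guardrail")
print([f["type"] for f in request["contentPolicyConfig"]["filtersConfig"]])
```

In a real application this payload would be passed to `boto3.client("bedrock").create_guardrail(**request)`, and the returned guardrail ID attached to subsequent inference calls so the filters apply to prompts and responses.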
Author: LeetQuiz Editorial Team