
### Answer: C. Guardrails for Amazon Bedrock

**Guardrails for Amazon Bedrock** is the correct answer because it provides content filtering and safety controls to ensure AI-generated content is appropriate for a specific audience.

### Key Points

1. **Guardrails for Amazon Bedrock** lets you configure content filters that block inappropriate or harmful content generation.
2. It can filter out denied topics, words, and content types that are not suitable for children.
3. It helps ensure compliance with safety policies and content guidelines.
4. It works alongside the foundation models in Amazon Bedrock to provide an additional layer of safety.

### Why the other options are incorrect

- **A. Amazon Rekognition**: a computer vision service for image and video analysis, not a content filter for text generation.
- **B. Amazon Bedrock playgrounds**: interactive environments for experimenting with models, not a mechanism for enforcing content safety policies.
- **D. Agents for Amazon Bedrock**: used to build conversational AI agents that orchestrate tasks, not specifically for content filtering.

Guardrails for Amazon Bedrock is purpose-built for this requirement: it ensures generated content is appropriate for children by filtering out inappropriate topics and content.
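To make the mechanism concrete, here is a minimal sketch of how a guardrail is attached to a Bedrock runtime request via the `Converse` API. The model ID, guardrail identifier, and guardrail version below are placeholders (a guardrail must first be created in the Bedrock console or via `CreateGuardrail`); the helper function name is hypothetical.

```python
# Sketch: building a bedrock-runtime Converse request that applies a guardrail.
# GUARDRAIL_ID and the model ID are placeholder assumptions, not real resources.

def build_converse_request(prompt: str) -> dict:
    """Assemble kwargs for bedrock-runtime's converse() with a guardrail attached."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": "GUARDRAIL_ID",  # placeholder guardrail ID
            "guardrailVersion": "1",                # placeholder version
        },
    }

request = build_converse_request("Tell a new story based on Cinderella.")

# The actual call would look like this (requires AWS credentials and boto3):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# If the prompt or the model output violates a configured filter, the guardrail
# intervenes and returns the blocked-content message configured on the guardrail.
```

With the guardrail applied at request time, every prompt and generated story passes through the configured topic, word, and content filters before reaching the child-facing application.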
Author: Ritesh Yadav
A company wants to build an interactive application for children that generates new stories based on classic stories. The company wants to use Amazon Bedrock and needs to ensure that the results and topics are appropriate for children.
Which AWS service or feature will meet these requirements?
A. Amazon Rekognition
B. Amazon Bedrock playgrounds
C. Guardrails for Amazon Bedrock
D. Agents for Amazon Bedrock