
Answer-first summary for fast verification
Answer: Guardrails for Amazon Bedrock
**Guardrails for Amazon Bedrock** is designed specifically to help developers implement safeguards that filter harmful content, reduce biased outputs, and enforce compliance with organizational policies. It lets companies define and apply content filters, sensitive information filters, and denied topics, making it the right choice for healthcare applications where patient safety and regulatory compliance are critical.

**Why the other options are incorrect:**

- **A) Knowledge Bases for Amazon Bedrock**: Connects foundation models to company data sources for RAG (Retrieval-Augmented Generation); it does not filter content.
- **C) Continued pretraining**: Further trains a foundation model on domain-specific data; it does not directly address content filtering or compliance.
- **D) Prompt chaining**: A technique for breaking a complex task into multiple prompts, not for implementing safety controls.
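As a minimal sketch of what this looks like in practice: a guardrail combines denied topics, content filters, and sensitive-information handling in one configuration. The names, topic definition, and policy choices below are illustrative assumptions, not part of the question; with boto3, the dict would be passed to `boto3.client("bedrock").create_guardrail(**guardrail_request)`.

```python
# Illustrative Guardrails for Amazon Bedrock configuration for a patient
# support chatbot. All names and policy values here are assumptions.
guardrail_request = {
    "name": "patient-support-guardrail",  # hypothetical guardrail name
    "description": "Safety controls for a patient support chatbot",
    # Denied topics: subjects the chatbot must refuse to engage with.
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "medical-diagnosis",
                "definition": "Providing a medical diagnosis or treatment plan.",
                "type": "DENY",
            }
        ]
    },
    # Content filters for harmful categories, each with a filter strength.
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # Sensitive information filters, e.g. masking patient names (PII).
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "NAME", "action": "ANONYMIZE"},
        ]
    },
    # Canned messages returned when an input or output is blocked.
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}
```

At inference time the created guardrail is then referenced by its identifier and version, for example via the `guardrailConfig` parameter of the `bedrock-runtime` `Converse` API, so every prompt and model response passes through these safeguards.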
Author: Jin H
A healthcare company is using Amazon Bedrock to build a patient support chatbot. They need to ensure the chatbot avoids generating harmful, biased, or non-compliant responses. Which Bedrock feature should they use?
A) Knowledge Bases for Amazon Bedrock
B) Guardrails for Amazon Bedrock
C) Continued pretraining
D) Prompt chaining