
Answer-first summary for fast verification
Answer: Use data encryption, guardrails, and human review of AI-generated outputs
## Explanation

Option B is the correct answer because it follows AWS's responsible AI best practices:

- **Data encryption**: Protects sensitive medical data (MRI images) during storage and transmission
- **Guardrails**: Implement safety mechanisms to ensure AI-generated outputs are appropriate and don't contain harmful content
- **Human review**: Maintains human oversight of AI-generated synthetic data to verify quality and appropriateness before use in model training

This approach balances innovation with the ethical considerations, privacy protection, and safety requirements that are critical in healthcare applications.

**Why the other options are incorrect:**

- **A**: Disabling logging removes audit trails and monitoring capabilities, which is not a responsible practice
- **C**: Fully automated approval without oversight can lead to undetected errors or biases in sensitive medical applications
- **D**: Publishing synthetic medical data publicly could violate patient privacy and regulatory requirements (HIPAA, GDPR)
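The guardrail-plus-human-review pattern above can be sketched as a simple pipeline: an automated check screens each AI-generated record first, and anything that passes still waits for explicit human approval before it can be used for training. This is a minimal illustrative sketch, not an AWS API; the names `BLOCKED_TERMS`, `passes_guardrails`, and `ReviewQueue` are hypothetical stand-ins for a managed guardrail policy (such as an Amazon Bedrock guardrail) and a review workflow.

```python
from dataclasses import dataclass, field

# Hypothetical blocklist standing in for a managed guardrail policy.
BLOCKED_TERMS = {"patient_name", "medical_record_number", "ssn"}


def passes_guardrails(record: str) -> bool:
    """Automated safety check: reject synthetic records that contain
    identifiers the guardrail policy forbids."""
    lowered = record.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


@dataclass
class ReviewQueue:
    """Gate AI-generated records behind explicit human approval."""
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, record: str) -> bool:
        # Guardrail runs first; blocked records never reach a reviewer.
        if passes_guardrails(record):
            self.pending.append(record)
            return True
        return False

    def human_approve(self, record: str) -> None:
        # Only human-approved data is released for model training.
        self.pending.remove(record)
        self.approved.append(record)


queue = ReviewQueue()
queue.submit("synthetic MRI slice, axial T2, no identifiers")   # passes guardrail
queue.submit("scan for patient_name John Doe")                  # blocked
queue.human_approve("synthetic MRI slice, axial T2, no identifiers")
print(len(queue.approved), len(queue.pending))  # → 1 0
```

The key design point, mirroring option B, is that neither stage alone is sufficient: the automated guardrail catches obvious policy violations at scale, while the human reviewer catches subtle quality or bias issues the filter cannot.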
Author: Ritesh Yadav
A medical-imaging company wants to use generative AI to create synthetic MRI data for model training. Which best practice ensures responsible AI use in AWS?
A
Disable logging to protect privacy
B
Use data encryption, guardrails, and human review of AI-generated outputs
C
Fully automate model approval with no oversight
D
Publish all synthetic data publicly for open research