
**Answer: B. Use Amazon Comprehend toxicity detection.**
## Explanation

**Correct Answer: B. Use Amazon Comprehend toxicity detection.**

**Why this is correct:**

1. **Amazon Comprehend** is AWS's natural language processing (NLP) service and includes a **toxicity detection** feature designed specifically to identify harmful language, hate speech, and toxic content in text.
2. **No labeled data required:** Comprehend's toxicity detection uses a pre-trained model that works out of the box, without any labeled training data from the user.
3. **Fits the use case:** the service is built for analyzing text content such as social media comments to identify harmful language.

**Why the other options are incorrect:**

**A. Use Amazon Rekognition moderation.**
- Amazon Rekognition is primarily for **image and video analysis**, not text analysis. Its content moderation features target inappropriate visual content, not text comments.

**C. Use Amazon SageMaker built-in algorithms to train the model.**
- Training a model with SageMaker built-in algorithms would require **labeled data**, which contradicts the requirement that "the company will not use labeled data to train the model." SageMaker is for building custom ML models, which typically require training data.

**D. Use Amazon Polly to monitor comments.**
- Amazon Polly is a **text-to-speech** service that converts text into lifelike speech. It has no capability for content analysis, toxicity detection, or harmful language identification.

**Key AWS service distinctions:**
- **Amazon Comprehend:** NLP service for text analysis (sentiment, entities, key phrases, toxicity)
- **Amazon Rekognition:** computer vision service for image/video analysis
- **Amazon SageMaker:** ML platform for building, training, and deploying custom models
- **Amazon Polly:** text-to-speech service

The requirement for "no labeled data" makes Amazon Comprehend's pre-trained toxicity detection the ideal solution.
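As a minimal sketch of how this would look in practice: Comprehend exposes a `DetectToxicContent` API (via boto3's `detect_toxic_content`), which returns a per-segment `Toxicity` score. The helper below flags comments above a threshold; the 0.5 cutoff and the mocked response values are illustrative assumptions, not AWS recommendations.

```python
def flag_toxic_comments(comments, response, threshold=0.5):
    """Flag comments whose Comprehend Toxicity score meets the threshold.

    `response` is the dict shape returned by
    comprehend.detect_toxic_content(TextSegments=[...], LanguageCode="en").
    The 0.5 default threshold is an assumption for illustration.
    """
    flagged = []
    for comment, result in zip(comments, response["ResultList"]):
        if result["Toxicity"] >= threshold:
            flagged.append(comment)
    return flagged


# In a real deployment you would call the service directly, e.g.:
#   import boto3
#   comprehend = boto3.client("comprehend")
#   response = comprehend.detect_toxic_content(
#       TextSegments=[{"Text": c} for c in comments],
#       LanguageCode="en",
#   )

# Mocked response in the documented shape (scores are made up):
comments = ["You are wonderful!", "I hate you"]
mock_response = {
    "ResultList": [
        {"Labels": [], "Toxicity": 0.02},
        {"Labels": [{"Name": "HATE_SPEECH", "Score": 0.91}], "Toxicity": 0.91},
    ]
}
print(flag_toxic_comments(comments, mock_response))  # ['I hate you']
```

Note that no model training is involved anywhere in this flow, which is exactly why option B satisfies the "no labeled data" constraint.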
Author: Ritesh Yadav
## Question

A company wants to identify harmful language in the comments section of social media posts by using an ML model. The company will not use labeled data to train the model. Which strategy should the company use to identify harmful language?
A. Use Amazon Rekognition moderation.
B. Use Amazon Comprehend toxicity detection.
C. Use Amazon SageMaker built-in algorithms to train the model.
D. Use Amazon Polly to monitor comments.