A company wants to identify harmful language in the comments section of social media posts by using an ML model. The company will not use labeled data to train the model. Which strategy should the company use to identify harmful language?
A. Use Amazon Rekognition moderation.
B. Use Amazon Comprehend toxicity detection.
C. Use Amazon SageMaker built-in algorithms to train the model.
D. Use Amazon Polly to monitor comments.
Explanation:
Correct Answer: B - Use Amazon Comprehend toxicity detection.
Why this is correct:
Amazon Comprehend offers pre-trained toxicity detection that identifies harmful language in text, including categories such as insults, hate speech, profanity, and threats. Because the model is pre-trained and fully managed by AWS, the company does not need to supply any labeled data of its own, which matches the stated requirement.
Why other options are incorrect:
A. Use Amazon Rekognition moderation. Rekognition content moderation analyzes images and videos for unsafe content; it does not analyze text comments.
C. Use Amazon SageMaker built-in algorithms to train the model. Training a custom model with SageMaker built-in algorithms (for example, a supervised text classifier) would require labeled training data, which the company will not use.
D. Use Amazon Polly to monitor comments. Polly is a text-to-speech service; it converts text into lifelike speech and cannot analyze or classify text.
Key AWS Service Distinctions:
Comprehend performs natural language processing on text (including pre-trained toxicity detection); Rekognition analyzes images and video; Polly converts text to speech; SageMaker is for building and training custom models, which requires training data. The requirement for "no labeled data" makes Amazon Comprehend's pre-trained toxicity detection the ideal solution.
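To make the correct answer concrete, here is a minimal sketch of how comments could be screened with Comprehend's toxicity detection via boto3. The actual API call (`detect_toxic_content`) requires AWS credentials, so it is shown in a comment; the `flag_toxic` helper and the 0.5 threshold are illustrative choices for this example, not AWS defaults.

```python
# Sketch: flagging harmful comments with Amazon Comprehend's pre-trained
# toxicity detection -- no labeled training data required.
#
# The real call would look like this (requires AWS credentials):
#   import boto3
#   client = boto3.client("comprehend")
#   resp = client.detect_toxic_content(
#       TextSegments=[{"Text": c} for c in comments],
#       LanguageCode="en",
#   )
#   result_list = resp["ResultList"]

def flag_toxic(result_list, threshold=0.5):
    """Return indexes of text segments whose overall Toxicity score
    meets or exceeds the threshold (threshold is an illustrative choice)."""
    return [i for i, r in enumerate(result_list) if r["Toxicity"] >= threshold]

# Sample data shaped like Comprehend's DetectToxicContent ResultList:
# each entry carries an overall Toxicity score plus per-category labels.
sample_results = [
    {"Toxicity": 0.92, "Labels": [{"Name": "INSULT", "Score": 0.91}]},
    {"Toxicity": 0.03, "Labels": [{"Name": "INSULT", "Score": 0.02}]},
]

print(flag_toxic(sample_results))  # only the first comment is flagged
```

Because the toxicity model is pre-trained, the only tuning decision left to the company is the score threshold at which a comment is flagged for review.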