
You are implementing a text moderation solution using Azure AI Content Safety to filter inappropriate user-generated content on a gaming platform. Describe the steps you would take to implement this solution, including how you would configure the service to handle different types of content and how you would integrate it with the platform's existing chat system.
A
Create an Azure AI Content Safety account, configure the service to filter out profanity and explicit content in the chat system, and integrate it with the platform's existing chat system using a custom middleware.
B
Create an Azure AI Content Safety account, configure the service to filter out profanity, explicit content, personal attacks, and hate speech in the chat system, and integrate it with the platform's existing chat system using a server-side script.
C
Create an Azure AI Content Safety account, configure the service to filter out profanity, explicit content, personal attacks, hate speech, and gaming-specific content in the chat system, and integrate it with the platform's existing chat system using a custom plugin.
D
Create an Azure AI Content Safety account, configure the service to filter out all types of content in the chat system, and integrate it with the platform's existing chat system using a database trigger.
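Whichever integration approach is chosen, the server-side component typically sends each chat message to the Content Safety text-analysis endpoint and blocks messages whose severity meets a per-category threshold. Below is a minimal sketch of that decision logic; the category names match Azure AI Content Safety's built-in harm categories, but the thresholds and the `should_block` helper are illustrative assumptions, not service defaults:

```python
# Sketch of server-side moderation logic for a chat message, assuming the
# message has already been analyzed by Azure AI Content Safety's text
# analysis API. The service returns a severity score per built-in harm
# category; the thresholds below are illustrative, not service defaults.

# Azure AI Content Safety's built-in harm categories.
CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

# Hypothetical per-category severity thresholds (0 = safe, higher = worse).
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 4}

def should_block(categories_analysis: list[dict]) -> bool:
    """Return True if any category's severity meets or exceeds its threshold.

    `categories_analysis` mirrors the shape of the service response:
    [{"category": "Hate", "severity": 4}, ...]
    """
    for item in categories_analysis:
        threshold = THRESHOLDS.get(item["category"])
        if threshold is not None and item["severity"] >= threshold:
            return True
    return False

# Example: moderate hate severity is blocked; low-severity violence passes.
flagged = [{"category": "Hate", "severity": 4}]
clean = [{"category": "Violence", "severity": 2}]
print(should_block(flagged))  # True
print(should_block(clean))    # False
```

In production, this check would live in the middleware, server-side script, or plugin layer named in the options, with `categories_analysis` populated from the SDK's text-analysis response rather than hard-coded data.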