
You are implementing a text moderation solution using Azure AI Content Safety to filter inappropriate user-generated content on a customer support platform. Describe the steps you would take to implement this solution, including how you would configure the service to handle different types of content and how you would integrate it with the platform's existing ticketing system.
A
Create an Azure AI Content Safety account, configure the service to filter out profanity and explicit content in the ticketing system, and integrate it with the platform's existing ticketing system using a custom middleware.
B
Create an Azure AI Content Safety account, configure the service to filter out profanity, explicit content, personal attacks, and hate speech in the ticketing system, and integrate it with the platform's existing ticketing system using a server-side script.
C
Create an Azure AI Content Safety account, configure the service to filter out profanity, explicit content, personal attacks, hate speech, and customer support-specific content in the ticketing system, and integrate it with the platform's existing ticketing system using a custom plugin.
D
Create an Azure AI Content Safety account, configure the service to filter out all types of content in the ticketing system, and integrate it with the platform's existing ticketing system using a database trigger.
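Whichever integration approach is chosen, the core of the solution is calling the Content Safety `text:analyze` operation on incoming ticket text and acting on the per-category severity scores. The sketch below shows a minimal version of that flow using only the standard library; the endpoint and key values are placeholders, and the severity threshold is an assumed policy choice, not a service default.

```python
import json
from urllib import request

# Placeholder values -- substitute your own Content Safety resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# The service's built-in text categories.
CATEGORIES = ["Hate", "Sexual", "Violence", "SelfHarm"]

# Assumed moderation policy: block at or above this severity level.
SEVERITY_THRESHOLD = 2


def build_analyze_request(text: str) -> dict:
    """Build the JSON payload for the text:analyze operation."""
    return {"text": text, "categories": CATEGORIES}


def should_block(categories_analysis: list[dict], threshold: int = SEVERITY_THRESHOLD) -> bool:
    """Decide whether to block a ticket.

    `categories_analysis` is the `categoriesAnalysis` list from the
    API response, e.g. [{"category": "Hate", "severity": 4}, ...].
    """
    return any(item["severity"] >= threshold for item in categories_analysis)


def analyze_ticket_text(text: str) -> dict:
    """Call the Content Safety REST API (performs a network request)."""
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01"
    req = request.Request(
        url,
        data=json.dumps(build_analyze_request(text)).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In a middleware, server-side script, or plugin integration, `analyze_ticket_text` would run before the ticket is written to the ticketing system, and `should_block` would route flagged tickets to a quarantine queue or human review instead of the normal workflow.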