
You are implementing an image moderation solution that uses Azure AI Content Safety to automatically detect and filter inappropriate images in a large dataset. Describe the steps you would take to batch process the dataset with the Azure AI Content Safety service and how you would handle any images that are flagged as inappropriate.
A
Use the Azure AI Content Safety service's REST API to send individual image URLs to the service for moderation, and manually review and handle any images flagged as inappropriate.
B
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and automatically block any images flagged as inappropriate without further review.
C
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and set up a workflow to review and handle any images flagged as inappropriate based on predefined criteria.
D
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and automatically delete any images flagged as inappropriate without any review or handling process.
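For context, the batch-plus-review flow described in option C can be implemented as a loop over the dataset: each image is submitted to the Content Safety image analysis operation, and any result whose severity meets a predefined threshold is routed to a review queue rather than being blocked or deleted outright. The sketch below is illustrative only; it assumes the Python requests library, the public image:analyze REST operation (api-version 2023-10-01), and hypothetical endpoint, key, threshold, and image-URL values that you would replace with your own.

```python
import base64
import requests

# Hypothetical values -- replace with your own resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"
API_VERSION = "2023-10-01"
SEVERITY_THRESHOLD = 2  # predefined criterion: severity >= 2 goes to review

# Hypothetical dataset entries.
image_urls = [
    "https://example.com/images/photo-001.jpg",
    "https://example.com/images/photo-002.jpg",
]

def analyze_image(image_bytes: bytes) -> dict:
    """Submit one image to the Content Safety image:analyze operation."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/image:analyze",
        params={"api-version": API_VERSION},
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"image": {"content": base64.b64encode(image_bytes).decode("utf-8")}},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

review_queue = []  # images flagged for handling per the workflow in option C

for url in image_urls:
    image_bytes = requests.get(url, timeout=30).content
    result = analyze_image(image_bytes)
    # Each entry in categoriesAnalysis carries a category
    # (Hate, Sexual, Violence, SelfHarm) and a severity score.
    flagged = [
        c for c in result.get("categoriesAnalysis", [])
        if c.get("severity", 0) >= SEVERITY_THRESHOLD
    ]
    if flagged:
        review_queue.append({"url": url, "findings": flagged})

print(f"{len(review_queue)} image(s) queued for review")
```

The review queue reflects the "predefined criteria" step: flagged images are held for a downstream handling workflow rather than being acted on automatically.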