
Answer-first summary for fast verification
Answer: Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and set up a workflow to review and handle any images flagged as inappropriate based on predefined criteria.
Option C is the most complete and accurate answer. It covers the steps needed to batch process a large image dataset through the Azure AI Content Safety REST API, and it pairs automated moderation with a workflow that reviews and handles flagged images against predefined criteria. This combines efficient processing of the full dataset with appropriate handling of inappropriate content, whether through human review or rule-based actions driven by the moderation results. Options A (manual, one-by-one review) does not scale, while B and D act on flagged images automatically with no review or handling process, risking false positives.
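A minimal sketch of this approach in Python, using only the standard library. Assumptions to note: the `image:analyze` operation accepts one image per request, so "batch" processing here means iterating over the dataset; the sketch sends image bytes as base64 (the service can also read from a blob URL, which requires additional identity configuration); and the `ENDPOINT`, `API_KEY`, and severity thresholds are placeholders you would replace with your own resource values and moderation policy.

```python
import base64
import json
import urllib.request

# Placeholder resource values -- substitute your own Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"


def analyze_image(image_bytes: bytes) -> dict:
    """Send one image to the Content Safety image:analyze endpoint.

    The service analyzes a single image per request, so batch processing
    means calling this once per image in the dataset.
    """
    url = f"{ENDPOINT}/contentsafety/image:analyze?api-version=2023-10-01"
    body = json.dumps({
        "image": {"content": base64.b64encode(image_bytes).decode("ascii")}
    }).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def route_result(analysis: dict,
                 review_threshold: int = 2,
                 block_threshold: int = 4) -> str:
    """Apply predefined criteria to the severity scores the service returns.

    Any category at or above block_threshold is blocked outright; anything
    at or above review_threshold is queued for human review; the rest pass.
    The thresholds here are illustrative, not official recommendations.
    """
    severities = [c.get("severity", 0)
                  for c in analysis.get("categoriesAnalysis", [])]
    top = max(severities, default=0)
    if top >= block_threshold:
        return "block"
    if top >= review_threshold:
        return "review"
    return "allow"
```

For example, a response like `{"categoriesAnalysis": [{"category": "Violence", "severity": 4}]}` would be routed to `"block"`, while a severity of 2 would land in the review queue. This keeps the automated call and the handling policy separate, so the criteria can change without touching the API code.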
Author: LeetQuiz Editorial Team
You are working on implementing an image moderation solution using Azure AI Content Safety to automatically detect and filter out inappropriate images from a large dataset of images. Describe the steps you would take to batch process the dataset using the Azure AI Content Safety service and how you would handle any images that are flagged as inappropriate.
A
Use the Azure AI Content Safety service's REST API to send individual image URLs to the service for moderation, and manually review and handle any images flagged as inappropriate.
B
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and automatically block any images flagged as inappropriate without further review.
C
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and set up a workflow to review and handle any images flagged as inappropriate based on predefined criteria.
D
Use the Azure AI Content Safety service's REST API to send a batch of image URLs to the service for moderation, and automatically delete any images flagged as inappropriate without any review or handling process.