As an AI architect for a photo-sharing social media platform, how would you design and implement an AI service to automatically prevent users from uploading explicit images, replacing the current manual moderation process?
A
Train an image clustering model by using TensorFlow in a Vertex AI Workbench instance. Deploy this model to a Vertex AI endpoint and configure it for online inference. Run this model each time a new image is uploaded to identify and block inappropriate uploads.
B
Develop a custom TensorFlow model in a Vertex AI Workbench instance. Train the model on a dataset of manually labeled images. Deploy the model to a Vertex AI endpoint. Run periodic batch inference to identify inappropriate uploads and report them to the content moderation team (see the batch-prediction sketch after the options).
C
Create a dataset using manually labeled images. Ingest this dataset into AutoML. Train an image classification model and deploy it to a Vertex AI endpoint. Integrate this endpoint with the image upload process to identify and block inappropriate uploads (see the online-prediction sketch after the options). Monitor predictions and periodically retrain the model.
D
Send a copy of every user-uploaded image to a Cloud Storage bucket. Configure a Cloud Run function that is triggered each time a new image is uploaded and calls the Cloud Vision API to detect explicit content (see the SafeSearch sketch after the options). Report the classifications to the content moderation team for review.
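
A minimal sketch of the periodic batch inference described in option B, assuming the custom TensorFlow model has already been uploaded to the Vertex AI Model Registry. The project, region, model resource name, bucket paths, and machine type below are placeholders, not details from the question.

```python
from google.cloud import aiplatform

# Placeholder project, region, and model resource name (assumptions).
aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

# Intended to run on a schedule (e.g. Cloud Scheduler), not on every upload.
batch_job = model.batch_predict(
    job_display_name="nightly-explicit-image-scan",
    gcs_source="gs://uploads-bucket/batch_input/images.jsonl",
    gcs_destination_prefix="gs://moderation-bucket/batch_output/",
    machine_type="n1-standard-4",
    sync=True,  # block until the job finishes so results can be forwarded
)

# The moderation team would review the prediction files written here.
print("Predictions written to:", batch_job.output_info.gcs_output_directory)
```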
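
A minimal sketch of option C's upload-time check, assuming an AutoML image classification model already deployed to a Vertex AI endpoint and trained with an "explicit" label. The project, region, endpoint ID, label name, and blocking threshold are assumptions for illustration.

```python
import base64

from google.cloud import aiplatform

PROJECT = "my-project"       # placeholder project ID
REGION = "us-central1"       # placeholder region
ENDPOINT_ID = "1234567890"   # placeholder Vertex AI endpoint ID
BLOCK_THRESHOLD = 0.8        # assumed confidence cutoff for blocking an upload

aiplatform.init(project=PROJECT, location=REGION)
endpoint = aiplatform.Endpoint(ENDPOINT_ID)


def should_block_upload(image_bytes: bytes) -> bool:
    """Call the deployed classification endpoint and decide whether to block."""
    instance = {"content": base64.b64encode(image_bytes).decode("utf-8")}
    response = endpoint.predict(
        instances=[instance],
        parameters={"confidenceThreshold": 0.0, "maxPredictions": 5},
    )
    # AutoML image classification returns parallel lists of labels and scores.
    prediction = response.predictions[0]
    for label, score in zip(prediction["displayNames"], prediction["confidences"]):
        if label == "explicit" and score >= BLOCK_THRESHOLD:
            return True
    return False
```

The upload handler would call should_block_upload before persisting the image and reject the request when it returns True.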
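
A minimal sketch of the Cloud Run function in option D, assuming it is wired to a Cloud Storage object-finalized trigger and uses the Cloud Vision SafeSearch feature. The flagging rule (adult or racy rated LIKELY or above) and the print-based report are illustrative assumptions.

```python
import functions_framework
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()

# Assumed rule: flag anything Vision rates LIKELY or VERY_LIKELY for adult/racy.
LIKELY_OR_WORSE = (vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY)


@functions_framework.cloud_event
def check_uploaded_image(cloud_event):
    """Triggered when a new object is finalized in the copy bucket."""
    data = cloud_event.data
    gcs_uri = f"gs://{data['bucket']}/{data['name']}"

    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))
    annotation = vision_client.safe_search_detection(image=image).safe_search_annotation

    if annotation.adult in LIKELY_OR_WORSE or annotation.racy in LIKELY_OR_WORSE:
        # In a real system this would be routed to the moderation team,
        # e.g. via Pub/Sub; here it is only logged.
        print(f"Flagged for review: {gcs_uri} "
              f"(adult={annotation.adult.name}, racy={annotation.racy.name})")
    else:
        print(f"No explicit content detected: {gcs_uri}")
```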