
An AI practitioner trained a custom model on Amazon Bedrock using a training dataset containing confidential data. How can the AI practitioner ensure the custom model does not generate inference responses based on that confidential data?
A. Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.
B. Mask the confidential data in the inference responses by using dynamic data masking.
C. Encrypt the confidential data in the inference responses by using Amazon SageMaker.
D. Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).
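Masking or encrypting responses (options B–D) does not remove what the model has already learned; only deleting the model and retraining on a cleaned dataset (option A) does. As a minimal sketch of that remediation workflow, the function below wraps the two boto3 Bedrock control-plane calls involved. The function name, model identifiers, and S3 URIs are hypothetical placeholders, not from the question.

```python
def remediate_confidential_model(bedrock, old_model_id, new_model_name,
                                 role_arn, base_model_id,
                                 clean_data_uri, output_uri):
    """Option A as an API workflow: delete the model trained on
    confidential data, then retrain on the cleaned dataset."""
    # 1. Delete the custom model whose weights encode confidential data.
    bedrock.delete_custom_model(modelIdentifier=old_model_id)
    # 2. Start a new customization job against the dataset from which
    #    the confidential records have already been removed in S3.
    return bedrock.create_model_customization_job(
        jobName=f"retrain-{new_model_name}",
        customModelName=new_model_name,
        roleArn=role_arn,
        baseModelIdentifier=base_model_id,
        trainingDataConfig={"s3Uri": clean_data_uri},
        outputDataConfig={"s3Uri": output_uri},
    )

# Usage (requires AWS credentials with Bedrock permissions):
# import boto3
# remediate_confidential_model(
#     boto3.client("bedrock"), "my-tainted-model", "my-clean-model",
#     "arn:aws:iam::123456789012:role/BedrockRole",
#     "amazon.titan-text-express-v1",
#     "s3://my-bucket/cleaned-data.jsonl", "s3://my-bucket/output/")
```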