
## Answer (for fast verification)

**Answer:** Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.
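The three steps above can be sketched with the boto3 Bedrock control-plane API (the `DeleteCustomModel` and `CreateModelCustomizationJob` operations). This is a minimal illustration, not a required implementation: the model names, role ARN, base model identifier, and S3 URIs are placeholder assumptions, and the AWS calls are wrapped in a function that is defined but not invoked here.

```python
"""Sketch: delete the tainted model, then retrain on a cleaned dataset.

Assumes the boto3 Bedrock control-plane API. All identifiers, ARNs,
and S3 URIs below are placeholders, not real resources.
"""


def build_retrain_job(job_name, model_name, role_arn, base_model_id,
                      cleaned_train_s3, output_s3):
    """Assemble the CreateModelCustomizationJob request for the retrain step."""
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": base_model_id,
        # Step 2 happens upstream: this S3 URI must point at the dataset
        # with the confidential records already removed.
        "trainingDataConfig": {"s3Uri": cleaned_train_s3},
        "outputDataConfig": {"s3Uri": output_s3},
    }


def remediate():
    """Not called here: running it requires AWS credentials with Bedrock permissions."""
    import boto3

    bedrock = boto3.client("bedrock")
    # Step 1: delete the custom model that was trained on confidential data.
    bedrock.delete_custom_model(modelIdentifier="my-tainted-model")
    # Step 3: retrain from scratch on the cleaned dataset.
    bedrock.create_model_customization_job(**build_retrain_job(
        job_name="retrain-clean-v1",
        model_name="my-clean-model",
        role_arn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        base_model_id="amazon.titan-text-express-v1",
        cleaned_train_s3="s3://my-bucket/cleaned/train.jsonl",
        output_s3="s3://my-bucket/output/",
    ))
```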
## Detailed Explanation

To ensure a custom model trained on Amazon Bedrock does not generate inference responses based on confidential data, the most effective approach is to address the root cause: the model was trained on that sensitive data.

### Why Option A Is Correct

**Option A (delete the custom model, remove the confidential data from the training dataset, retrain the custom model)** is the optimal solution because it:

1. **Addresses the fundamental issue**: When a model is trained on confidential data, that information becomes embedded in the model's parameters through the learning process. The model may have memorized patterns or specific details from the confidential data, which could influence its outputs during inference.
2. **Eliminates the source of risk**: By removing the confidential data from the training dataset and retraining the model from scratch, you ensure the new model has never been exposed to the sensitive information. This prevents any possibility of the model generating responses based on that data.
3. **Follows AWS best practices for data privacy**: AWS recommends excluding sensitive data from training datasets when privacy is a concern. Retraining on a cleaned dataset aligns with this principle.

### Why the Other Options Are Less Suitable

**Option B (mask the confidential data in inference responses using dynamic data masking)**:

- This approach only addresses the output after the model has already generated it, not the underlying issue of the model having learned from confidential data.
- The model could still generate responses influenced by the confidential patterns, even if the output is masked.
- Dynamic data masking is typically used for database queries, not for controlling model behavior.

**Option C (encrypt the confidential data in inference responses using Amazon SageMaker)**:

- Like option B, this treats the symptom rather than the cause.
- Encryption protects data in transit or at rest but doesn't prevent the model from generating responses based on learned confidential patterns.
- This approach doesn't address the core problem of the model having been trained on sensitive data.

**Option D (encrypt the confidential data in the custom model using AWS KMS)**:

- This would protect the model artifact itself but wouldn't prevent the model from generating responses based on the confidential data it learned during training.
- The model's behavior during inference would remain unchanged, because encryption doesn't alter the learned parameters or patterns.

### Key Considerations

- **Model retraining is necessary**: Once a model has learned from data, that knowledge is integrated into its parameters. The only way to ensure it doesn't use confidential information is to train a new model without that data.
- **Data governance**: This scenario highlights the importance of proper data governance before training. Ideally, confidential data should be identified and removed before the initial training run.
- **Cost and time implications**: While option A requires retraining (which incurs computational cost and time), it is the only approach that guarantees the model won't generate responses based on the confidential data.

In summary, option A provides a comprehensive solution by addressing the root cause through data removal and model retraining, ensuring the custom model operates without any influence from the confidential information.
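As a concrete illustration of the "remove the confidential data" step, here is a minimal sketch that filters flagged records out of a JSONL-style training file before retraining. The `confidential` flag and the prompt/completion record shape are assumptions for illustration, not a prescribed Bedrock dataset schema; in practice you would apply your organization's own sensitive-data detection.

```python
import json


def scrub_training_data(lines, is_confidential):
    """Drop JSONL records the predicate flags as confidential; keep the rest verbatim."""
    kept = []
    for line in lines:
        record = json.loads(line)
        if not is_confidential(record):
            kept.append(line)
    return kept


# Example predicate: records tagged with an assumed "confidential" flag.
def flag(record):
    return record.get("confidential", False)


raw = [
    '{"prompt": "public question", "completion": "public answer"}',
    '{"prompt": "internal question", "completion": "secret detail", "confidential": true}',
]
clean = scrub_training_data(raw, flag)
# `clean` now holds only the non-confidential record; this cleaned file is
# what you would upload to S3 as the retraining dataset.
```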
Author: LeetQuiz Editorial Team
### Question

An AI practitioner trained a custom model on Amazon Bedrock using a training dataset containing confidential data. How can the AI practitioner ensure the custom model does not generate inference responses based on that confidential data?

**A.** Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.

**B.** Mask the confidential data in the inference responses by using dynamic data masking.

**C.** Encrypt the confidential data in the inference responses by using Amazon SageMaker.

**D.** Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).