
Answer-first summary for fast verification
Answer: Create Amazon SageMaker Model Cards with intended uses and training and inference details.
## Detailed Explanation

### Question Analysis

The ML team develops custom ML models and shares the model artifacts with other teams while retaining the training code and data. They need a mechanism specifically for the **ML team to audit models** when publishing them. The key requirement is documentation that enables effective auditing of custom ML models.

### Evaluation of Options

**A: Create documents with relevant information. Store the documents in Amazon S3.**

- Storing documentation in S3 provides accessibility, but this approach lacks standardization and structure.
- Ad-hoc documents don't ensure comprehensive coverage of all necessary audit information.
- This solution provides no systematic framework for tracking model details, performance metrics, or ethical considerations.
- Less suitable because it relies on manual processes rather than AWS-native tooling designed specifically for model documentation.

**B: Use AWS AI Service Cards for transparency and understanding models.**

- AWS AI Service Cards are designed for **pre-built AWS AI services** (such as Amazon Rekognition and Amazon Comprehend), not for custom ML models developed by teams.
- These cards provide transparency about AWS-managed services but cannot be customized to document team-specific model development processes.
- Unsuitable for custom ML models because they don't support documenting custom training data, algorithms, or team-specific evaluation metrics.

**C: Create Amazon SageMaker Model Cards with intended uses and training and inference details.**

- **Optimal solution** because Amazon SageMaker Model Cards are specifically designed for documenting custom ML models.
- They provide a standardized, structured format that includes:
  - Model purpose and intended use cases
  - Training methodology and data details
  - Performance metrics and evaluation results
  - Ethical considerations and limitations
  - Bias analysis and risk assessments
  - Version history and lineage tracking
- This documentation enables effective auditing by providing a clear, organized record of model development, evaluation, and intended usage.
- Model Cards promote transparency, accountability, and reproducibility, all of which are essential for auditing custom ML models.
- As an AWS-native solution, Model Cards integrate seamlessly with the SageMaker ecosystem the ML team likely already uses for model development.

**D: Create model training scripts. Commit the model training scripts to a Git repository.**

- Version control for training scripts is good practice, but it addresses only one aspect of model documentation.
- This solution doesn't capture model performance metrics, intended uses, ethical considerations, or other critical audit information.
- Git repositories track code changes but don't provide a structured format for documenting the complete model lifecycle.
- Less suitable because it is insufficient for comprehensive model auditing.

### Why Option C Is the Best Choice

1. **Purpose-built solution**: SageMaker Model Cards are designed specifically for documenting ML models, unlike generic documentation approaches.
2. **Comprehensive coverage**: They capture all essential audit information, from training details to ethical considerations.
3. **Standardization**: A consistent format ensures that all necessary audit information is captured systematically.
4. **AWS integration**: They work seamlessly with the SageMaker platform that ML teams typically use for model development and deployment.
5. **Audit-ready**: The structured format makes it easy for the ML team to review, validate, and audit models before and after publication.
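The structured content described above can be created programmatically. The sketch below is a minimal, hedged example using the boto3 `create_model_card` API: the field names follow the general shape of the SageMaker model card JSON schema, but the specific values, the card name, and the helper function `publish_model_card` are illustrative assumptions, not part of the original question.

```python
import json

# Illustrative Model Card content. Top-level sections mirror the areas the
# explanation lists (overview, intended uses, training details); the exact
# values here are hypothetical and should be replaced with real model facts.
card_content = {
    "model_overview": {
        "model_description": "Custom model shared as artifacts with other teams.",
        "model_owner": "ML research team",
    },
    "intended_uses": {
        "purpose_of_model": "Documented so downstream teams know the intended scope.",
        "intended_uses": "Batch integration by consuming teams; audited before publishing.",
    },
    "training_details": {
        "training_observations": "Training code and data are retained by the ML team.",
    },
}


def publish_model_card(sagemaker_client, name: str, content: dict) -> dict:
    """Create a Model Card in Draft status via the SageMaker API.

    `sagemaker_client` is expected to be a boto3 SageMaker client,
    e.g. boto3.client("sagemaker").
    """
    return sagemaker_client.create_model_card(
        ModelCardName=name,
        Content=json.dumps(content),  # Content is passed as a JSON string
        ModelCardStatus="Draft",      # Draft -> PendingReview -> Approved lifecycle
    )
```

Keeping the card in `Draft` status until the ML team has reviewed it matches the audit-before-publishing requirement in the question.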
### Key Distinction

While options A and D represent good general practices, they don't provide the specialized, comprehensive documentation framework needed for model auditing. Option B is misaligned with the requirement because it applies only to AWS-managed services, not custom models. Only option C delivers a purpose-built solution that addresses all aspects of the ML team's auditing needs.
Author: LeetQuiz Editorial Team
An ML research team creates custom ML models and shares the model artifacts with other teams for integration. The ML team keeps the training code and data. They need to establish a mechanism for the ML team to audit the models.
What solution should the ML team implement when publishing the custom ML models?
A. Create documents with the relevant information. Store the documents in Amazon S3.
B. Use AWS AI Service Cards for transparency and understanding models.
C. Create Amazon SageMaker Model Cards with intended uses and training and inference details.
D. Create model training scripts. Commit the model training scripts to a Git repository.