
**Answer:** A. Model complexity
**Explanation:** Model complexity directly relates to the explainability of AI decisions. Here's why:

1. **Complexity vs. explainability trade-off**: More complex models (like deep neural networks with many layers) often achieve higher accuracy but are less interpretable. Simpler models (like linear regression or decision trees) are more explainable but may have lower accuracy.
2. **Foundation models and explainability**: Foundation models (FMs) are typically large, complex models pre-trained on vast datasets. Their complexity makes it challenging to understand exactly how they arrive at specific decisions, which is problematic for regulated industries like finance.
3. **Financial industry requirements**: In financial institutions, loan approval decisions must be explainable for:
   - **Regulatory compliance** (e.g., fair lending laws)
   - **Audit trails** for accountability
   - **Customer transparency** when decisions are challenged
   - **Risk management** to identify potential biases
4. **Analysis of the other options**:
   - **B. Training time**: Longer training may indicate a more complex model, but it does not directly determine explainability.
   - **C. Number of hyperparameters**: More hyperparameters can increase complexity, but the relationship to explainability is indirect.
   - **D. Deployment time**: This relates to operational efficiency, not explainability.
5. **AWS context**: AWS offers services like Amazon SageMaker Clarify that help explain ML model predictions, which is particularly important for complex models in regulated industries.

For financial institutions using AI for loan approvals, choosing models with an appropriate level of complexity, or using explainability tools, is crucial for meeting security and audit requirements.
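The trade-off in point 1 can be made concrete with a minimal sketch. Assume a hypothetical linear loan-scoring model with made-up feature weights: because the score is just a weighted sum, every decision decomposes into named per-feature contributions that an auditor can inspect, which is exactly what a large FM with billions of opaque parameters cannot offer.

```python
# Hedged sketch of an explainable linear scoring model.
# WEIGHTS and feature names are hypothetical, for illustration only.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "debt_ratio": -0.3}

def score_applicant(features):
    """Return a loan score plus a per-feature breakdown (the audit trail)."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, breakdown = score_applicant(
    {"income": 0.8, "credit_history": 0.6, "debt_ratio": 0.5}
)
# `breakdown` attributes the decision to each input feature by name,
# e.g. income contributes 0.4 * 0.8 = 0.32 to the final score.
```

A regulator or customer challenging a denial can be shown `breakdown` directly; with a complex FM, a separate explainability tool (such as Amazon SageMaker Clarify) is needed to approximate this kind of attribution.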
Author: Ritesh Yadav
A financial institution is building an AI solution to make loan approval decisions by using a foundation model (FM). For security and audit purposes, the company needs the AI solution's decisions to be explainable. Which factor relates to the explainability of the AI solution's decisions?
A. Model complexity
B. Training time
C. Number of hyperparameters
D. Deployment time