
A financial institution is building an AI solution to make loan approval decisions by using a foundation model (FM). For security and audit purposes, the company needs the AI solution's decisions to be explainable. Which factor relates to the explainability of the AI solution's decisions?
Explanation:
Model complexity directly relates to the explainability of AI decisions. Here's why:
Complexity vs. Explainability Trade-off: More complex models (like deep neural networks with many layers) often achieve higher accuracy but are less interpretable. Simpler models (like linear regression or decision trees) are more explainable but may have lower accuracy.
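To make the trade-off concrete, the sketch below (not part of the original explanation; the loan features, values, and thresholds are illustrative assumptions) trains a shallow scikit-learn decision tree whose entire decision logic can be printed and audited:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant features: [annual_income_k, credit_score, debt_ratio]
X = np.array([
    [35, 580, 0.55], [95, 720, 0.20], [50, 640, 0.40],
    [120, 780, 0.10], [28, 550, 0.60], [75, 690, 0.30],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = deny, 1 = approve

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A low-complexity model's rules are human-readable end to end:
# an auditor can trace exactly why any applicant was approved or denied.
print(export_text(tree, feature_names=["income_k", "credit_score", "debt_ratio"]))
print("Feature importances:",
      dict(zip(["income_k", "credit_score", "debt_ratio"],
               tree.feature_importances_)))
```

A deep neural network trained on the same data would offer no comparable printout; its reasoning is distributed across thousands or millions of weights, which is the explainability cost of added complexity.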
Foundation Models and Explainability: Foundation models (FMs) are typically large, complex models pre-trained on vast datasets. Their complexity makes it challenging to understand exactly how they arrive at specific decisions, which is problematic for regulated industries like finance.
Financial Industry Requirements: In financial institutions, loan approval decisions must be explainable for regulatory compliance (fair-lending rules generally require lenders to give applicants specific reasons for adverse decisions), for internal and external audits that need to reconstruct how a decision was reached, and for detecting and correcting bias against protected groups.
Other Options Analysis:
AWS Context: AWS offers services like Amazon SageMaker Clarify that help explain ML model predictions, which is particularly important for complex models in regulated industries.
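As a sketch of how that works in practice (assuming the SageMaker Python SDK; the role ARN, S3 paths, model name, and feature headers below are hypothetical placeholders, not values from this question), a Clarify job can compute per-feature SHAP attributions for a deployed loan-approval model:

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/loan-applications.csv",  # hypothetical
    s3_output_path="s3://my-bucket/clarify-output/",            # hypothetical
    label="approved",
    headers=["income", "credit_score", "debt_ratio", "approved"],
    dataset_type="text/csv",
)

model_config = clarify.ModelConfig(
    model_name="loan-approval-model",  # hypothetical deployed model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP baseline: a "typical" applicant row used as the reference point.
shap_config = clarify.SHAPConfig(
    baseline=[[60, 650, 0.35]],
    num_samples=100,
    agg_method="mean_abs",
)

# Produces per-feature SHAP attributions explaining each prediction.
processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The resulting attribution report shows how much each input feature pushed a given prediction toward approval or denial, and it can be archived alongside the decision itself as audit evidence.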
For financial institutions using AI for loan approvals, choosing a model of appropriate complexity, or pairing a complex model with explainability tooling, is crucial for meeting security and audit requirements.