
A financial institution is building an AI solution to make loan approval decisions by using a foundation model (FM). For security and audit purposes, the company needs the AI solution's decisions to be explainable. Which factor relates to the explainability of the AI solution's decisions?
A. Model complexity
B. Training time
C. Number of hyperparameters
D. Deployment time
Explanation:
Model complexity (option A) directly relates to the explainability of AI decisions. Here's why:
Complexity vs. Explainability Trade-off: More complex models, such as deep neural networks with many layers, often achieve higher accuracy but are harder to interpret. Simpler models, such as linear regression or decision trees, are more explainable but may sacrifice some accuracy (see the short sketch after these points).
Foundation Models and Explainability: Foundation models (FMs) are typically large, complex models pre-trained on vast datasets. Their complexity makes it challenging to understand exactly how they arrive at specific decisions, which is problematic for regulated industries like finance.
Financial Industry Requirements: In financial institutions, loan approval decisions must be explainable so that regulators and auditors can verify compliance with fair-lending rules, applicants can be given clear reasons when a loan is denied, and internal teams can trace and review how each decision was reached.
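To make the trade-off concrete, here is a minimal sketch using scikit-learn with synthetic, hypothetical loan-style features (income, credit score, etc.; none of these names or values come from the question). The logistic regression exposes one readable coefficient per feature, while the multi-layer neural network offers no such direct per-feature interpretation even when its accuracy is comparable.

```python
# Minimal sketch of the complexity vs. explainability trade-off.
# All feature names and data are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "credit_score", "debt_ratio", "years_employed"]

# Synthetic applicants: approve when income and credit score are high
# and the debt ratio is low (an illustrative rule, not real lending logic).
X = rng.normal(size=(1000, 4))
y = (1.2 * X[:, 0] + 1.5 * X[:, 1] - 1.0 * X[:, 2] + 0.3 * X[:, 3] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple, explainable model: each coefficient shows how a feature
# pushes a decision toward approval or denial.
simple = LogisticRegression().fit(X_train, y_train)
for name, coef in zip(feature_names, simple.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")

# More complex model: similar accuracy on this toy data, but its learned
# weights have no direct per-feature meaning an auditor could read off.
complex_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                              random_state=0).fit(X_train, y_train)
print("simple accuracy :", simple.score(X_test, y_test))
print("complex accuracy:", complex_model.score(X_test, y_test))
```

In a real loan-approval setting, those readable coefficients (or the output of an explainability tool such as SageMaker Clarify, sketched further below) are what auditors and regulators would review.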
Other Options Analysis: Training time (B) and deployment time (D) describe how quickly a model can be built and released, and the number of hyperparameters (C) describes how much tuning the model needs. None of these determine whether an individual decision can be explained after the fact; only model complexity directly affects how interpretable the model's decisions are.
AWS Context: AWS offers services like Amazon SageMaker Clarify that help explain ML model predictions, which is particularly important for complex models in regulated industries.
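As a rough, hypothetical sketch of how such an explainability job might be configured with the SageMaker Python SDK's Clarify module: the bucket paths, model name, role ARN, feature names, and baseline record below are placeholders, not values from the question, and a real project would substitute its own resources.

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # hypothetical role ARN

# Processor that runs the Clarify explainability job.
processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the (hypothetical) evaluation data lives and where results go.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/loan-data/validation.csv",
    s3_output_path="s3://my-bucket/clarify-output/",
    label="approved",
    headers=["income", "credit_score", "debt_ratio", "years_employed", "approved"],
    dataset_type="text/csv",
)

# The deployed model whose decisions need explaining (placeholder name).
model_config = clarify.ModelConfig(
    model_name="loan-approval-model",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP settings: the baseline is a hypothetical "typical applicant" record.
shap_config = clarify.SHAPConfig(
    baseline=[[50000, 650, 0.3, 5]],
    num_samples=100,
    agg_method="mean_abs",
)

# Produces per-feature SHAP attributions for each prediction plus an
# aggregate report, which is the kind of artifact auditors can review.
processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```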
For financial institutions using AI for loan approvals, choosing models with appropriate complexity levels or using explainability tools is crucial for meeting security and audit requirements.