
As a Machine Learning Engineer at a leading financial institution, you've developed a binary classification model using Google Cloud's AutoML Tables to predict the likelihood that customers will repay loans on time. This model plays a critical role in the loan approval process. After the model rejects a loan application, the risk management team requests a detailed explanation of the factors that influenced the decision. The explanation must not only clarify the model's reasoning but also satisfy regulatory requirements for transparency in automated decision-making. Given these constraints, which method should you employ to provide a comprehensive and compliant explanation for this decision? Choose the best option.
A. Utilize the global feature importance scores from the model evaluation page to explain the general behavior of the model across all predictions.

B. Conduct a sensitivity analysis by systematically altering each feature's value to observe changes in the model's output, aiming to identify specific thresholds that affect the classification.

C. Review the correlation coefficients between features and the target variable as provided in the dataset's summary statistics to infer the model's decision-making process.

D. Apply local interpretability techniques, such as SHAP or LIME, to the specific prediction in question to detail how each feature contributed to the model's decision for this individual case.

E. Combine both global feature importance and local interpretability methods to provide a dual-layered explanation that covers general model behavior and specific decision rationale.
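To make the SHAP idea in the options concrete, the sketch below computes exact Shapley values for a single prediction of a toy loan-scoring function. The scoring formula, feature names, and applicant values are illustrative assumptions, not the actual AutoML Tables model; libraries such as SHAP approximate this exact computation efficiently for real models.

```python
# Exact Shapley values for one prediction of a toy loan-scoring function.
# The linear score and the feature/baseline values below are illustrative
# assumptions, not the real model from the question.
from itertools import combinations
from math import factorial

FEATURES = ["income", "debt_ratio", "credit_history"]

def score(present, x, baseline):
    """Model output with absent features replaced by baseline values."""
    vals = {f: (x[f] if f in present else baseline[f]) for f in FEATURES}
    # Toy linear score: income and credit history help, debt hurts.
    return 0.5 * vals["income"] - 0.8 * vals["debt_ratio"] + 0.3 * vals["credit_history"]

def shapley_values(x, baseline):
    """Average each feature's marginal contribution over all coalitions."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(set(subset) | {f}, x, baseline)
                                   - score(set(subset), x, baseline))
        phi[f] = total
    return phi

# Hypothetical rejected applicant vs. an "average applicant" baseline.
x = {"income": 2.0, "debt_ratio": 4.0, "credit_history": 1.0}
baseline = {"income": 3.0, "debt_ratio": 1.0, "credit_history": 2.0}

phi = shapley_values(x, baseline)
# Per-feature contributions sum to the gap between this prediction
# and the baseline prediction, which is the property that lets the
# team say exactly how much each feature drove the rejection.
gap = score(set(FEATURES), x, baseline) - score(set(), x, baseline)
assert abs(sum(phi.values()) - gap) < 1e-9
```

For a linear score like this one, each Shapley value reduces to coefficient × (feature value − baseline value), so the applicant's high `debt_ratio` dominates the negative contribution; for nonlinear models the same coalition-averaging definition still applies, which is why local methods like SHAP satisfy the transparency requirement for an individual decision.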