
Answer-first summary for fast verification
Answer: Train and deploy a BigQuery ML classification model trained on historic loan default data. Enable feature-based explanations for each prediction. Report the prediction, probability of default, and feature attributions for each loan application.
Option C is the correct choice because it directly addresses all key requirements: (1) the data already resides in BigQuery, so BigQuery ML lets you train and serve a model without moving data to another platform; (2) it enables feature-based explanations, which satisfy the compliance requirement by giving a transparent, per-application reason for each rejection; (3) it reports the prediction, the probability of default, and the feature attributions together, giving decision-makers complete context. Option A (AutoML with linear regression) is flawed because predicting default is a binary classification problem, and linear regression is a regression technique, not a classifier; the option also makes no provision for explanations. Option B (LLM approach) is inappropriate for structured tabular data: an LLM's free-text "explanations" are not grounded in the model's actual decision process and lack the reliability required for regulated financial decisions. Option D (custom TensorFlow model) requires moving data out of BigQuery and significant extra engineering, reports only predictions, and provides no explanations at all, which fails the compliance requirement.
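To make the recommended approach concrete, here is a minimal sketch of the BigQuery ML statements option C implies: a `CREATE MODEL` with a logistic regression classifier and explanations enabled, plus an `ML.EXPLAIN_PREDICT` query that returns the prediction, class probabilities, and per-feature attributions for each application. All dataset, table, and column names (`bank.loan_history`, `bank.loan_applications`, `defaulted`) are hypothetical placeholders, not from the question.

```python
# Sketch: BigQuery ML SQL for a loan-default classifier with explanations.
# Table and column names below are illustrative assumptions.

CREATE_MODEL_SQL = """
CREATE OR REPLACE MODEL `bank.loan_default_model`
OPTIONS (
  model_type = 'LOGISTIC_REG',        -- binary classifier: default / no default
  input_label_cols = ['defaulted'],   -- label column in the historic data
  enable_global_explain = TRUE        -- dataset-level feature importance
) AS
SELECT * FROM `bank.loan_history`
"""

# ML.EXPLAIN_PREDICT returns, for every input row, the predicted class,
# the probability of each class, and the top feature attributions --
# exactly the three items the answer says to report.
EXPLAIN_PREDICT_SQL = """
SELECT *
FROM ML.EXPLAIN_PREDICT(
  MODEL `bank.loan_default_model`,
  TABLE `bank.loan_applications`,
  STRUCT(5 AS top_k_features)         -- attributions for the top 5 features
)
"""

def run(sql: str):
    """Execute a statement against BigQuery (requires GCP credentials)."""
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    client = bigquery.Client()
    return client.query(sql).result()
```

In practice you would run `run(CREATE_MODEL_SQL)` once to train, then `run(EXPLAIN_PREDICT_SQL)` per scoring batch; the attribution columns in the result are what you surface to compliance for each rejected application.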
Author: LeetQuiz Editorial Team
As an ML engineer at a bank tasked with reducing loan defaults using AI, you have access to labeled historical loan default data in BigQuery. For compliance, you must provide explanations for any loan rejections. What is your recommended course of action?
A
Import the historic loan default data into AutoML. Train and deploy a linear regression model to predict default probability. Report the probability of default for each loan application.
B
Create a custom application that uses the Gemini large language model (LLM). Provide the historic data as context to the model, and prompt the model to predict customer defaults. Report the prediction and explanation provided by the LLM for each loan application.
C
Train and deploy a BigQuery ML classification model trained on historic loan default data. Enable feature-based explanations for each prediction. Report the prediction, probability of default, and feature attributions for each loan application.
D
Load the historic loan default data into a Vertex AI Workbench instance. Train a deep learning classification model using TensorFlow to predict loan default. Run inference for each loan application, and report the predictions.