You are an analyst at a large banking firm, tasked with creating a robust and scalable machine learning (ML) pipeline. This pipeline will be used to train several regression and classification models to support various business needs. Given the nature of banking data, model interpretability is critical, as stakeholders need clear insights into the decision-making process of the models. Additionally, you are expected to productionize this pipeline quickly to deliver fast results. What should you do?
A. Use Tabular Workflow for Wide & Deep through Vertex AI Pipelines to jointly train wide linear models and deep neural networks
B. Use Google Kubernetes Engine to build a custom training pipeline for XGBoost-based models
C. Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models
D. Use Cloud Composer to build the training pipelines for custom deep learning-based models
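For context on option C, below is a minimal sketch of how a Tabular Workflow run could be submitted on Vertex AI Pipelines using the Vertex AI SDK. It assumes you have already obtained a compiled TabNet Tabular Workflow pipeline template; the project, bucket, template path, and parameter names shown are placeholders, not the template's actual schema.

    # Minimal sketch (option C): submitting a TabNet Tabular Workflow run
    # on Vertex AI Pipelines. Template path and parameter names below are
    # assumptions for illustration only.
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-banking-project",            # assumed project ID
        location="us-central1",                  # assumed region
        staging_bucket="gs://my-pipeline-root",  # assumed staging bucket
    )

    job = aiplatform.PipelineJob(
        display_name="tabnet-credit-risk-training",
        # Assumed path to a compiled TabNet Tabular Workflow template.
        template_path="gs://my-pipeline-root/templates/tabnet_trainer_pipeline.json",
        pipeline_root="gs://my-pipeline-root/runs",
        parameter_values={
            # Illustrative parameters; the real template defines its own schema.
            "data_source_bigquery_table_path": "bq://my-banking-project.risk.loans",
            "target_column": "default_within_12m",
            "prediction_type": "classification",
        },
    )

    job.run(sync=False)  # submit the pipeline run without blocking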