As a data scientist at a leading bank, you are tasked with developing a machine learning model to predict loan default risk. The dataset, stored in BigQuery, consists of hundreds of millions of records that have already been cleaned and prepared for analysis. Your objective is to use TensorFlow and Vertex AI for model development and comparison, ensuring the solution is scalable and minimizes data ingestion bottlenecks. Which approach should you adopt to handle such a massive dataset efficiently? Choose the best option.
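For context, here is a minimal sketch of one candidate approach the question alludes to: streaming rows from BigQuery directly into a tf.data pipeline with the tensorflow-io BigQuery connector (which reads through the BigQuery Storage API), rather than exporting the table to intermediate files first. The project, dataset, table, and column names below are hypothetical placeholders, not details given in the question.

```python
# Sketch: stream BigQuery rows straight into tf.data via the tensorflow-io
# BigQuery connector (BigQuery Storage API under the hood), so no CSV export
# step is needed. All identifiers below are hypothetical placeholders.
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient

PROJECT_ID = "my-project"        # hypothetical
DATASET_ID = "loans"             # hypothetical
TABLE_ID = "loan_applications"   # hypothetical

def read_bigquery_dataset():
    client = BigQueryClient()
    read_session = client.read_session(
        "projects/" + PROJECT_ID,
        PROJECT_ID,
        TABLE_ID,
        DATASET_ID,
        selected_fields=["income", "loan_amount", "defaulted"],  # hypothetical columns
        output_types=[dtypes.double, dtypes.double, dtypes.int64],
        requested_streams=4,  # parallel read streams keep the input pipeline fed
    )
    # parallel_read_rows yields one OrderedDict (column name -> tensor) per row
    return read_session.parallel_read_rows()

def to_features_and_label(row):
    # Split the row dict into a feature vector and the label column
    label = row.pop("defaulted")
    features = tf.stack([tf.cast(v, tf.float64) for v in row.values()], axis=-1)
    return features, label

dataset = (
    read_bigquery_dataset()
    .map(to_features_and_label, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(1024)
    .prefetch(tf.data.AUTOTUNE)
)
```

The design point this illustrates is that parallel Storage API streams feed the model directly from BigQuery, avoiding the export-to-files staging step that typically becomes the ingestion bottleneck at this data volume; the same input pipeline can then be used unchanged in a Vertex AI training job.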