Google Professional Machine Learning Engineer


As a junior Data Scientist working on a classification model with TensorFlow, you must train on a limited dataset. You are familiar with the standard practice of splitting data into training, validation, and test sets, but you are concerned that your dataset is too small to achieve satisfactory model performance. Given the constraints of a small dataset, which of the following approaches would best ensure model performance without overfitting? Choose the best option.
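The scenario the question describes is the classic motivation for k-fold cross-validation: with little data, a single fixed validation split wastes examples and gives a noisy performance estimate, whereas rotating the validation fold lets every example contribute to both training and evaluation. The sketch below is a minimal, hypothetical illustration of that idea using NumPy only; the `kfold_indices` helper, the two-blob toy data, and the nearest-centroid classifier (standing in for the TensorFlow model) are all assumptions for the example, not part of the question.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once, then partition
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

# Hypothetical toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

scores = []
for train_idx, val_idx in kfold_indices(len(X), k=5):
    # A nearest-centroid classifier stands in for the real TensorFlow model:
    # fit on the training fold, evaluate on the held-out fold.
    centroids = np.stack(
        [X[train_idx][y[train_idx] == c].mean(axis=0) for c in (0, 1)]
    )
    dists = np.linalg.norm(X[val_idx][:, None, :] - centroids[None, :, :], axis=2)
    preds = dists.argmin(axis=1)
    scores.append((preds == y[val_idx]).mean())

print(f"mean CV accuracy over {len(scores)} folds: {np.mean(scores):.2f}")
```

Averaging the per-fold scores gives a more stable estimate of generalization than any single split, which is why cross-validation is the standard answer for small-dataset scenarios like the one above.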