
Google Professional Data Engineer
Your company has developed a TensorFlow neural network model that includes a large number of neurons and layers. While the model shows excellent performance with the training data, its performance significantly drops when evaluated with new, unseen data. What strategy can you employ to rectify this issue?

A. Threading
B. Serialization
C. Dropout Methods
D. Dimensionality Reduction
Explanation:
The correct answer is C: Dropout Methods. The symptoms described — excellent training performance but a sharp drop on unseen data — indicate overfitting. Dropout combats overfitting by randomly dropping units (along with their connections) from the network during training, which prevents the model from relying too heavily on any individual neuron and helps it generalize to new data. Threading (A) and Serialization (B) concern execution and storage, not model performance on new data. Dimensionality Reduction (D) reduces the number of input features, which does not directly address overfitting in a model that already fits the training data well.
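To make the mechanism concrete, here is a minimal NumPy sketch of "inverted" dropout, the variant used by `tf.keras.layers.Dropout`. The function name and shapes are illustrative, not from the original question; in a real TensorFlow model you would simply insert `tf.keras.layers.Dropout(rate)` between layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate, training=True):
    """Inverted dropout: during training, zero each unit with
    probability `rate` and rescale survivors by 1/(1-rate) so the
    expected activation is unchanged. At inference time, the layer
    is a no-op."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate  # keep mask
    return activations * mask / (1.0 - rate)

# Roughly half the units are zeroed; survivors are scaled to 2.0
hidden = np.ones((4, 8))
print(dropout(hidden, rate=0.5))

# At inference time, activations pass through unchanged
print(dropout(hidden, rate=0.5, training=False))
```

Because each forward pass trains a different random sub-network, no single neuron can dominate, which is what improves generalization on unseen data.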