
**Answer:** A — Error caused by the model being overly simple
## Explanation

In machine learning, **bias** refers to the error introduced by approximating a real-world problem with a simplified model. High bias leads to **underfitting**, where the model is too simple to capture the underlying patterns in the data.

Let's break down each option:

- **Option A (Correct)**: "Error caused by the model being overly simple." This accurately describes bias in ML. High bias means the model makes strong assumptions about the data and fails to capture its complexity.
- **Option B**: "Model sensitivity to fluctuations in training data." This describes **variance**, not bias. Variance refers to how much the model's predictions would change if it were trained on different data.
- **Option C**: "Random variations in predictions." This also describes variance or irreducible noise, not bias.
- **Option D**: "The storage cost of model parameters." This is unrelated to the statistical concept of bias in ML.

### Key Concept: Bias-Variance Tradeoff

In machine learning, there is a fundamental tradeoff between bias and variance:

- **High bias, low variance**: simple models that are consistent but inaccurate (underfitting)
- **Low bias, high variance**: complex models that fit the training data well but may not generalize (overfitting)

The goal is to find the right balance that minimizes total error.
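The tradeoff above can be seen in a small experiment. This is a minimal sketch (the function, noise level, and polynomial degrees are illustrative choices, not from the question): a linear model is too simple for sine-shaped data and shows high bias, while a higher-degree polynomial can track the curve much more closely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a nonlinear target function: y = sin(x) + noise
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

def train_mse(degree):
    """Fit a polynomial of the given degree and return its training MSE."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return np.mean((y - pred) ** 2)

# A degree-1 (linear) model cannot represent a sine wave: high bias, underfitting.
# A degree-9 model is flexible enough to follow the curve: low bias
# (at the cost of higher variance on new samples of training data).
print(f"linear model MSE:   {train_mse(1):.3f}")
print(f"degree-9 model MSE: {train_mse(9):.3f}")
```

The linear model's training error stays large no matter how much data you add, which is the signature of bias; the flexible model drives training error down to roughly the noise level.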
Author: Ritesh Yadav
**What does "bias" in ML refer to?**

- **A.** Error caused by the model being overly simple
- **B.** Model sensitivity to fluctuations in training data
- **C.** Random variations in predictions
- **D.** The storage cost of model parameters