
Why is standardizing or normalizing features crucial in Spark ML data preprocessing?
A. It simplifies feature selection
B. It adds complexity to the model
C. It ensures all features are on a similar scale, preventing any particular feature from dominating
D. It slows down model training
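The rationale in option C, keeping features on a comparable scale so no single feature dominates distance- or gradient-based models, can be illustrated with Spark ML's `StandardScaler`. Below is a minimal PySpark sketch; the column names, sample values, and app name are hypothetical, chosen only to show the preprocessing pattern.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, StandardScaler

spark = SparkSession.builder.appName("scaling-demo").getOrCreate()

# Hypothetical data: one feature in the thousands, one between 0 and 1.
df = spark.createDataFrame(
    [(1500.0, 0.2), (3200.0, 0.8), (2100.0, 0.5)],
    ["income", "ratio"],
)

# Combine raw columns into a single feature vector, as Spark ML expects.
assembler = VectorAssembler(inputCols=["income", "ratio"], outputCol="features")
assembled = assembler.transform(df)

# Standardize: subtract the mean and divide by the standard deviation,
# so both features end up on a similar scale.
scaler = StandardScaler(
    inputCol="features", outputCol="scaledFeatures", withMean=True, withStd=True
)
scaled = scaler.fit(assembled).transform(assembled)
scaled.select("features", "scaledFeatures").show(truncate=False)
```

Without this step, the raw `income` values would dwarf `ratio` in any model that relies on feature magnitudes, which is exactly the imbalance option C describes.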