
You are tasked with developing a machine learning model to predict trends in the stock market. The dataset you are using contains a wide range of features, such as stock prices, trading volumes, economic indicators, and sentiment scores, each varying widely in magnitude. During the data exploration phase, you notice that some features have much larger ranges than others, and you are concerned that the features with large magnitudes may disproportionately influence the model. To prevent this issue and ensure a balanced input to your model, what preprocessing step should you take?
A. Standardize the data by transforming it with a logarithmic function.
B. Apply principal component analysis (PCA) to minimize the effect of any particular feature.
C. Use a binning strategy to replace the magnitude of each feature with the appropriate bin number.
D. Normalize the data by scaling it to have values between 0 and 1.
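The scaling technique described in option D (min-max normalization) can be sketched as follows. This is an illustrative example only, using a small hypothetical feature matrix with a price-like column and a volume-like column to show how features of very different magnitudes end up on the same [0, 1] scale:

```python
import numpy as np

# Hypothetical feature matrix: two features with very different
# magnitudes (e.g. a stock price and a trading volume).
X = np.array([
    [150.0, 2_000_000.0],
    [155.5, 3_500_000.0],
    [149.2, 1_200_000.0],
    [160.1, 4_800_000.0],
])

# Min-max normalization: rescale each feature (column) to [0, 1]
# so that no feature dominates purely because of its magnitude.
col_min = X.min(axis=0)
col_max = X.max(axis=0)
X_scaled = (X - col_min) / (col_max - col_min)

print(X_scaled.min(axis=0))  # every column's minimum is 0.0
print(X_scaled.max(axis=0))  # every column's maximum is 1.0
```

In practice the same transformation is typically done with a library utility (e.g. scikit-learn's `MinMaxScaler`), which also stores the fitted minima and maxima so the identical scaling can be applied to new data.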