
Answer-first summary for fast verification
Answer: B — Recurrent Neural Networks (RNN), due to their ability to process sequential data and learn from temporal patterns, making them ideal for time-series forecasting.
Recurrent Neural Networks (RNNs) are the most suitable choice because they are designed specifically for sequential and time-series data. Their recurrent hidden state summarizes previous inputs, so the order of observations (such as daily inventory updates) directly shapes each prediction, and periodic retraining on new daily data keeps forecasts current. This lets the model learn the temporal dynamics of the data, including trends, seasonal patterns, and the effects of promotional events. By contrast, classification algorithms predict discrete categories rather than continuous forecasts, CNNs are built for spatial data such as images, and reinforcement learning optimizes actions from reward feedback rather than modeling temporal patterns.
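To make the "sequential" intuition concrete, here is a minimal sketch of a vanilla RNN forward pass for one-step-ahead inventory forecasting. The weights are random placeholders (a real model would learn them via backpropagation through time), and the feature layout is a hypothetical example, not part of the question:

```python
import numpy as np

# Hedged sketch: a vanilla RNN rolled over a daily feature sequence.
# Assumed (hypothetical) feature layout per day:
# [normalized sales, promo flag, seasonality index]
rng = np.random.default_rng(0)

n_features, n_hidden = 3, 8
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_features))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # hidden -> hidden (recurrence)
W_hy = rng.normal(scale=0.1, size=(1, n_hidden))           # hidden -> forecast readout
b_h = np.zeros(n_hidden)

def forecast_next(sequence):
    """Roll the RNN over a (T, n_features) sequence; return a scalar forecast."""
    h = np.zeros(n_hidden)      # hidden state carries the temporal context
    for x_t in sequence:        # one step per day, in chronological order
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    return float(W_hy @ h)      # one-step-ahead inventory prediction

# 30 days of synthetic features for a single store/SKU
history = rng.normal(size=(30, n_features))
print(forecast_next(history))
```

Because the hidden state is updated step by step, feeding the same 30 days in reverse order produces a different forecast — exactly the order sensitivity that classification algorithms and CNNs lack for this task.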
Author: LeetQuiz Editorial Team
As an ML engineer at a leading grocery retailer, you are tasked with developing an inventory prediction model to optimize stock levels across multiple stores. The model must incorporate various features such as region, location, historical sales data, seasonal trends, and promotional events to forecast inventory needs accurately. It should also dynamically adapt to daily updates in inventory data to ensure predictions remain relevant. Given the complexity of the data and the need for the model to learn from sequential data over time, which of the following algorithms is most suitable for this task? (Choose one correct option)
A
Classification algorithms, as they can categorize inventory levels into predefined classes based on input features.
B
Recurrent Neural Networks (RNN), due to their ability to process sequential data and learn from temporal patterns, making them ideal for time-series forecasting.
C
Convolutional Neural Networks (CNN), which are best suited for image recognition tasks and spatial data analysis.
D
Reinforcement Learning, as it can optimize inventory levels through trial and error based on reward feedback.