
Answer-first summary for fast verification
Answer: Batch inference
## Analysis of Inference Requirements

The question describes an ecommerce company that:

1. Receives **multiple gigabytes of customer data daily**
2. Needs to perform **inferences once per day**
3. Uses this data to train an ML model for **product demand forecasting**

## Evaluation of Each Inference Type

### A: Batch Inference ✅

- **Optimal choice**: Batch inference is designed specifically for processing large volumes of data at scheduled intervals.
- **Alignment with requirements**: The company processes gigabytes of data daily and needs inference only once per day, which matches the batch processing paradigm exactly.
- **Cost-effectiveness**: Processing all of the data together reduces per-request overhead, and compute is provisioned only for the duration of the job.
- **Scheduling capability**: Batch jobs are straightforward to schedule for daily execution, satisfying the "once each day" requirement.

### B: Asynchronous Inference

- **Less suitable**: Asynchronous inference is designed for individual requests with large payloads or long processing times that don't need an immediate response; it is not optimized for scoring gigabytes of data in a single scheduled operation.
- **Mismatch**: While it can queue requests, it is not the most efficient approach for daily bulk processing of large datasets.

### C: Real-time Inference

- **Inappropriate**: Real-time inference is designed for low-latency, immediate responses to individual requests from a persistently running endpoint.
- **Contradiction**: The "once each day" requirement directly contradicts the continuous, always-on nature of real-time inference.
- **Resource inefficiency**: Keeping a real-time endpoint running for a once-daily job would be unnecessarily expensive and complex.

### D: Serverless Inference

- **Suboptimal**: Serverless inference removes infrastructure management and suits workloads with intermittent or unpredictable traffic, but it is not built for scheduled bulk jobs.
- **Not specific enough**: While serverless could technically serve the predictions, it does not address the scheduled, high-volume nature of the requirement.
- **Cost considerations**: For gigabytes of daily data, a dedicated batch job is typically more cost-effective than serverless per-invocation pricing.

## Conclusion

**Batch inference (Option A)** is the clear optimal choice because it:

1. **Matches the volume requirement** — designed for processing large datasets
2. **Aligns with the timing requirement** — can be scheduled for once-daily execution
3. **Provides cost efficiency** — processes all data together, minimizing resource overhead
4. **Supports the use case** — product demand forecasting typically benefits from daily batch updates rather than continuous real-time inference

The other options either don't match the requirements (real-time, asynchronous) or are too general (serverless) for this clearly defined batch processing scenario.
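To make the batch pattern concrete, here is a minimal sketch in plain Python (deliberately not tied to any specific cloud API): a single scheduled job loads the day's data and scores every record in one pass. The names `predict`, `load_daily_data`, and the toy forecasting rule are illustrative assumptions, not part of any real service.

```python
from datetime import date

def predict(features):
    # Toy stand-in for a trained demand-forecasting model:
    # forecast tomorrow's demand as 1.1x yesterday's sales.
    return round(features["yesterday_sales"] * 1.1, 2)

def load_daily_data():
    # Stand-in for reading the day's multi-gigabyte export from storage.
    return [
        {"product_id": "P1", "yesterday_sales": 100},
        {"product_id": "P2", "yesterday_sales": 40},
        {"product_id": "P3", "yesterday_sales": 250},
    ]

def run_daily_batch_job():
    """Batch inference: one scheduled job scores the whole dataset at once.

    A scheduler (e.g. a daily cron trigger) would invoke this once per day;
    compute is needed only for the duration of the job, unlike a real-time
    endpoint that must stay running between requests.
    """
    records = load_daily_data()
    forecasts = {r["product_id"]: predict(r) for r in records}
    print(f"{date.today()}: scored {len(forecasts)} products in one batch")
    return forecasts

forecasts = run_daily_batch_job()
```

The key design point is that all the cost and scheduling logic lives around one bulk call rather than around many individual request/response cycles.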
Author: LeetQuiz Editorial Team
An ecommerce company processes multiple gigabytes of customer data daily to train a machine learning model for product demand forecasting. They require a solution to run inferences once per day.
Which inference type satisfies this requirement?
A. Batch inference
B. Asynchronous inference
C. Real-time inference
D. Serverless inference