
Question: 7
You are designing a recommendation engine for an e-commerce platform. The source documents consist of product descriptions, customer reviews, and seller guidelines, ranging from 50 to 1000 words. Customer queries are typically short (1-2 sentences) and focus on finding specific products or features. You want to optimize the system for fast, accurate responses to queries while minimizing unnecessary memory usage. Which context length for the embedding model would be most appropriate for your use case?
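To ground the choice, it can help to measure how many tokens the longest source documents actually produce and then pick the smallest supported context length that avoids truncation. The sketch below is a minimal, illustrative example using a Hugging Face tokenizer; the model name and sample texts are assumptions, not part of the question.

```python
# Minimal sketch (illustrative model name and sample texts) for estimating
# document token counts to inform the embedding model's context-length choice.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

documents = [
    "Lightweight waterproof hiking jacket with zippered pockets and a hood.",   # product description
    "Great fit and very comfortable, but the zipper feels a little flimsy.",    # customer review
]

# Count tokens per document, including special tokens added by the tokenizer.
token_counts = [len(tokenizer.encode(doc)) for doc in documents]
print(f"max tokens: {max(token_counts)}, mean tokens: {sum(token_counts) / len(token_counts):.0f}")

# Rule of thumb: ~1000 English words is on the order of 1300-1500 tokens, so the
# context length should comfortably cover the longest documents without being so
# large that it wastes memory on padding for short queries.
```

In practice, one would run this over a representative sample of product descriptions, reviews, and seller guidelines rather than two hand-written strings, and choose the context length from the distribution of observed token counts.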