
Answer-first summary for fast verification
Answer: Split the instruction manuals into chunks and embed them into a vector store. Use the question to retrieve the best-matched chunks, and use the LLM to generate a response to the user based on the retrieved manual content.
Option A describes the standard Retrieval-Augmented Generation (RAG) approach, which is the best fit for this use case: chunk the instruction manuals, embed the chunks into a vector store, retrieve the chunks most relevant to the user's question, and have an LLM generate a grounded response from them. Retrieving only the relevant chunks keeps the prompt small, reduces context-window requirements, and improves accuracy on large document sets. Option B applies ALS matrix factorization, a collaborative-filtering technique suited to recommendation systems rather than semantic document retrieval; it also requires historical interaction data that a new system would not have. Option C averages embeddings over each entire manual, which discards granular information and can retrieve the wrong manual when a question concerns only a small section. Option D summarizes every manual up front, which is costly, loses the procedural detail needed to answer specific questions, and does not scale as the manual collection grows. The community discussion strongly favors A, with 75% consensus and upvoted comments confirming it as the correct RAG implementation.
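The chunk-embed-retrieve-generate flow of Option A can be sketched in plain Python. This is a minimal, illustrative sketch only: the bag-of-words "embedding" and in-memory list stand in for a real embedding model and vector store, and the final LLM call is omitted (assume any chat-completion API). All names here (`embed`, `chunk`, `retrieve`, the sample manual text) are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts.
    # A real system would call a learned sentence-embedding model here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(manual, size=20):
    # Split a manual into fixed-size word chunks.
    words = manual.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# 1. Split manuals into chunks and "embed" them into an in-memory store.
manuals = {
    "dishwasher": ("To start a wash cycle, press the power button, "
                   "then select a program and close the door firmly."),
    "microwave": ("Set the timer dial to the desired cooking time "
                  "and press start to begin heating food."),
}
store = []
for name, text in manuals.items():
    for c in chunk(text):
        store.append((name, c, embed(c)))

# 2. Retrieve the best-matched chunks for a user question.
def retrieve(question, k=2):
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[2]), reverse=True)
    return [(name, c) for name, c, _ in ranked[:k]]

# 3. The retrieved chunks plus the question would then be passed to an
#    LLM as context to generate the final answer (omitted here).
hits = retrieve("How do I start a wash cycle?")
```

In this sketch, `hits[0]` comes from the dishwasher manual because its chunk shares the most tokens with the question; a production system would swap in a proper embedding model and a vector database for steps 1 and 2.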
Author: LeetQuiz Editorial Team
A Generative AI Engineer at a home appliance company is designing an LLM-based application to answer customer questions about home appliances by using the associated instruction manuals.
Which set of high-level tasks should the engineer's system perform?
A
Split instruction manuals into chunks and embed into a vector store. Use the question to retrieve best matched chunks of manual, and use the LLM to generate a response to the user based upon the manual retrieved.
B
Create an interaction matrix of historical user questions and appliance instruction manuals. Use ALS to factorize the matrix and create embeddings. Calculate the embeddings of new queries and use them to find the best manual. Use an LLM to generate a response to the question based upon the manual retrieved.
C
Calculate averaged embeddings for each instruction manual, compare embeddings to user query to find the best manual. Pass the best manual with user query into an LLM with a large context window to generate a response to the employee.
D
Use an LLM to summarize all of the instruction manuals. Provide summaries of each manual and user query into an LLM with a large context window to generate a response to the user.