
Answer-first summary for fast verification
Answer: Provide few-shot examples of the desired output format in the system and/or user prompt.
The question describes a scenario in which LLM-generated summaries are of good quality but frequently include an unwanted explanation of the summarization process. Option D (providing few-shot examples) is optimal because it directly addresses the output-format issue: the examples show the model exactly what is desired, namely summaries without explanations. This technique is well established in prompt engineering for controlling LLM output structure, and the community discussion strongly supports D, with 75% consensus and upvoted comments explaining how few-shot examples guide the model's behavior. Option B (tuning chunk size or experimenting with embedding models) is less suitable because it affects retrieval and document processing rather than output formatting. Option A (splitting the output by newlines) is a brittle post-processing workaround rather than a fix for the root cause. Option C (revisiting document ingestion) is irrelevant to an output-format problem.
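To illustrate the technique behind option D, the sketch below builds a chat prompt whose few-shot examples each pair an article with a bare summary, so the model sees the desired output format before the real query. The article snippets and summaries are invented placeholders, and the OpenAI-style `messages` list is just one common convention; any chat-completion API that accepts role-tagged messages would work the same way.

```python
def build_summary_prompt(article: str) -> list[dict]:
    """Build a chat prompt whose few-shot examples demonstrate the
    desired format: a plain summary with no explanation of the process."""
    system = (
        "You summarize 1918 news articles. Reply with the summary only; "
        "do not describe how you produced it."
    )
    # Each (article, summary) pair is a hypothetical example showing the
    # target output format: a summary and nothing else.
    few_shot = [
        ("Armistice declared; crowds fill the streets of Paris...",
         "The armistice ending the war was announced, prompting public "
         "celebrations in Paris."),
        ("Influenza cases rise sharply in Boston hospitals...",
         "Boston hospitals reported a sharp rise in influenza cases."),
    ]
    messages = [{"role": "system", "content": system}]
    for example_article, example_summary in few_shot:
        messages.append({"role": "user", "content": example_article})
        messages.append({"role": "assistant", "content": example_summary})
    # The real query goes last, after the format has been demonstrated.
    messages.append({"role": "user", "content": article})
    return messages

prompt = build_summary_prompt("Troops return home by steamship...")
print(len(prompt))           # system + 2 examples (2 messages each) + query = 6
print(prompt[-1]["role"])    # user
```

The key design point is that the assistant turns in the examples contain only summaries, so the model's most direct pattern to imitate excludes any meta-commentary about summarization.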
Author: LeetQuiz Editorial Team
A Generative AI Engineer is building an LLM system to retrieve and summarize news articles from 1918 based on a user's query. The summaries are of good quality, but they frequently include an unwanted explanation of the summarization process itself. What change can the engineer implement to reduce this issue?
A
Split the LLM output by newline characters to truncate away the summarization explanation.
B
Tune the chunk size of news articles or experiment with different embedding models.
C
Revisit their document ingestion logic, ensuring that the news articles are being ingested properly.
D
Provide few-shot examples of the desired output format in the system and/or user prompt.