
A Generative AI Engineer is building an LLM system to retrieve and summarize news articles from 1918 based on a user's query. The summaries are of good quality, but they frequently include an unwanted explanation of the summarization process itself. What change can the engineer implement to reduce this issue?
A. Split the LLM output on newline characters to truncate away the summarization explanation.
B. Tune the chunk size of the news articles or experiment with different embedding models.
C. Revisit the document ingestion logic to ensure the news articles are being ingested properly.
D. Provide few-shot examples of the desired output format in the system and/or user prompt.
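For illustration, the technique named in option D can be sketched as a chat-style prompt that interleaves example articles with bare summaries, so the model learns to return only the summary and omit any explanation of its process. The function name, example articles, and summaries below are hypothetical, and the message format assumes an OpenAI-style chat message list; this is a minimal sketch, not the exam's reference solution.

```python
# Hypothetical sketch: few-shot examples of the desired output format,
# placed in the system/user messages to suppress process explanations.

def build_messages(article_text: str) -> list[dict]:
    system = (
        "You summarize historical news articles. "
        "Return only the summary text, with no commentary on how it was produced."
    )
    # Each few-shot pair shows an article followed by the bare summary we want.
    examples = [
        ("Article: The armistice was signed at dawn ...",
         "The armistice ending hostilities was signed at dawn."),
        ("Article: Influenza cases continue to rise in the city ...",
         "Influenza cases are rising, straining city hospitals."),
    ]
    messages = [{"role": "system", "content": system}]
    for article, summary in examples:
        messages.append({"role": "user", "content": article})
        messages.append({"role": "assistant", "content": summary})
    messages.append({"role": "user", "content": f"Article: {article_text}"})
    return messages


if __name__ == "__main__":
    # Print the assembled prompt; in practice this list would be sent to a chat model.
    for m in build_messages("Coal shortages are reported across the region ..."):
        print(m["role"], ":", m["content"])
```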