
A Generative AI Engineer has developed a RAG application that enables employees to retrieve answers from an internal knowledge base, such as Confluence pages or Google Drive. The prototype is functional and has received positive feedback from internal testers. The engineer now wants to conduct a formal evaluation of the system's performance and identify areas for improvement.
How should the Generative AI Engineer evaluate the system?
A. Use a cosine similarity score to comprehensively evaluate the quality of the final generated answers.
B. Curate a dataset that can test the retrieval and generation components of the system separately. Use MLflow's built-in evaluation metrics to evaluate the retrieval and generation components.
C. Benchmark multiple LLMs with the same data and pick the best LLM for the job.
D. Use an LLM-as-a-judge to evaluate the quality of the final generated answers.