Similar Questions in Reliability & Evaluation
Medium: How do you programmatically check if an LLM is making things up that aren't in the provided search results?
Medium: How do you evaluate a RAG system's performance when the answer is not present in the retrieved documents? (Does it correctly say "I don't know"?)
Easy: Why is a standard unit test (asserting that output == "expected") often a bad way to test an LLM? How do you handle a model that gives three different, but correct, answers to the same prompt?
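For the last question, one common alternative to exact-match assertions is a property-based check: instead of asserting `output == "expected"`, assert that every required fact appears in a normalized form of the answer, so any correctly phrased variant passes. Below is a minimal sketch under that assumption; the `normalize` helper and the example answers are illustrative, not a standard API.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and strip punctuation
    so differently phrased answers compare tolerantly."""
    lowered = re.sub(r"\s+", " ", text.lower())
    return re.sub(r"[^a-z0-9 ]", "", lowered).strip()

def answer_contains_facts(answer: str, required_facts: list[str]) -> bool:
    """Property-based check: pass if every required fact appears
    somewhere in the normalized answer, regardless of surrounding wording."""
    norm = normalize(answer)
    return all(normalize(fact) in norm for fact in required_facts)

# Three differently phrased but equally correct answers all pass the same check.
answers = [
    "The capital of France is Paris.",
    "Paris is the capital city of France.",
    "France's capital: Paris!",
]
for a in answers:
    assert answer_contains_facts(a, ["Paris"])
```

This kind of check trades precision for robustness: it will not catch an answer that mentions the right fact in a wrong claim, which is why teams often layer it with an LLM-as-judge or semantic-similarity score for harder cases.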