Posts tagged with "hallucination mitigation"
1 post found

Jan 2, 2026 · generation faithfulness · code grounding · LLM-as-a-Judge · automated code verification · GraphRAG accuracy · type-safe code generation · hallucination mitigation
Evaluating Generation and Grounding in Multi-Repo Systems
Retrieving nodes is only half the battle; the LLM must synthesize code that adheres to cross-repo constraints. This post explores measuring faithfulness, checking execution-level correctness against internal SDKs, and using LLM-as-a-Judge to verify that generated code respects the security and type contracts of separate repositories.