Integrity · AI
Detecting Truth Drift: why evidence-backed memory matters
Prevent agent hallucinations and maintain high data integrity with automated evidence verification.
The problem with 'done'
In a fast-moving project, it is easy for an agent or developer to mark a task as complete without actually shipping the fix. We call this 'Truth Drift': the memory layer says a feature is finished, but the code says otherwise.
Truth Drift is the primary cause of agent hallucinations. If an agent believes a dependency is ready when it is not, it will build on a broken foundation.
Automated Evidence Verification
vem now automatically flags tasks marked 'done' without supporting evidence (a commit hash, test log, or PR link) as 'suspicious'.
This signal lets teams quickly audit their memory layer and ensures that agents retrieve only context verified by real-world actions.
- Suspicious tasks are excluded from high-confidence search
- Admins receive alerts for drift-heavy projects
- CLI prompts for evidence before closing tasks
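To make the behavior above concrete, here is a minimal sketch of how such a flagging heuristic could work. The field names (`status`, `evidence`), the patterns, and the `searchable` flag are illustrative assumptions, not vem's actual schema or API:

```python
import re

# Hypothetical evidence patterns (assumptions, not vem's real rules):
# a git commit hash, a pull-request URL, or a test-log file reference.
COMMIT_RE = re.compile(r"\b[0-9a-f]{7,40}\b")
PR_LINK_RE = re.compile(r"https?://\S+/pull/\d+")

def has_evidence(task: dict) -> bool:
    """Return True if the task record carries at least one piece of evidence."""
    for item in task.get("evidence", []):
        if COMMIT_RE.search(item) or PR_LINK_RE.search(item) or item.endswith(".log"):
            return True
    return False

def audit(tasks: list[dict]) -> list[dict]:
    """Flag 'done' tasks that lack evidence and drop them from
    high-confidence retrieval, per the behavior described above."""
    for task in tasks:
        if task.get("status") == "done" and not has_evidence(task):
            task["flag"] = "suspicious"
            task["searchable"] = False  # excluded from high-confidence search
        else:
            task["searchable"] = True
    return tasks

tasks = audit([
    {"id": 1, "status": "done", "evidence": ["a1b2c3d"]},  # commit hash: verified
    {"id": 2, "status": "done", "evidence": []},           # no evidence: suspicious
])
```

In this sketch, flagging is non-destructive: the task record keeps its 'done' status, and the 'suspicious' marker simply gates it out of high-confidence retrieval until evidence is attached.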