
SHARED INTEL Q&A: AI retrieval systems can still hallucinate; deterministic logic offers a fix

  • securityboulevard.com
  • published date: 2026-01-21 00:00:00 UTC


<div class="single-post post-38388 post type-post status-publish format-standard has-post-thumbnail hentry category-q-a category-top-stories" id="post-featured" morss_own_score="5.733075435203094" morss_score="11.097648546096126"> <h1>SHARED INTEL Q&amp;A: AI retrieval systems can still hallucinate; deterministic logic offers a fix</h1> <div class="entry" morss_own_score="5.729146221786064" morss_score="111.87207969129155"> <img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Hallucination_Mirage-1850pc-960x564.png"> <h5>By Byron V. Acohido</h5> <p>AI hallucination is still the deal-breaker.</p> <p><em><strong>Related:</strong> Retrieval Augmented Generation (RAG) strategies</em></p> <p><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/BOoks-to-ashes_squr.png"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/BOoks-to-ashes_squr-100x99.png"></a>As companies rush AI into production, executives face a basic constraint: you cannot automate a workflow if you cannot trust the output. A model that fabricates facts becomes a risk exposure. CISOs now have to explain this clearly to boards, who expect assurances that fabrication risk will be controlled, not hoped away.</p> <p>Earlier this year, retrieval-augmented generation, or RAG, <a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/23/how-retrieval-augmented-generation-could-solve-ais-hallucination-problem/">gained attention</a> as a practical check on hallucination. The idea was straightforward: before answering, the model retrieves grounding material from a trusted source and uses that to shape its response. This improved reliability in many early use cases.</p> <p>But first-generation RAG had a hidden weakness. 
A major academic study (“<a href="https://aclanthology.org/2024.acl-long.585.pdf">RAGTruth</a>”) showed that even when RAG retrieves accurate source material, AI systems can still misstate it or draw the wrong conclusion. The research comes from the ACL Anthology, the leading global library for peer-reviewed AI language research.</p> <p><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/Gutenberg-moment-narr.png"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Gutenberg-moment-narr-520x194.png"></a>More broadly, today’s RAG systems rely on probabilistic similarity. Small changes in how a question is asked can push the model toward different source material, meaning two users may receive different answers with no clear audit trail. That instability limits trust in regulated environments.</p> <p>A second wave of RAG innovation argues for something more deterministic. Instead of inferring relationships among documents, the system traverses only the links defined in authoritative frameworks, such as regulations or internal controls. Same question. Same source path. Verifiable answer.</p> <p>If this approach holds, regulated enterprises may gain a new way to trust AI in production. The Q&amp;A that follows examines this emerging direction through the work of <a href="https://dividegraph.com/">DivideGraph</a> founder <a href="https://www.linkedin.com/in/tyler-messa/">Tyler Messa</a>.</p> <p><strong>LW:</strong> For leaders new to the topic, what problem was first-generation RAG trying to solve?</p> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> Think of early RAG as turning AI into an “open-book test.” Before it, models hallucinated because they were pulling answers from memory. RAG let them reference source material before responding.</p> <p>For many low-risk business tasks, that was good enough. 
But in regulated environments, “good enough” isn’t a standard. Boards and regulators expect accuracy that can be demonstrated, not hoped for.</p> <p><strong>LW:</strong> Where did first-generation RAG fall short?</p> <div><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/TYler-Messa-hdsht.jpg"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/TYler-Messa-hdsht-100x123.jpg"></a> Messa</div> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> The weakness showed up when I tried using it with the Cyber Risk Institute Profile — a framework that harmonizes more than 2,500 cybersecurity requirements. I didn’t need creativity. I needed accuracy.</p> <p>Instead, the AI treated the framework like searchable text rather than structured logic. Worse, it often invented relationships between requirements that didn’t exist. It could take the right source material and hallucinate itself into the wrong conclusion.</p> <p>The other problem was instability. I couldn’t reliably get the same result twice, and I couldn’t get a complete audit trail. In compliance, that’s fatal. Regulators don’t accept “the AI thinks so.” They expect systems anchored to authoritative frameworks with verifiable reasoning.</p> <p><strong>LW:</strong> How is your approach different?</p> <p><strong>Messa:</strong> The analogy I use is Autocorrect vs. Google Maps.</p> <p>Traditional RAG behaves like Autocorrect — it predicts what’s likely based on probability. That’s dangerous when the cost of being wrong can be billions of dollars.</p> <p>DivideGraph works like Google Maps for compliance. We decomposed regulations into precise components and rebuilt the intended logic as a navigable system. When the system answers, it follows that map with a turn-by-turn audit trail.</p> <p>The AI isn’t “thinking.” It’s the voice reading directions. The graph calculates the path. 
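</p> <p><em>The “turn-by-turn” traversal Messa describes can be sketched in a few lines of code. The sketch below is a hypothetical illustration, not DivideGraph’s implementation; the requirement IDs and links are invented stand-ins for edges an authoritative framework would define:</em></p>

```python
# Hypothetical sketch: deterministic retrieval over a compliance graph.
# Requirement IDs and links are invented; a real system would load the
# edges from an authoritative framework (e.g., a harmonized profile).
from collections import deque

# Only the links the framework itself declares -- nothing is inferred.
FRAMEWORK_LINKS = {
    "REG-17.4": ["CTRL-ACCESS-01", "CTRL-LOGGING-02"],
    "CTRL-ACCESS-01": ["EVIDENCE-MFA-POLICY"],
    "CTRL-LOGGING-02": ["EVIDENCE-SIEM-CONFIG"],
}

def trace(requirement: str) -> list[str]:
    """Walk the declared links breadth-first, recording every hop.

    Traversal order depends only on the graph, not on a model, so the
    same question always yields the same ordered audit trail.
    """
    trail, queue, seen = [], deque([requirement]), set()
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        trail.append(node)
        queue.extend(FRAMEWORK_LINKS.get(node, []))
    return trail

# Same question -> same source path, with a turn-by-turn trail.
print(trace("REG-17.4"))
```

<p>Because the walk follows only declared edges, the answer path is reproducible and auditable by construction; a language model layered on top would only verbalize the path, not choose it.</p> <p>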
That means every answer is repeatable, verifiable, and anchored to frameworks regulators already recognize.</p> <p><strong>LW:</strong> Where does deterministic RAG make the biggest impact?</p> <p><strong>Messa:</strong> Anywhere a wrong answer creates real risk: fines, legal exposure, outages, breaches. More broadly, it closes the gap between policy and operations.</p> <p>Compliance can become continuous instead of episodic. Change management becomes safer because the system understands dependencies. And leadership finally gets an accurate, real-time understanding of risk posture.</p> <p><strong>LW:</strong> Is anyone else doing this?</p> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> To my knowledge, no. Most platforms are still trying to predict compliance. Banks are uniquely positioned as early adopters because the industry already did foundational work: the CRI Profile provides a harmonized framework to compute against.</p> <p>To adopt this model, two conditions matter: the cost of being wrong has to be high, and there has to be a standardized framework to anchor to.</p> <p><strong>LW:</strong> If this gains traction, how do you see it spreading?</p> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> Deterministic systems will become the trust layer for AI. You can’t responsibly build financial decisioning or fraud systems on probabilistic guesswork. We have decades of regulatory intelligence trapped in PDFs. Deterministic RAG operationalizes that intelligence.</p> <p>This isn’t about replacing human oversight. It’s about making oversight computational.</p> <p><strong><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/Agentic-AI-debate-NARR.png"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Agentic-AI-debate-NARR-520x223.png"></a>LW:</strong> What would this change for auditors?</p> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> Everything. 
Today, AI compliance claims are hard to prove. You can show prompts and documents, but you can’t show reasoning because probabilistic systems don’t have explicit reasoning.</p> <p>With a graph, every answer has a chain of logic. Auditors can see exactly which regulation required which control and how it maps to evidence. That levels the playing field for smaller banks that can’t afford armies of consultants. And it gives regulators the ability to examine systemic risk at a sector level.</p> <p><strong>LW:</strong> What proof will enterprises need before trusting deterministic RAG?</p> <p morss_own_score="7.0" morss_score="9.0"><strong>Messa:</strong> The most important signal is the ability to say “no.” A trustworthy system refuses requests that violate law, logic, or safe operation. It understands time, so it doesn’t reference rescinded rules. It understands concepts rather than just matching words. And it produces complete, verifiable traceability.</p> <p>Confidence comes from precision.</p> <p><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Byron-Acohido-BW-column-mug-100x123.png"></p> <p>Acohido</p> <p><em><a href="https://www.lastwatchdog.com/pulitzer-centennial-highlights-role-journalism/">Pulitzer Prize-winning </a>business journalist Byron V. 
Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.</em></p> <hr> <p>January 21st, 2026 <span> | <a href="https://www.lastwatchdog.com/category/q-a/">Q &amp; A</a> | <a href="https://www.lastwatchdog.com/category/top-stories/">Top Stories</a></span></p> </div> </div><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://www.lastwatchdog.com">The Last Watchdog</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by bacohido">bacohido</a>. Read the original post at: <a href="https://www.lastwatchdog.com/shared-intel-q-deterministic-logic-offers-a-fix/">https://www.lastwatchdog.com/shared-intel-q-deterministic-logic-offers-a-fix/</a> </p>