News

AI, agents, and the trust gap

  • securityboulevard.com
  • published date: 2026-01-15 00:00:00 UTC


<p><span style="font-weight: 400;">As I write this, I’m remodeling my kitchen and have relied heavily on ChatGPT to research sinks, compare reviews, and determine whether I can do all the pipe-work myself. </span></p><p><a href="https://www.kasada.io/ai-agents-and-the-trust-gap/screenshot-2026-01-15-at-11-16-32-am/" rel="attachment wp-att-17304"><img fetchpriority="high" decoding="async" class="wp-image-17304 size-fusion-800 aligncenter" src="https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-800x610.png" alt="Example of AI prompt" width="800" height="610" srcset="https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-200x153.png 200w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-300x229.png 300w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-400x305.png 400w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-600x458.png 600w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-768x586.png 768w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-800x610.png 800w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-1024x781.png 1024w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-1200x916.png 1200w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.32-AM-1536x1172.png 1536w" sizes="(max-width: 800px) 100vw, 800px"></a></p><p><span style="font-weight: 400;">Between my personal life and my role as a technical product marketer, I’m all for AI. </span></p><p><span style="font-weight: 400;">But something Jono, our Head of Product, said recently felt counterintuitive at first: fraud rings won’t be using ChatGPT’s agentic mode any time soon to do the type of fraud we usually see. 
That is, they won’t be opening up ChatGPT accounts en masse and using the agentic mode to replace what they can do with </span><a href="https://www.kasada.io/solver-services-fraudsters-bypass-bot-management/" rel="nofollow"><span style="font-weight: 400;">solvers</span></a><span style="font-weight: 400;">, residential proxies, and </span><a href="https://hackernoon.com/scrape-smarter-not-harder-let-mcp-and-ai-write-your-next-scraper-for-you" rel="nofollow noopener"><span style="font-weight: 400;">scraping API companies</span></a><span style="font-weight: 400;"> like <a href="https://hackernoon.com/scrape-smarter-not-harder-let-mcp-and-ai-write-your-next-scraper-for-you" rel="nofollow noopener">FireCrawl</a>.</span><span style="font-weight: 400;"><br> </span></p><p><span style="font-weight: 400;">The more I thought about it, the more it made sense.</span></p><p><span style="font-weight: 400;">At least for now, AI tools are better described as general‑purpose hacker tooling than a full substitute for existing fraud infrastructure. We do see them used to generate fake accounts or produce code—and in some cases, you can literally see ChatGPT‑generated code patterns show up in request logs. But they’re not yet a wholesale replacement.</span></p><p><span style="font-weight: 400;">That nuance matters.</span></p><p><span style="font-weight: 400;">Like many people, I’m bullish on AI but conservative when it comes to risk—especially financial and security risk. I want to capture upside while limiting downside. Most Fortune 500 companies we work with feel the same way.</span></p><p><span style="font-weight: 400;">And this is where the real tension shows up.</span></p><p><span style="font-weight: 400;">Marketing teams want to open the floodgates. Security teams want to lock them shut. 
Somewhere in between is a workable middle ground—but it’s not obvious where that line should be.</span></p><p><span style="font-weight: 400;">So what does “sensible and forward‑looking” actually look like?</span></p><p><span style="font-weight: 400;">Here’s how we’ve been thinking about it at Kasada.</span></p><h2><b>The same AI company is multiple things</b></h2><p><span style="font-weight: 400;">OpenAI’s ChatGPT isn’t one thing. It’s a browser acting on behalf of a user </span><i><span style="font-weight: 400;">and</span></i><span style="font-weight: 400;"> an automated scraper </span><i><span style="font-weight: 400;">and</span></i><span style="font-weight: 400;"> an agentic commerce client, depending on context.</span></p><p><span style="font-weight: 400;">All three may have different cryptographic signatures, but can we trust their intended use? All three are “really ChatGPT,” but they carry completely different risk profiles:</span></p><ul> <li style="font-weight: 400;" aria-level="1"><b>Browser mode</b><span style="font-weight: 400;">: A user is in the loop. They’re shopping, researching, maybe adding to cart. This looks like a customer journey with an AI assist.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Scraper mode</b><span style="font-weight: 400;">: No user interaction. Automated requests pulling product data, pricing, and inventory. This might be competitive intelligence. It might be training data for a competitor.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Agentic mode</b><span style="font-weight: 400;">: An agent attempting to complete transactions on a user’s behalf. Sign up, checkout, booking, redemptions. High-value actions with real consequences.</span></li> </ul><p><span style="font-weight: 400;">One ChatGPT. Three discrete governance problems.</span></p><p><span style="font-weight: 400;">Treating all of that as “just ChatGPT” creates blind spots. 
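</span></p><p><em>To make the split concrete, the three modes could map to three different policies at the edge. Here’s a minimal sketch in Python; the provider label, mode names, and endpoint names are illustrative assumptions, not any vendor’s actual identifiers:</em></p>

```python
# Illustrative policy table: one provider, three modes, three risk profiles.
# All keys ("openai", the mode names, the endpoints) are hypothetical labels.
POLICY = {
    ("openai", "browser"): {"search": "allow", "cart": "allow", "checkout": "challenge"},
    ("openai", "scraper"): {"search": "rate_limit", "cart": "deny", "checkout": "deny"},
    ("openai", "agent"):   {"search": "allow", "cart": "allow", "checkout": "verify_user"},
}

def decide(provider: str, mode: str, endpoint: str) -> str:
    """Look up the action for a request; unknown traffic defaults to deny."""
    return POLICY.get((provider, mode), {}).get(endpoint, "deny")
```

<p><em>The point isn’t this particular table; it’s that each (provider, mode) pair gets its own row. Collapsing the three modes into one entry is exactly the blind spot in question.</em></p><p><span style="font-weight: 400;">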
Each mode represents a separate governance problem, and collapsing them into one policy guarantees either over‑blocking or under‑protection.</span></p><h2><b>Each industry wants different things</b></h2><p><span style="font-weight: 400;">There’s no universal default for how AI traffic </span><i><span style="font-weight: 400;">should</span></i><span style="font-weight: 400;"> be handled.</span></p><p><span style="font-weight: 400;">We count some of the biggest companies across e-commerce, hospitality, media, and financial services as customers. And there isn’t a clear pattern in how AI helps them; they just know they have to adapt. Every industry wants something different.</span></p><p><span style="font-weight: 400;">If you’re selling sneakers, you probably want AI search visibility. You want to show up when someone’s shopping agent looks for “best running shoes under $150.” But you don’t want that same agent creating accounts or burning through promo codes.</span></p><p><span style="font-weight: 400;">If you’re a media platform like Reddit, you may want almost none of it. You don’t want your content scraped for training data. However, you likely still want search referral traffic.</span></p><p><span style="font-weight: 400;">The same endpoint—say, product search—might be wide open for one business and locked down for another. There is no one‑size‑fits‑all policy—and that’s the point.</span></p><h2><b>Prompts are not secure by default</b></h2><p><span style="font-weight: 400;">Some teams assume they can rely on LLM guardrails or system prompts to constrain agent behavior. The agent’s prompt might say, “never attempt checkout without explicit user confirmation.”</span></p><p><span style="font-weight: 400;">That is wishful thinking. </span></p><p><span style="font-weight: 400;">Prompts can be overridden. Agents can be jailbroken. 
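</span></p><p><em>That’s why a deny-by-default check on the server side, rather than trust in the prompt, is the safer pattern. A sketch, with a hypothetical permission table and agent ID:</em></p>

```python
# Sketch of enforcement at the site's edge, not in the agent's prompt.
# ALLOWED_ACTIONS and "agent-123" are hypothetical examples.
ALLOWED_ACTIONS = {
    "agent-123": {"search", "add_to_cart"},  # permissions granted server-side, per agent
}

def authorize(agent_id: str, action: str) -> bool:
    """Permit only actions granted in our own records, whatever the prompt claims."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())
```

<p><em>Even a jailbroken agent that “decides” to check out still hits this gate, because the prompt never enters the decision.</em></p><p><span style="font-weight: 400;">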
The model itself might hallucinate past its constraints.</span></p><p><span style="font-weight: 400;">You can’t treat the agent’s instructions as a reliable security boundary. </span></p><p><span style="font-weight: 400;">Governance has to happen at your edge—where you can verify identity, enforce permissions, detect anomalies, and observe behavior over time.</span></p><p><em>(If you’re interested in this topic, <a href="https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis" rel="nofollow noopener">Lenny’s podcast</a> has a great interview with Sander Schulhoff, an AI security researcher.)</em></p><p><a href="https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis" rel="attachment wp-att-17305 nofollow noopener"><img decoding="async" class="wp-image-17305 size-fusion-800 aligncenter" src="https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-800x460.png" alt="Podcast about AI agent security" width="800" height="460" srcset="https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-200x115.png 200w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-300x172.png 300w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-400x230.png 400w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-600x345.png 600w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-768x442.png 768w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-800x460.png 800w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-1024x589.png 1024w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-1200x690.png 1200w, https://www.kasada.io/wp-content/uploads/2026/01/Screenshot-2026-01-15-at-11.16.46-AM-1536x883.png 1536w" sizes="(max-width: 800px) 100vw, 800px"></a></p><h2><b>Verification 
is table stakes</b></h2><p><span style="font-weight: 400;">The industry is converging on cryptographic request signing. Standards like </span><a href="https://datatracker.ietf.org/wg/webbotauth/about/" rel="nofollow noopener"><span style="font-weight: 400;">Web Bot Auth</span></a><span style="font-weight: 400;"> allow agents to prove who they are, not just claim it.</span></p><p><span style="font-weight: 400;">This is necessary infrastructure.</span></p><p><span style="font-weight: 400;">But verification alone doesn’t answer the harder question: should this request be allowed?</span></p><p><span style="font-weight: 400;">Knowing a request came from OpenAI doesn’t tell you whether it’s a browsing request, a scraper, or an agent attempting a high‑risk action. Nor does it tell you what that agent should be permitted to do on </span><i><span style="font-weight: 400;">your</span></i><span style="font-weight: 400;"> site.</span></p><p><span style="font-weight: 400;">Meaningful control requires:</span></p><ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Permissions per endpoint</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Permissions per action</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The ability to distinguish between different modes from the same provider</span></li> </ul><p><span style="font-weight: 400;">Identity without authorization is just better labeling.</span></p><h2><b>It’s still early days</b></h2><p><span style="font-weight: 400;">Everyone’s talking about agentic commerce like it’s already here. Agents booking flights. Agents completing purchases. End-to-end automation.</span></p><p><span style="font-weight: 400;">That’s not what we’re seeing in the traffic.</span></p><p><span style="font-weight: 400;">The reality is messier. Agents browse. They research. They add items to carts. But the “book my flight from zero to 100” future? 
It’s not here yet—even if technical teams are running POCs with the latest standards.</span></p><p><span style="font-weight: 400;">Will it arrive? Probably. Soon? Maybe.</span></p><p><span style="font-weight: 400;">Which is exactly why rigid, static rules written today are likely to break tomorrow.</span></p><h2><b>The opportunity in the gap</b></h2><p><span style="font-weight: 400;">This is the part that matters most now.</span></p><p><span style="font-weight: 400;">Everyone’s preparing for a future that hasn’t fully arrived. Since every industry is different, I’d start by creating a simple diagram that weighs benefit against risk. Does the benefit of AEO outweigh the risk of your content being used for LLM training? It depends. </span></p><p><span style="font-weight: 400;">You have time to build the framework now, while the traffic is still small enough to understand. That gives teams time to establish visibility, set sane defaults, and create permissions that flex as capabilities mature.</span></p><p><span style="font-weight: 400;">The teams that wait until agentic commerce is “big enough to matter” will discover it mattered earlier than they thought—just quietly and without controls in place.</span></p><p><span style="font-weight: 400;">The hype says agents will transform everything overnight.</span></p><p><span style="font-weight: 400;">The traffic says you have a window to get this right.</span></p><p><span style="font-weight: 400;">Use it.</span></p><p><span style="font-weight: 400;">If you’re thinking about how to distinguish AI browsing from scraping from agentic action—and how to apply controls that evolve as capabilities mature—</span><a href="https://hubs.la/Q03-_dwP0" rel="nofollow noopener"><span style="font-weight: 400;">register for our upcoming webinar</span></a><span style="font-weight: 400;"> on January 29th.</span></p><p><span style="font-weight: 400;">Jono and I look forward to seeing you there.</span></p><p>The post <a 
rel="nofollow" href="https://www.kasada.io/ai-agents-and-the-trust-gap/">AI, agents, and the trust gap</a> appeared first on <a rel="nofollow" href="https://www.kasada.io/">Kasada</a>.</p><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://www.kasada.io">Kasada</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by Edgar Cerecerez">Edgar Cerecerez</a>. Read the original post at: <a href="https://www.kasada.io/ai-agents-and-the-trust-gap/">https://www.kasada.io/ai-agents-and-the-trust-gap/</a></p>