News

[un]prompted: Key Insights from the AI Security Practitioners Conference – FireTail Blog

  • None--securityboulevard.com
  • published date: 2026-03-17 00:00:00 UTC

None

<p>Mar 17, 2026 – Jeremy Snyder – The State of AI Security: Moving Beyond TheoryThe biggest shift evident at the [un]prompted AI Security Practitioners Conference was the move from purely theoretical discussions about “what could go wrong” to concrete, battle-tested methodologies for “what is going wrong and how we fix it.” It’s clear that AI security is rapidly evolving, from initial employee DLP use cases, to organization-wide focus around securing all THINGS A.I..This Week in AI SecurityWe published an episode of our This Week in AI Security Podcast right after the event, which you can watch here below.In the episode, I shared some of my key thoughts around several major themes:LLMs for Vulnerability Discovery and the Zero-Day Clock: Many researchers shared information on using LLMs to identify zero days, malware, and source code vulnerabilities. The most striking observation was the dramatic acceleration of the “meantime to availability of an exploit,” which has reduced from months to hours. This is a “call to arms” for the cybersecurity industry and raised the question of whether automatic patching is now required.Defensive Automation and Agentic Infrastructure: I heard presentations from companies like Google, OpenAI, and Meta about their security strategies, tooling, and efforts to leverage AI agents for security automation.New Attack Surfaces: This area included discussions on indirect prompt injection, new attack vectors in AI-automated systems like KYC pipelines and image recognition (OCR embedded in LLMs), and the vulnerabilities and legal implications of ubiquitous AI notetakers (meeting assistants).Prompt as Code (Conceptual Highlight): To me, the concept of “thinking about the prompt as code” from the Google Gmail team was one of the most interesting conceptual points, emphasizing the need to apply secure coding and hygiene practices to the prompt itself, as it serves as an instruction set.Real-World Case Studies: I noted good real-world case studies from various firms (Trail of Bits, Wiz, others), including the use of multi-agent triage to uncover breaches.Overall, huge kudos to the team over at Knostic!But that’s not all…There were a number of other topics that I didn’t have enough time to cover in the 15-minute episode. Here are some of my thoughts below. Operationalizing Threat Modeling for LLMsOne theme was the urgent need for threat modeling tailored specifically to Large Language Models (LLMs) and generative AI systems. Traditional application security models often fall short, failing to account for the unique attack surface introduced by model weights, training data pipelines, and prompts themselves.‍Key speaker sessions highlighted a new approach focusing on three main challenges:‍Model Theft &amp; Extraction: Protecting intellectual property embedded in the model itself.Inference-Time Attacks (Prompt Injection, Evasion): Mitigating threats during real-time use.System-Level Integration Risks: Addressing vulnerabilities introduced when LLMs connect to external tools (RAG, code execution).A Shift in Attack Vectors: Focus on Evasion and MisuseWhile Prompt Injection remains a foundational concern, the conversation has matured to address more subtle and potentially damaging attack vectors.Adversarial Evasion TechniquesSeveral talks detailed advanced adversarial examples designed not just to trick the model into an undesirable output, but to subtly shift its behavior over time or bypass safety filters without obvious jailbreaking language. This requires a defensive posture that looks beyond simple keyword blocking and into semantic understanding and anomaly detection on input and output data.Misuse and Abuse by DesignThe focus is increasingly on how malicious actors can misuse the powerful capabilities of an AI system, even when it’s technically operating “as intended.” For example, using a coding assistant LLM to generate highly optimized malware code or leveraging an RAG system to exfiltrate proprietary data through cleverly crafted queries. This necessitates integrating “red teaming” early in the development lifecycle, simulating real-world abuse scenarios before deployment.The Tooling Landscape: What Practitioners Are UsingThe conference provided a fantastic overview of the tools that are actually making a difference in AI security labs today. The consensus is that no single tool provides a complete solution, so a layered defense strategy is essential.The Rise of Defense-in-Depth for AIThe core message is the need for an approach that includes:Application Layer: Prompt engineering guidelines and specific guardrails.Middleware/Proxy Layer: Dedicated AI security tools intercepting API calls for validation, sanitization, and logging.Model Layer: In-model defenses (e.g., constitutional AI, fine-tuning for robustness) and continuous monitoring of model performance and drift.Looking Ahead: The Human Element and Future ChallengesBeyond the technical deep-dives, the most engaging discussions centered around the future. Below are some of my thoughts from conversations with AI security leaders that I had at the event:We’re still in the earliest stages of securing AI adoption.We only know the challenges presented today, and we haven’t solved all of them yet. There’s almost certainly some Rumsfeld Matrix “unknown unknowns” that will emerge in the near-term and medium-term future. Everyone seems to agree that 2026 is the year that we start expanding the scope of needed AI security platforms from employee-focused to everything-focused.Humans are needed. In fact, more humans are needed than ever before, or than we currently have. I see an exciting future ahead in securing AI for companies everywhere. </p><div class="spu-placeholder" style="display:none"></div><div class="addtoany_share_save_container addtoany_content addtoany_content_bottom"><div class="a2a_kit a2a_kit_size_20 addtoany_list" data-a2a-url="https://securityboulevard.com/2026/03/unprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog/" data-a2a-title="[un]prompted: Key Insights from the AI Security Practitioners Conference – FireTail Blog"><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F03%2Funprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog%2F&amp;linkname=%5Bun%5Dprompted%3A%20Key%20Insights%20from%20the%20AI%20Security%20Practitioners%20Conference%20%E2%80%93%20FireTail%20Blog" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_linkedin" href="https://www.addtoany.com/add_to/linkedin?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F03%2Funprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog%2F&amp;linkname=%5Bun%5Dprompted%3A%20Key%20Insights%20from%20the%20AI%20Security%20Practitioners%20Conference%20%E2%80%93%20FireTail%20Blog" title="LinkedIn" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F03%2Funprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog%2F&amp;linkname=%5Bun%5Dprompted%3A%20Key%20Insights%20from%20the%20AI%20Security%20Practitioners%20Conference%20%E2%80%93%20FireTail%20Blog" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_reddit" href="https://www.addtoany.com/add_to/reddit?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F03%2Funprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog%2F&amp;linkname=%5Bun%5Dprompted%3A%20Key%20Insights%20from%20the%20AI%20Security%20Practitioners%20Conference%20%E2%80%93%20FireTail%20Blog" title="Reddit" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_email" href="https://www.addtoany.com/add_to/email?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F03%2Funprompted-key-insights-from-the-ai-security-practitioners-conference-firetail-blog%2F&amp;linkname=%5Bun%5Dprompted%3A%20Key%20Insights%20from%20the%20AI%20Security%20Practitioners%20Conference%20%E2%80%93%20FireTail%20Blog" title="Email" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share"></a></div></div><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://www.firetail.ai">FireTail - AI and API Security Blog</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by FireTail - AI and API Security Blog">FireTail - AI and API Security Blog</a>. Read the original post at: <a href="https://www.firetail.ai/blog/un-prompted-key-insights-from-the-ai-security-practitioners-conference">https://www.firetail.ai/blog/un-prompted-key-insights-from-the-ai-security-practitioners-conference</a> </p>