News

Exploiting Google Gemini to Abuse Calendar Invites Illustrates AI Threats

  • Jeffrey Burt--securityboulevard.com
  • published date: 2026-01-20 00:00:00 UTC

None

<p>A vulnerability in <a href="https://securityboulevard.com/2025/12/indirect-malicious-prompt-technique-targets-google-gemini-enterprise/" target="_blank" rel="noopener">Google’s Gemini AI model</a> that could allow bad actors to go around privacy controls in Google Calendar and access and leak private meeting data is the latest example of how traditional cybersecurity struggles to keep up with <a href="https://securityboulevard.com/2025/09/from-prompt-injection-to-a-poisoned-mind-the-new-era-of-ai-threats/" target="_blank" rel="noopener">threats posed by AI</a>, according to researchers with application security firm Miggo.</p><p>Using an <a href="https://sites.google.com/view/invitation-is-all-you-need/home" target="_blank" rel="noopener">indirect prompt injection</a> technique, they were able to manipulate Gemini and abuse its role as an assistant for Google Calendar. Gemini can parse the full context of a user’s calendar events, from titles and time to attendees and descripts, which allows it to answer questions the user might have, including about their schedule on a particular day.</p><p>“The mechanism for this attack exploits that integration,” Liad Eliyahu, head of research for Miggo, <a href="https://www.miggo.io/post/weaponizing-calendar-invites-a-semantic-attack-on-google-gemini" target="_blank" rel="noopener">wrote in a report</a> this week. “Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute.”</p><p>The researchers were able to exploit the vulnerability with three steps. They created a new calendar event that included an embedded prompt-injection payload that instructed Gemini to summarize all of the targeted user’s meeting on a specific day – this included private meetings – and sent an invite the user.</p><p>The promote also told Gemini to exfiltrate the data by writing it into the description of a new calendar event and then gave the user a harmless response – “it’s a free time slot” – to hide its intent.</p><p>“The payload was syntactically innocuous, meaning it was plausible as a user request,” Eliyahu wrote. “However, it was semantically harmful … when executed with the model tool’s permissions.”</p><h3>Payload Comes into Play</h3><p>The payload was kicked into action when the user asked Gemini an everyday question about their schedule, such as “Hey, Gemini, am I free on Saturday?” The request led to Gemini loading and parsing all relevant calendar events, including the hacker’s malicious one, and activating the payload.</p><p>To the user, Gemini appeared to be acting normally when replying, “it’s a free time slot.”</p><p>“Behind the scenes, however, Gemini created a new calendar event and wrote a full summary of our target user’s private meetings in the event’s description,” he wrote. “In many enterprise calendar configurations, the new event was visible to the attacker, allowing them to read the exfiltrated private data without the target user ever taking any action.”</p><p>Miggo alerted Google to the vulnerability, with the tech giant confirming the findings and mitigating the flaw.</p><h3>Syntactic vs. Semantic Threats</h3><p>That said, the bigger issue is that the exploitation of the flaw highlights how securing AI-based applications is a different challenge for security teams.</p><p>“Traditional application security (AppSec) is largely syntactic,” Eliyahu wrote. “We look for high-signal strings and patterns, such as SQL payloads, script tags, or escaping anomalies, and block or sanitize them. … In contrast, vulnerabilities<strong> </strong>in<strong> </strong>LLM [large language model] powered systems are semantic. This shift shows how simple pattern-based defenses are inadequate. Attackers can hide intent in otherwise benign language, and rely on the model’s interpretation of language to determine the exploitability.”</p><p>In Miggo’s testing, Gemini did act as a chat interface, but also operated as an application layer with access to tools and APIs, with Eliyahu noting that “when an application’s API surface is natural language, the attack layer becomes ‘fuzzy.’ Instructions that are semantically malicious can look linguistically identical to legitimate user queries.”</p><p>He added that “this Gemini vulnerability isn’t just an isolated edge case. Rather, it is a case study in how detection is struggling to keep up with AI-native threats.”</p><h3>New Security Thinking Needed</h3><p>In response, security teams need to move beyond keywork blocking and create runtime systems that can reason about semantics, attribute intent, and track data provenance, creating security controls that treat LLMs as full application layers with privileges that can be governed.</p><p>“Securing the next generation of AI-enabled products will be an interdisciplinary effort that combines model-level safeguards, robust runtime policy enforcement, developer discipline, and continuous monitoring,” Eliyahu wrote. “Only with that combination can we close the semantic gaps attackers are now exploiting.”</p><div class="spu-placeholder" style="display:none"></div><div class="addtoany_share_save_container addtoany_content addtoany_content_bottom"><div class="a2a_kit a2a_kit_size_20 addtoany_list" data-a2a-url="https://securityboulevard.com/2026/01/exploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats/" data-a2a-title="Exploiting Google Gemini to Abuse Calendar Invites Illustrates AI Threats"><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Fexploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats%2F&amp;linkname=Exploiting%20Google%20Gemini%20to%20Abuse%20Calendar%20Invites%20Illustrates%20AI%20Threats" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_linkedin" href="https://www.addtoany.com/add_to/linkedin?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Fexploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats%2F&amp;linkname=Exploiting%20Google%20Gemini%20to%20Abuse%20Calendar%20Invites%20Illustrates%20AI%20Threats" title="LinkedIn" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Fexploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats%2F&amp;linkname=Exploiting%20Google%20Gemini%20to%20Abuse%20Calendar%20Invites%20Illustrates%20AI%20Threats" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_reddit" href="https://www.addtoany.com/add_to/reddit?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Fexploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats%2F&amp;linkname=Exploiting%20Google%20Gemini%20to%20Abuse%20Calendar%20Invites%20Illustrates%20AI%20Threats" title="Reddit" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_email" href="https://www.addtoany.com/add_to/email?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Fexploiting-google-gemini-to-abuse-calendar-invites-illustrates-ai-threats%2F&amp;linkname=Exploiting%20Google%20Gemini%20to%20Abuse%20Calendar%20Invites%20Illustrates%20AI%20Threats" title="Email" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share"></a></div></div>