A close up of a cell phone with icons on it

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot

A critical zero-click vulnerability in Microsoft 365 Copilot allows attackers to exfiltrate sensitive data without any user interaction, highlighting risks in AI-driven enterprise systems.

AI RISK INTELLIGENCEAI VULNERABILITIES

Harshaun

8/13/20252 min read

A close up of a cell phone with icons on it
A close up of a cell phone with icons on it

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction

Summary: A newly discovered zero-click AI vulnerability, named EchoLeak (CVE-2025-32711), allows attackers to exfiltrate sensitive data from Microsoft 365 Copilot without any user interaction. The vulnerability exploits a scope violation in the large language model (LLM) context, enabling unauthorized access to privileged internal data.

Incident Details: The EchoLeak vulnerability was identified by Aim Security and reported to Microsoft. It involves embedding a malicious prompt payload within markdown-formatted content, such as an email. When an employee interacts with Microsoft 365 Copilot, the system inadvertently combines untrusted input with sensitive data, leading to unintended data retrieval. This process occurs without any explicit user action, making it a classic example of a zero-click attack.

Official / Researcher Comments: Microsoft has acknowledged the issue and addressed it in their June 2025 Patch Tuesday updates. The company has assigned a CVSS score of 9.3 to the vulnerability, indicating its critical nature. Aim Security, the firm that discovered the flaw, emphasized the risks associated with LLM scope violations and the potential for indirect prompt injections leading to unauthorized data access.

Expert Analysis: The EchoLeak vulnerability underscores the challenges in securing AI-driven systems, especially those integrated with enterprise platforms like Microsoft 365. The ability of attackers to exploit LLM context violations without user interaction highlights the need for robust safeguards and continuous monitoring of AI systems to prevent such exploits.

AI / Cybersecurity Angle: As AI systems become more integrated into organizational workflows, ensuring their security is paramount. Vulnerabilities like EchoLeak demonstrate the potential risks associated with AI agents and the importance of implementing strict access controls, input validation, and regular security assessments to protect sensitive data.

What’s Next: Organizations using Microsoft 365 Copilot are advised to apply the latest security patches and review their AI integration practices. It's crucial to implement security measures that prevent unauthorized data access and ensure that AI systems operate within defined trust boundaries.

Reader Security Tips:

  • Regularly update AI-driven applications and platforms to the latest security patches.

  • Implement strict access controls and input validation mechanisms.

  • Monitor AI system interactions for unusual or unauthorized activities.

  • Educate employees about potential AI-related security risks and best practices.

  • Collaborate with cybersecurity experts to assess and enhance AI system security.

Sources: The Hacker News - "Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction" (June 12, 2025), Microsoft Security Response Center — CVE-2025-32711 Advisory.

WatchDog Wire

Bridging the gap between AI innovation and cybersecurity. Explore our AI Risk Intelligence & Governance Briefs.