/dq/media/media_files/2025/04/28/eMMYbxHVJfOJaUVgrxZK.jpg)
The year 2026 is being marked as the ‘Year of the AI agent’, and the enterprise industry is transitioning where humans and autonomous digital entities operate side-by-side to drive business outcomes. However, a recent report released by Microsoft warns that the rapid adoption is creating a massive visibility gap that threatens to turn these productivity boosters into an unintended double agent.
Microsoft’s first-party telemetry from November 2025 revealed a staggering reality: over 80% of Fortune 500 companies are now deploying active AI agents. Crucially, these are not just IT-led initiatives, but the proliferation is driven by low-code/ no-code tools like Copilot Studio and Agent Builder, democratising agent creation across the workforce.
The report breaks down the global agentic workforce by industry and region. Manufacturing leads global adoption at 13%, followed closely by Financial Services at 11% and Retail at 9%. The Technology sector itself remains the highest adopter at 16%.
While looking at the geographical spread, Europe, the Middle East, and Africa (EMEA) currently host the largest share of active agents at 42%, followed by the United States at 29%, Asia at 19%, and the Americas at 10%.
Despite this expansion, the readiness gap looks stark. Microsoft’s Data Security Index indicates that only 47% of organisations have implemented specific GenAI security controls. And, this lack of oversight has birthed a new wave of Shadow AI, with 29% of employees admitting to using unsanctioned AI agents to complete work tasks.
Double Agents emerge as a threat
The most alarming finding in the report is the emergence of the Double Agent, an AI entity that, through excessive permissions or manipulated instructions, becomes a security liability.
Microsoft’s AI Red Team and Defender researchers have documented an attack vector known as “Memory Poisoning” (MITRE ATLAS AML.T0080). In these campaigns, attackers use deceptive interface elements, such as “Summarise with AI” buttons or specially crafted URLs, to inject persistent, unauthorised instructions into an agent’s memory.
Once injected, the agent treats these malicious prompts as legitimate user preferences because the agent’s memory lacks semantic validation; these instructions remain latent until triggered weeks later, leading to invisible and persistent data exfiltration.
The Zero Trust mandate
In terms to combat these risks, Microsoft is urging to treat AI agents like human employees. This means applying strict Zero Trust principles; explicit verification, least privilege access, and an assumption of compromise to every digital identity.
The recommendation for this new defence strategy is Agent 365, a unified control plane designed to provide “Observability” across five core pillars:
- Centralised registry
- Identity-driven access
- Real-time telemetry
- Interoperability
- Built-in protections
A roadmap for 2026
The report concludes with a seven-point checklist for leaders to secure their AI transformation. Microsoft recommends curbing Shadow AI by providing secure, IT-approved alternatives. Define the specific purpose of every agent and block broad privileges. Update Business Continuity plans to include ‘tabletop exercise’ for AI failure modes. And, elevate AI risks to the board level. Alongside financial and operational risks.
Source: Cyber Pulse: An AI Security Report
/dq/media/agency_attachments/UPxQAOdkwhCk8EYzqyvs.png)
Follow Us