AI is the New Threat Surface

Why Canadian Organizations Must Rethink Cybersecurity in the Age of AI

In June 2025, a single email compromised Microsoft 365 Copilot. The vulnerability, dubbed EchoLeak (CVE-2025-32711), required no clicks, no attachments, no user interaction whatsoever. The AI assistant simply processed the malicious email as part of its normal operation – and silently exfiltrated emails, documents, Teams messages, and SharePoint content to attacker-controlled servers. Microsoft patched it quickly. But the lesson was clear: the helpful AI assistant had become an unwitting accomplice in data theft.

This is what happens when AI becomes the attack surface. And it’s happening now, across every industry, at scale.

Canadian organizations are adopting AI rapidly – usage among businesses doubled in just one year. The Canadian Centre for Cyber Security’s National Cyber Threat Assessment 2025–2026 identifies AI-enabled attacks as the number one defining trend reshaping Canada’s threat environment, as we enter what they call “a new era of cyber vulnerability.”

How Has AI Changed the Game?

AI systems create vulnerabilities that conventional security controls weren’t designed to address:

 

1. AI can be steered in the wrong direction. Attackers don’t “hack” AI in the traditional sense – they influence it. Carefully crafted prompts, poisoned data, or misleading context can cause models to leak sensitive information, ignore guardrails, or behave unexpectedly.

 

2. AI acts autonomously, at scale. When AI systems call tools, trigger workflows, or coordinate with other agents, a single bad instruction can cascade into privilege escalation, unauthorized system changes, or data exfiltration – all without human intervention. EchoLeak demonstrated this exact point.

 

3. People trust AI too much. AI outputs carry an unearned sense of authority. When the model is wrong or subtly manipulated, those errors propagate through decisions and downstream systems faster than traditional software bugs ever could.

Why Can’t Traditional Security Cope?

Here’s the problem: your existing security stack wasn’t built for this.

A firewall can’t distinguish between a legitimate query and a prompt injection – a type of attack where an adversary crafts input to manipulate an AI’s behaviour – because both look like normal text. An endpoint detection system won’t flag an AI assistant following its programming, even when that programming is being exploited. Your SIEM won’t alert on an AI agent accessing files it’s authorized to access, even if it’s doing so at an attacker’s instruction. These attacks don’t trigger traditional alerts because they exploit intended behaviour, not software bugs.

The OWASP Top 10 for LLM Applications ranks prompt injection as the number one security risk for AI systems – a threat category that didn’t even exist five years ago. And in December 2025, OpenAI acknowledged that prompt injection “is unlikely to ever be fully ‘solved’” – placing the onus on organizations to invest in continuous risk management, layered defenses, and ongoing monitoring rather than expecting a “patch”.

The Attacks Are Already Happening

Threat actors are already targeting AI systems because they offer capabilities traditional endpoints don’t: privileged access to sensitive data combined with autonomous action. A compromised AI assistant can read emails, search files, and take actions on behalf of users – all by design.

Examples of these attacks include:

  • Zero-click AI compromises: At the Chaos Communication Congress in December 2025, security researcher Johann Rehberger demonstrated how AI coding agents could be compromised via prompt injections embedded in code repositories. He showcased an “AI virus” that replicates across systems – one compromised repository infects others when developers use AI tools to examine it. His message: “You should always assume breach. The agent gets compromised. What can it do?”

  • AI-targeted malware: In June 2025, Check Point researchers discovered malware dubbed “Skynet,” embedding prompt injections designed to manipulate AI-powered security tools into declaring malicious samples safe. Although the specific attempt failed, the direction is significant. Attackers are developing techniques to deceive AI-powered defenses. As Check Point observed: “First, we had the sandbox, which led to hundreds of sandbox escape techniques; now, we have the AI malware auditor. The natural result is hundreds of attempted AI audit escapes.”

 

 How Does Agentic AI Raise the Stakes?

The threat surface grows further as organizations adopt agentic AI – systems that do more than answer questions; they take autonomous actions. These agents can discover resources, invoke workflows, interact with other services, and make decisions with minimal human oversight. While the productivity benefits are real, the security implications are significant.

An AI agent can be manipulated to influence other AI agents or systems through “second-order” prompt injections, causing cascading unauthorized actions. A seemingly benign input can trigger chains of decisions that result in data breaches, privilege escalation, or policy violations – often without triggering traditional alerts.

Organizations must consider security by design when developing and implementing agentic AI. The agents themselves are vulnerable to manipulation if not properly constrained. And the tools used to build them carry their own risks – researchers have documented over 30 vulnerabilities in popular AI coding tools including GitHub Copilot and Cursor, with flaws enabling credential theft, malicious code execution, and compromised development environments. Without appropriate governance and guardrails, an AI agent is not a productivity tool – it is an attack vector waiting to be exploited.

 

Looking Beyond Traditional Security Operations

The emergence of AI as both tool and target also demands a rethinking of how organizations structure their security operations. Traditional Security Operations Centres were built to detect and respond to conventional cyber threats – malware signatures, network anomalies, known attack patterns. AI-enabled threats operate differently.

For years, we’ve treated humans as the first line of defence,” observes Andrew Buckles, EVP, Services at ISA Cybersecurity. “We’ve trained them and tested them; we’ve filtered what reaches them, and monitored their behaviour. We built security awareness programs and controls because people were a primary attack path. That same discipline must now extend beyond people. AI agents are becoming identities inside the business, making it essential for organizations to build the same level of resilience and trust across this new threat surface.”

 

Is Canada Ready?

AI adoption is accelerating rapidly. According to a recent report, nearly nine in ten Canadian organizations are now using generative AI tools, and 42% of IT decision-makers ranked generative AI as their top budget priority for 2025 – on par with cybersecurity investments. Microsoft’s latest SMB Report found that 71% of Canadian small and medium-sized businesses now actively use AI in their operations.

However, organizational readiness is not keeping pace with this expansion. According to a 2025 industry report, nearly half of Canadian IT decision-makers cite insufficient staff expertise in AI as their biggest implementation challenge. And despite 94% of Canadian organizations either using or planning to use AI-enabled cybersecurity solutions, only 41% of board members feel they fully understand the risks posed by AI.

This readiness gap is where risk lives. Canadian organizations spent over C$1.2 billion on recovery from cyber attacks in 2023 alone – double the amount from two years earlier. The average cost of a data breach in Canada in 2025 reached C$6.98 million. As AI expands the threat surface, these numbers will only grow – unless we act decisively.

 

 

What Organizations Should Do Now

Building resilience and maturity against AI threats requires a comprehensive approach spanning governance, assessment, engineering, and response:

1. Establish AI governance and oversight. Create clear policies governing AI use, including acceptable use guidelines, data handling requirements, and security standards. Designate accountability for AI risk management at the board and executive levels. Address shadow AI – the unauthorized tools employees may already be using – and ensure that even authorized AI tools have appropriate data handling controls. Employees routinely paste sensitive information into AI assistants without understanding where that data goes or how it’s retained. Integrate AI security into your broader risk management program, aligning with frameworks like the NIST AI Risk Management Framework. Ensure AI considerations are embedded into enterprise risk management, third-party risk assessments, and business continuity planning.

2. Manage your AI attack surface. Inventory all AI systems deployed across your organization. Understand where AI interacts with sensitive data, makes decisions that affect operations, or interfaces with external systems. Pay particular attention to agentic AI deployments – systems that can discover resources, invoke workflows, or take actions autonomously present compounding risk if not properly constrained. Conduct thorough risk assessments, and consider red-teaming exercises that attempt to exploit AI vulnerabilities through prompt injection and model manipulation. Traditional penetration testing methodologies need to expand to cover these new attack vectors.

3. Implement AI-specific defensive controls. Apply defence-in-depth principles tailored to AI deployments: input validation and sanitization, output monitoring, access controls, audit logging of AI interactions, and regular security assessments of AI systems. Integrate AI security into your existing network protection, cloud security, and data protection programs. Don’t assume that because a tool comes from a major vendor, it’s inherently secure – the EchoLeak incident demonstrated that even industry-leading platforms can harbour critical flaws.

4. Build AI incident response capabilities. Traditional incident response playbooks weren’t designed for scenarios where an AI system has been manipulated or compromised. Develop AI-specific procedures for detecting AI-related incidents, containing compromised AI systems, investigating AI-specific attack vectors, and recovering AI services safely. Train security analysts on prompt injection, model manipulation, and emerging AI threats. The skills gap in AI security is real and widening – whether through internal training, strategic hiring, or partnerships with specialized service providers, ensure your organization has access to the expertise needed to secure AI systems effectively.

 

The Path Forward

The integration of AI into business operations represents one of the most significant technological shifts in a generation. With it comes tremendous opportunity – and commensurate risk. Canadian organizations that approach AI security strategically, with clear governance frameworks, rigorous assessment, appropriate controls, and skilled teams, will be positioned to capture AI’s benefits while managing its risks effectively.

Those that treat AI security as an afterthought will find themselves increasingly vulnerable to adversaries who have already recognized what’s at stake. The threat landscape is evolving at machine speed. Organizations must evolve with it.

ISA Cybersecurity helps Canadian organizations navigate the complexities of AI security through our AI 360 services – a comprehensive framework addressing AI governance, risk assessment, secure engineering, and incident detection and response. Our Cyber 360 services provide the complementary security foundations – from penetration testing and red-teaming to managed detection and incident response – that AI programs require to operate securely.

Contact us today to learn how we can help your organization build resilience against emerging AI-enabled threats.

NEWSLETTER

Get exclusively curated cyber insights and news in your inbox

Contact Us Today

SUBSCRIBE

Get monthly proprietary, curated updates on the latest cyber news.