Тренди

AI

Бізнес

Zero Click Is Becoming the Default Attack Surface in AI-Driven Systems

Zero Click Is Becoming the Default Attack Surface in AI-Driven Systems

Jan 2, 2026

Picture the incident review nobody wants to run.

Nothing was clicked. No one opened a suspicious attachment. No one installed a shady browser extension. Yet a sensitive internal summary appears in the wrong place, a CRM record is modified, or a customer workflow executes with the wrong parameters—because an AI system “helpfully” did what it inferred it should do.

This is the new security shape of AI-driven operations: action without interaction.

Zero-click used to mean a rare, elite class of exploit. In AI systems, it increasingly means something simpler—and more operationally dangerous: untrusted context reaching an executor.

The Web4 shift that changes the threat model

If Web2 was “pages → clicks,” then Web4 (as it’s being discussed in industry) looks more like “intent → execution.”

Three forces are converging:

  1. LLM search becomes the first interface: users ask; answers appear; navigation is optional.

  2. AI agents become the first operator: agents read, plan, and trigger actions across tools.

  3. System-to-system interactions become the default: APIs, event streams, and connectors exchange instructions continuously—often with AI in the loop.

The security implication is not philosophical. It’s mechanical:

When a model is allowed to act (call tools, write records, send messages, trigger workflows), the primary entry point becomes whatever the model can read—not what a user clicks.

Microsoft describes this class of risk directly in its work on indirect prompt injection, where an LLM processes untrusted data (emails, documents, web pages) that contains instructions the model misinterprets as commands.

Zero-click wasn’t an anomaly. It was an early signal.

Traditional zero-click exploits like Pegasus demonstrated that “no interaction required” can still mean full compromise. Citizen Lab documented FORCEDENTRY, a zero-click iMessage exploit used to deploy Pegasus spyware in the wild.

That era taught a hard lesson: if the attacker can reach the parser, the rest is details.

AI systems recreate this pattern at a different layer. The “parser” is no longer just a media library or messaging stack—it’s the context ingestion pipeline (RAG, email, docs, tickets, knowledge bases) plus the tool-calling runtime.

This is why the most important zero-click incidents in 2025 weren’t OS-level exploits. They were AI-native exfiltration chains.

EchoLeak and the new meaning of “no user interaction”

One of the clearest examples is EchoLeak, described by multiple security researchers as a zero-click style vulnerabilityaffecting Microsoft 365 Copilot—where crafted content can cause sensitive data exposure without the classic “user clicked a link” step. 

The important takeaway isn’t Copilot specifically. It’s the pattern:

  • Untrusted content enters a workspace (email/document/chat)

  • The assistant ingests it as context

  • The model is induced to treat it as instruction

  • It retrieves sensitive data via internal access (RAG/Graph/connectors)

  • It exfiltrates through an allowed channel (summary, message, URL, response)

This is not “phishing.” This is permissioned leakage—a system doing what it’s authorized to do, prompted by what it never should have trusted.

Why AI agents expand blast radius faster than humans

Security teams already understand blast radius in cloud: overprivileged roles, lateral movement, shared secrets.

AI agents change blast radius in three ways:

1) They are non-human identities with continuous access.
Agents run 24/7, read broadly, and often inherit permissions designed for convenience. That’s a larger, more persistent attack surface than a human user session.

2) They collapse boundaries between “read” and “act.”
RAG and tool-calling turn passive data access into active execution. A doc isn’t just a doc—it can become an instruction source.

3) They create invisible lateral movement via workflows.
A compromised instruction can propagate across tools: email → assistant → CRM → ticketing → messaging → billing.

To make this concrete, here’s a typical agentic attack chain rendered as an execution graph (not a “list of tips,” but the operational flow security teams actually need to model):

[Untrusted Input] (email/doc/web snippet)

        v

[Context Assembly / RAG] (retrieval + ranking + chunking)

        v

[LLM Reasoning Layer] (instruction vs data confusion)

        v

[Tool Invocation / Connectors] (Graph, CRM API, ticketing, payments)

        v

[State Change + Exfiltration] (record write, message send, summary output)

This is the Web4 security problem in one diagram: the UI is no longer the gate.

Where enterprises lose control today

Enterprises aren’t losing control because they “don’t do AI security.” They lose it because their controls were designed for a world where:

  • humans initiate actions

  • apps are the boundary

  • permissions are scoped per application

  • audits follow explicit workflows

Agentic AI breaks those assumptions.

The most common failure mode is policy ambiguity: nobody can answer, precisely, which data an AI system may read, which tools it may call, under what conditions, with what logging, and with what escalation.

And in enterprise environments, ambiguity always resolves the same way: systems ship with broad access, and observability arrives later—after the first incident.

Why “LLM inside workflow” is a security problem, not a feature

Embedding LLMs into workflows is often pitched as “AI automation.” Security teams should translate it as:

“We are introducing a component that can be influenced by untrusted text, and we are giving it operational permissions.”

This is exactly why OWASP and major vendors are elevating prompt injection and indirect prompt injection as first-class risks, and why Microsoft specifically discusses defenses against indirect prompt injection in enterprise workflows.

If your workflow can change state (create, modify, approve, send, close), then the AI component is part of your control plane. Treat it like one.

How execution-layer platforms change the safety equation

This is where platforms like HAPP AI (and similar enterprise agent layers) become strategically relevant—not as “another channel,” but as an execution layer between customer communication and internal systems.

Execution layers are different from chatbots in one key way: they can be designed to enforce a closed-loop operational model:

  • integrate with systems of record (CRM/ERP/telephony)

  • log every action and decision path

  • measure outcomes (conversion, resolution, leakage, escalation rate)

  • improve flows under governance

This matters because the safest AI is rarely the smartest. It’s the one whose actions are constrained, observable, auditable, and reversible.

A practical way to express this is the policy loop enterprises already know from modern infrastructure:

Allow = Intent is validated

     AND tool scope is least-privilege

     AND action is logged

     AND output is bounded

Else = escalate / ask / refuse

A small, real-world-ish policy snippet

Here’s how integrators often start expressing “least privilege + audit” when building agentic systems (pseudocode / policy-as-code style):

# Example guardrail policy

default allow = false

allow {

  input.actor.type == "ai_agent"

  input.action in {"read_customer_status", "create_ticket", "update_order_note"}

  input.data.classification != "restricted"

  input.context.source_trust >= 80

  input.session.mfa_verified == true

}

deny_reason["restricted_data_blocked"] {

  input.data.classification == "restricted"

}

deny_reason["untrusted_context"] {

  input.context.source_trust < 80

}

The point isn’t the syntax. It’s the operating principle: AI execution must be policy-gated the same way production changes are policy-gated.

What enterprises should design for in 2026

Zero-click risk doesn’t go away with better prompts or nicer UX. It is structural.

If your organization is moving toward LLM search, agents, and system-to-system automation, “secure AI” starts looking like secure infrastructure:

The AI-native controls that matter most (execution security)

  • Least privilege for agents (scoped tokens, per-tool permissions, time-bounded access)

  • Context trust scoring (separate untrusted from trusted inputs; quarantine unknown sources)

  • Sandboxed tool execution (especially for write actions; staged approvals for high-impact operations)

  • AI firewall patterns (prompt/response inspection, injection detection, policy enforcement)

  • Full observability (tool calls, retrieved context, decision traces, and outcome logs)

The red flags that signal “zero-click ready to happen”

  • Agents can read broadly across drives/mail/CRM “for convenience”

  • RAG pulls from unvetted sources without provenance

  • Write actions are enabled without staged approvals

  • No audit trail of retrieved context + tool invocations

  • “It worked in the demo” is used as a reliability argument

The bottom line

Zero-click is becoming the default attack surface because the web is becoming less clickable and more executable.

As LLM search compresses journeys, AI agents replace manual steps, and systems talk to systems at machine speed, the most important security question shifts from “What did the user click?” to:

“What did the system ingest, and what was it allowed to execute?”

Enterprises that treat AI as a feature will bolt it onto workflows and discover the risk later. Enterprises that treat AI as infrastructure will design execution layers that are governed, observable, and constrained—so AI can scale without turning the organization into its own attacker.

Picture the incident review nobody wants to run.

Nothing was clicked. No one opened a suspicious attachment. No one installed a shady browser extension. Yet a sensitive internal summary appears in the wrong place, a CRM record is modified, or a customer workflow executes with the wrong parameters—because an AI system “helpfully” did what it inferred it should do.

This is the new security shape of AI-driven operations: action without interaction.

Zero-click used to mean a rare, elite class of exploit. In AI systems, it increasingly means something simpler—and more operationally dangerous: untrusted context reaching an executor.

The Web4 shift that changes the threat model

If Web2 was “pages → clicks,” then Web4 (as it’s being discussed in industry) looks more like “intent → execution.”

Three forces are converging:

  1. LLM search becomes the first interface: users ask; answers appear; navigation is optional.

  2. AI agents become the first operator: agents read, plan, and trigger actions across tools.

  3. System-to-system interactions become the default: APIs, event streams, and connectors exchange instructions continuously—often with AI in the loop.

The security implication is not philosophical. It’s mechanical:

When a model is allowed to act (call tools, write records, send messages, trigger workflows), the primary entry point becomes whatever the model can read—not what a user clicks.

Microsoft describes this class of risk directly in its work on indirect prompt injection, where an LLM processes untrusted data (emails, documents, web pages) that contains instructions the model misinterprets as commands.

Zero-click wasn’t an anomaly. It was an early signal.

Traditional zero-click exploits like Pegasus demonstrated that “no interaction required” can still mean full compromise. Citizen Lab documented FORCEDENTRY, a zero-click iMessage exploit used to deploy Pegasus spyware in the wild.

That era taught a hard lesson: if the attacker can reach the parser, the rest is details.

AI systems recreate this pattern at a different layer. The “parser” is no longer just a media library or messaging stack—it’s the context ingestion pipeline (RAG, email, docs, tickets, knowledge bases) plus the tool-calling runtime.

This is why the most important zero-click incidents in 2025 weren’t OS-level exploits. They were AI-native exfiltration chains.

EchoLeak and the new meaning of “no user interaction”

One of the clearest examples is EchoLeak, described by multiple security researchers as a zero-click style vulnerabilityaffecting Microsoft 365 Copilot—where crafted content can cause sensitive data exposure without the classic “user clicked a link” step. 

The important takeaway isn’t Copilot specifically. It’s the pattern:

  • Untrusted content enters a workspace (email/document/chat)

  • The assistant ingests it as context

  • The model is induced to treat it as instruction

  • It retrieves sensitive data via internal access (RAG/Graph/connectors)

  • It exfiltrates through an allowed channel (summary, message, URL, response)

This is not “phishing.” This is permissioned leakage—a system doing what it’s authorized to do, prompted by what it never should have trusted.

Why AI agents expand blast radius faster than humans

Security teams already understand blast radius in cloud: overprivileged roles, lateral movement, shared secrets.

AI agents change blast radius in three ways:

1) They are non-human identities with continuous access.
Agents run 24/7, read broadly, and often inherit permissions designed for convenience. That’s a larger, more persistent attack surface than a human user session.

2) They collapse boundaries between “read” and “act.”
RAG and tool-calling turn passive data access into active execution. A doc isn’t just a doc—it can become an instruction source.

3) They create invisible lateral movement via workflows.
A compromised instruction can propagate across tools: email → assistant → CRM → ticketing → messaging → billing.

To make this concrete, here’s a typical agentic attack chain rendered as an execution graph (not a “list of tips,” but the operational flow security teams actually need to model):

[Untrusted Input] (email/doc/web snippet)

        v

[Context Assembly / RAG] (retrieval + ranking + chunking)

        v

[LLM Reasoning Layer] (instruction vs data confusion)

        v

[Tool Invocation / Connectors] (Graph, CRM API, ticketing, payments)

        v

[State Change + Exfiltration] (record write, message send, summary output)

This is the Web4 security problem in one diagram: the UI is no longer the gate.

Where enterprises lose control today

Enterprises aren’t losing control because they “don’t do AI security.” They lose it because their controls were designed for a world where:

  • humans initiate actions

  • apps are the boundary

  • permissions are scoped per application

  • audits follow explicit workflows

Agentic AI breaks those assumptions.

The most common failure mode is policy ambiguity: nobody can answer, precisely, which data an AI system may read, which tools it may call, under what conditions, with what logging, and with what escalation.

And in enterprise environments, ambiguity always resolves the same way: systems ship with broad access, and observability arrives later—after the first incident.

Why “LLM inside workflow” is a security problem, not a feature

Embedding LLMs into workflows is often pitched as “AI automation.” Security teams should translate it as:

“We are introducing a component that can be influenced by untrusted text, and we are giving it operational permissions.”

This is exactly why OWASP and major vendors are elevating prompt injection and indirect prompt injection as first-class risks, and why Microsoft specifically discusses defenses against indirect prompt injection in enterprise workflows.

If your workflow can change state (create, modify, approve, send, close), then the AI component is part of your control plane. Treat it like one.

How execution-layer platforms change the safety equation

This is where platforms like HAPP AI (and similar enterprise agent layers) become strategically relevant—not as “another channel,” but as an execution layer between customer communication and internal systems.

Execution layers are different from chatbots in one key way: they can be designed to enforce a closed-loop operational model:

  • integrate with systems of record (CRM/ERP/telephony)

  • log every action and decision path

  • measure outcomes (conversion, resolution, leakage, escalation rate)

  • improve flows under governance

This matters because the safest AI is rarely the smartest. It’s the one whose actions are constrained, observable, auditable, and reversible.

A practical way to express this is the policy loop enterprises already know from modern infrastructure:

Allow = Intent is validated

     AND tool scope is least-privilege

     AND action is logged

     AND output is bounded

Else = escalate / ask / refuse

A small, real-world-ish policy snippet

Here’s how integrators often start expressing “least privilege + audit” when building agentic systems (pseudocode / policy-as-code style):

# Example guardrail policy

default allow = false

allow {

  input.actor.type == "ai_agent"

  input.action in {"read_customer_status", "create_ticket", "update_order_note"}

  input.data.classification != "restricted"

  input.context.source_trust >= 80

  input.session.mfa_verified == true

}

deny_reason["restricted_data_blocked"] {

  input.data.classification == "restricted"

}

deny_reason["untrusted_context"] {

  input.context.source_trust < 80

}

The point isn’t the syntax. It’s the operating principle: AI execution must be policy-gated the same way production changes are policy-gated.

What enterprises should design for in 2026

Zero-click risk doesn’t go away with better prompts or nicer UX. It is structural.

If your organization is moving toward LLM search, agents, and system-to-system automation, “secure AI” starts looking like secure infrastructure:

The AI-native controls that matter most (execution security)

  • Least privilege for agents (scoped tokens, per-tool permissions, time-bounded access)

  • Context trust scoring (separate untrusted from trusted inputs; quarantine unknown sources)

  • Sandboxed tool execution (especially for write actions; staged approvals for high-impact operations)

  • AI firewall patterns (prompt/response inspection, injection detection, policy enforcement)

  • Full observability (tool calls, retrieved context, decision traces, and outcome logs)

The red flags that signal “zero-click ready to happen”

  • Agents can read broadly across drives/mail/CRM “for convenience”

  • RAG pulls from unvetted sources without provenance

  • Write actions are enabled without staged approvals

  • No audit trail of retrieved context + tool invocations

  • “It worked in the demo” is used as a reliability argument

The bottom line

Zero-click is becoming the default attack surface because the web is becoming less clickable and more executable.

As LLM search compresses journeys, AI agents replace manual steps, and systems talk to systems at machine speed, the most important security question shifts from “What did the user click?” to:

“What did the system ingest, and what was it allowed to execute?”

Enterprises that treat AI as a feature will bolt it onto workflows and discover the risk later. Enterprises that treat AI as infrastructure will design execution layers that are governed, observable, and constrained—so AI can scale without turning the organization into its own attacker.

Ready to transform your customer calls? Get started in minutes!

Automate call and order processing without involving operators

Ready to transform your customer calls? Get started in minutes!