AI has changed the cyber threat landscape

Artificial intelligence is now a first‑order cyber risk driver, as well as a productivity tool. AI‑accelerated threats, fragile model “harnesses,” and concentrated dependence on a few vendors demand urgent uplift in governance, resilience, and incident response.

Share
AI has changed the cyber threat landscape
Photo by Igor Omilaev / Unsplash

Artificial intelligence is not just another IT trend. It is fundamentally changing the cyber threat landscape and the expectations of regulators. In the past few weeks alone, ASIC and APRA have both written to industry warning that AI is accelerating cyber risk and that governance needs to catch up fast.

The message to boards and executives is clear: you cannot treat AI and cybersecurity as separate conversations any longer. They are now the same conversation.

What the regulators are really saying

In early May 2026, ASIC issued an open letter to licensees and directors calling for “urgency, focus and accountability” in how organisations uplift their cyber security to withstand AI‑accelerated threats. The letter frames cyber resilience as a core licensing obligation and explicitly expects firms to extend existing governance, risk, and cyber frameworks to cover AI‑specific risks. ASIC is particularly focused on strengthening governance and risk frameworks for AI, protecting critical assets, and ensuring that escalation and decision‑making processes are fit for an AI‑enabled threat environment.

Just days earlier, APRA had written to banks, insurers, and super funds, warning that AI adoption is outpacing the sector’s ability to manage the new risks it creates. APRA’s review found that governance, risk management, assurance, and operational resilience practices are not keeping up with the scale, speed, and complexity of AI deployments. It highlighted concentration risk from over‑reliance on a small number of AI providers, gaps in contingency planning, and weaknesses across the full AI lifecycle, including third‑party arrangements.

APRA singled out frontier models such as Anthropic’s Claude Mythos, warning that they could materially increase both the likelihood and scale of cyber attacks by helping malicious actors discover and exploit vulnerabilities faster than many institutions can patch them. In other words, the tools your teams are experimenting with today are also empowering your adversaries.

How AI is changing the cyber threat landscape

AI is amplifying cyber risk in several key ways that boards and executives need to understand.

AI is supercharging bad-actors. Generative models can write convincing phishing emails, clone voices, and generate fake identities at scale, lowering the bar for sophisticated social engineering. Deepfake images, audio, and video can now be crafted to impersonate executives and trusted advisers, making traditional “trust the channel” heuristics increasingly unreliable.

AI is also automating and optimising technical attacks. Models like Claude Mythos can help attackers rapidly identify vulnerabilities, chain exploits, and even generate polymorphic malware that continuously mutates to evade detection. This shortens the attack cycle and can quickly overwhelm traditional patching and remediation processes that were already struggling to keep up with high‑severity vulnerabilities.

AI systems themselves introduce new attack surfaces. Models can be manipulated through prompt injection, data poisoning, and adversarial inputs, leading them to leak sensitive information, make unsafe decisions, or degrade the performance of security tools that rely on AI. When AI is embedded deeply into business processes - credit decisioning, trading, underwriting, claims, customer onboarding - these vulnerabilities become business‑critical, not theoretical.

The combined effect is that cyber risk is now becoming more dynamic, more scalable, and more tightly coupled to AI governance than ever before.

Claude’s recent issues as a case study in operational risk

The recent turbulence around Anthropic’s Claude models offers a useful illustration of how AI‑related issues can quickly become operational and reputational risks. In March 2026, Anthropic reported “increased errors” and “reduced performance” for its latest Claude Opus 4.6 model, including the Claude app and console, even as it topped Apple’s free app charts. For enterprises starting to embed Claude into critical workflows, those elevated error rates translate directly into degraded service, productivity losses, and potentially customer‑visible failures.

For non‑specialists, it helps to understand what a “harness” is in this context (here is a video about them). The AI model itself is the clever brain, but the harness is the surrounding software and infrastructure that decides how that brain is used: how much “thinking time” it gets, what tools and data it can access, how it remembers previous steps, and how it formats answers. A useful analogy is that the model is like an engine, while the harness is everything around it - the gearbox, dashboard, safety systems, and controls - that turns raw power into a safe, predictable car you can actually drive.

In April, Anthropic published a post‑mortem explaining that the problems were not due to the underlying model weights regressing, but to three changes in the harness around the models: a shift in default reasoning effort, a caching logic bug that effectively wiped the model’s short‑term memory every turn, and updated verbosity prompts. These relatively small configuration and infrastructure changes produced outsized impacts on perceived capability and reliability for users, especially on complex tasks.

For boards and executives, the lesson is that AI risk is not just about model accuracy or bias; it is also about the mundane but critical realities of software engineering, release management, monitoring, and incident response. If you are consuming AI as a service, you are exposed not only to your own change‑management practices but also to your provider’s, including the way they roll out harness changes and configuration tweaks. Third-party risk is still a thing even in the marvellous world of AI.

What companies should do now

Regulators are clearly expecting disciplined, risk‑based action for the businesses that come under their umbrellas. A practical response for companies should include at least the following steps.

  1. Treat AI as a first‑class cyber risk driver
    Make AI a standing item in cyber risk reporting to the board, alongside traditional threat intelligence and incident updates. Ensure your cyber risk appetite explicitly addresses AI‑enabled attacks and the use of AI in your own operations.
  2. Map your AI attack surface
    Build and maintain an inventory of AI use cases, models, and providers across the organisation, including embedded AI features in SaaS products. Identify which systems, datasets, and processes would cause the greatest harm if an AI component failed, was compromised, or was abused, and treat them as critical assets for protection and monitoring.
  3. Uplift governance and lifecycle controls
    Extend existing model risk management, change‑management, and assurance processes to cover AI systems from design through to decommissioning, including prompt engineering, fine‑tuning, and guardrails. Require security testing and code review for AI‑generated or AI‑assisted code, with clear policies for how teams can safely use tools like Claude, GitHub Copilot, and others.
  4. Address third‑party and concentration risk
    Review dependence on a small number of AI providers, especially where multiple critical use cases sit on the same platform. Ensure contracts, exit plans, and contingency arrangements reflect the reality that model behaviour, harness configuration, and service levels may change frequently - as Anthropic’s recent experience shows.
  5. Invest in people, process, and resilience
    Uplift cyber hygiene and fundamentals first; ASIC has been explicit that firms should not wait for advanced AI tools to fix basic weaknesses. At the same time, build capability in your security and risk teams to understand AI‑specific threats, run adversarial testing of AI systems, and use AI defensively for threat hunting and monitoring.
  6. Practice AI‑specific incident response scenarios
    Update playbooks to cover AI incidents: prompt injection against customer‑facing chatbots, data leakage from training or fine‑tuning, model outages at a critical provider, or an AI‑assisted mass phishing campaign against staff. Run tabletop exercises that include your AI product owners, data teams, and external providers so everyone understands roles and escalation paths.

From experimentation to accountable stewardship

Most organisations are still in the experimental phase with generative AI, piloting use cases at the edge of the business rather than in core systems. Yet attackers, unconstrained by governance processes, are moving much faster. ASIC and APRA’s recent letters are a signal that the window for casual experimentation with AI - without strong governance and cyber oversight - is closing.

For boards and executives, the challenge is to move from AI enthusiasm to AI stewardship: to harness the productivity and innovation benefits of AI while taking seriously the new ways it can fail and the new threats it enables. That shift will define which organisations can safely scale AI, satisfy regulators, and maintain trust in an increasingly adversarial digital environment.

All views on this site are my own and not those of any employer or organisation.