Cybersecurity is entering a transformational period. Over the past two years, a series of high-profile incidents has shown that artificial intelligence is no longer merely a supporting tool. It has become a central element in both cyberattacks and cyberdefence.
In early 2024, researchers demonstrated how attackers exploited Anthropic’s Claude through an indirect prompt injection attack, tricking the model into exfiltrating data using its own code interpreter. Malicious instructions were embedded into seemingly harmless input, allowing the model to execute tasks that circumvented traditional safeguards. This attack exposed a striking new reality: modern AI agents, especially those with external access, introduce novel vulnerabilities that did not exist in traditional systems.
Around the same time, Microsoft Threat Intelligence confirmed that state-sponsored groups from China, Russia, Iran, and North Korea had begun using large language models (LLMs) to accelerate reconnaissance, craft more convincing phishing content, and analyse stolen datasets. The report noted that while the models did not directly execute cyberattacks, their ability to automate and enhance early-stage operations has meaningfully shortened the attacker timeline.
CISA echoed these concerns in its 2024 “Secure by Design” advisories, warning that LLM-powered systems expand the attack surface by enabling new classes of prompt-based manipulation, automated exploitation, and cross-system chaining. Meanwhile, NIST began formalizing a new category, AI system vulnerabilities, in its early drafts of the AI Risk Management Framework, signalling the growing recognition that traditional cyber controls are insufficient for AI-integrated environments.
At the same time, defenders are also advancing. IBM’s Cost of a Data Breach Report found that organizations using AI-driven detection and response reduced breach lifecycles by nearly 100 days, dramatically lowering containment costs. MITRE’s newest ATT&CK updates now include AI-related techniques and defensive guidance, acknowledging the growing presence of AI in both offence and defence.
In short: AI has become a force multiplier for attackers and defenders alike.
The interview conducted for this article with cybersecurity analyst Ken DSouza reinforced the same conclusion. Throughout the discussion, key themes emerged: the growth of AI-enabled attack tactics, the operational struggle of CISOs trying to keep pace with rapidly evolving environments, the rise of AI-enhanced SOC workflows, and the governance challenges introduced by machine-speed decision-making.
The result is a cybersecurity landscape defined by acceleration. Attackers are faster. Defenders are faster. And the organizations that adapt will determine whether AI becomes an advantage or a liability.
AI as a Catalyst for New Attack Vectors
When asked how AI is changing the threat landscape, DSouza pointed to the same indirect prompt injection attack on Claude as a defining example. This incident revealed what many experts now call the “lethal trifecta” of AI security risk:
- A powerful model capable of following complex or ambiguous instructions.
- External access, such as code interpreters or third-party connectors, even when tightly controlled.
- Prompt-based control, where system instructions and user input are not fully separated.
This combination creates a new class of vulnerabilities unique to modern AI agents.
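To make the trifecta concrete, consider a deliberately simplified sketch of the defensive idea it implies: untrusted content must be kept in a separate channel from system instructions and screened before a tool-using agent ever sees it. The pattern list, delimiters, and function names below are hypothetical illustrations, not a real defence; production mitigations are far more sophisticated.

```python
import re

# Hypothetical patterns that suggest embedded instructions in untrusted content.
# This is only an illustration of the principle that user-supplied data must
# never be treated as trusted instructions.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"run the following (code|command)",
]

def flag_untrusted_input(text: str) -> list[str]:
    """Return the suspicious patterns found in untrusted input."""
    return [p for p in SUSPECT_PATTERNS if re.search(p, text, re.IGNORECASE)]

def build_prompt(system_rules: str, untrusted: str) -> str:
    """Keep system instructions and untrusted data in clearly separated channels."""
    hits = flag_untrusted_input(untrusted)
    if hits:
        raise ValueError(f"possible prompt injection: {hits}")
    # Delimit untrusted content so the model can treat it as data, not commands.
    return f"{system_rules}\n\n<untrusted_data>\n{untrusted}\n</untrusted_data>"
```

Pattern matching alone cannot stop a determined attacker, which is precisely why incidents like the Claude exploit were possible: the separation between instruction and data channels is architectural, not merely lexical.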
But this is only one example. Across the industry, analysts have observed AI enabling:
- Hyper-personalized phishing generated at massive scale, using tone-matching, behavioural modelling, and real-time content adaptation.
- Autonomous reconnaissance, where AI agents scan networks, cloud assets, and publicly exposed interfaces far faster than any human attacker.
- Malware mutation, with AI modifying payloads or behaviours to evade signature-based detection.
- Automated vulnerability analysis, where AI can read source code, search for weak configurations, or chain misconfigurations across systems.
State-sponsored APT groups, in particular, have embraced this evolution.
“APTs are constantly using AI to automate their tasks so they can sprawl out and attack,” DSouza explained. The automation of low-level or repetitive attacker tasks allows threat actors to focus more time on high-value targets and lateral movement.
In short, attackers are no longer just skilled operators; they are operators augmented by highly capable AI systems.
The Monitoring Challenge: When Everything Is Changing at Once
CISOs consistently describe their biggest challenge as “keeping up with change.” Today’s environments shift hourly. Cloud resources spin up and down. Remote endpoints appear and disappear. OT systems connect to networks they were never originally designed to touch. Shadow IT expands unpredictably across business units.
DSouza highlighted an often-overlooked dimension of this problem: tech debt. Modernizing security tooling requires investment, not just in technology but in infrastructure, architecture, and people. Yet CISOs must compete for budget with every other part of the business.
“CISOs face a lot of upgrades and updates to their infrastructure,” he said, “but in order to make this happen, they have to consider the business need for such changes. They often find themselves having to convince CFOs and other leaders, and be very clear on the justification.”
This friction slows modernization at a time when the threat environment is accelerating.
AI is helping bridge this gap. Modern AI-driven platforms provide:
- Continuous asset discovery in hybrid IT/OT/cloud environments.
- Real-time vulnerability identification as configurations change.
- Risk-based prioritization that ties exposures directly to business impact.
- Anomaly detection flagging deviations in behaviour, access patterns, or data flows.
However, these tools rely on foundational visibility. Without executive buy-in to modernize architecture, CISOs cannot fully leverage the technology needed to keep up with machine-speed adversaries.
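The anomaly detection these platforms perform can be reduced to a toy statistical sketch: flag any value that deviates sharply from the baseline. The login-count scenario and the threshold below are illustrative assumptions; commercial tools use far richer behavioural models.

```python
from statistics import mean, stdev

def zscore_anomalies(counts: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices whose value deviates more than `threshold` standard
    deviations from the mean -- a toy stand-in for the behavioural anomaly
    detection that commercial platforms perform at scale."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical daily login counts for one service account; day 7 is a spike.
logins = [12, 14, 11, 13, 12, 15, 240, 13]
```

Note the design weakness this exposes: a large outlier inflates the standard deviation itself, which is one reason production systems favour robust statistics and learned baselines over simple z-scores.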
Incident Response Reinvented: The Rise of SIEM + SOAR + AI
Incident response has undergone one of the most dramatic transformations since AI entered the SOC.
DSouza explained that many organizations now maintain SOAR systems that are fully configured, even if not always active, so they can be “switched on” rapidly when needed.
“SOAR has helped reduce response time and error in general,” he noted.
This aligns with industry findings. AI-accelerated SIEM and SOAR platforms now:
- Automatically enrich alerts with contextual threat intelligence.
- Execute containment workflows without manual intervention.
- Identify correlations across massive datasets.
- Filter noise to ensure analysts focus on the highest-impact events.
In the past, analysts would spend hours gathering logs, reviewing cases, and manually verifying indicators. Today, AI can perform this enrichment in seconds.
The result is a shift in the SOC’s operating model:
- AI handles triage.
- AI handles enrichment.
- AI handles repetitive or low-risk actions.
- Humans handle investigation, judgment, and escalation.
This partnership dramatically reduces response times and decreases cognitive load on analysts—addressing both operational efficiency and burnout.
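The enrich-score-route flow described above can be sketched in a few lines. Everything here is a hypothetical illustration: the threat-intelligence set, scoring weights, and action names are invented, and real SOAR platforms express such playbooks as configurable workflows rather than hand-written code.

```python
from dataclasses import dataclass, field

# Illustrative threat-intelligence feed (hypothetical addresses from the
# TEST-NET documentation ranges).
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

@dataclass
class Alert:
    source_ip: str
    event_type: str
    asset_criticality: int          # 1 (low) .. 5 (crown jewels)
    enrichment: dict = field(default_factory=dict)

def enrich(alert: Alert) -> Alert:
    """Attach contextual intelligence to the alert automatically."""
    alert.enrichment["known_bad_ip"] = alert.source_ip in KNOWN_BAD_IPS
    return alert

def triage(alert: Alert) -> str:
    """Score the enriched alert and route it: automate the routine,
    escalate the judgment calls to a human analyst."""
    score = alert.asset_criticality
    if alert.enrichment.get("known_bad_ip"):
        score += 3
    if alert.event_type == "lateral_movement":
        score += 2
    if score >= 7:
        return "escalate_to_analyst"   # humans handle investigation
    if score >= 4:
        return "auto_contain"          # low-risk automated action
    return "log_only"
```

The division of labour mirrors the operating model above: the machine enriches and filters every alert, and only the highest-scoring events consume an analyst's attention.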
Does AI Understand Context? The Human–Machine Partnership
A key question in the AI security debate is whether machines can truly “understand context.” DSouza offered a balanced perspective.
AI is highly effective at processing large volumes of data, identifying statistical anomalies, and executing tasks without fatigue or emotional bias. In other words: AI excels at consistency and speed.
However, it remains limited in its ability to interpret nuance.
“Human analysts have intuition, gut feeling, and a lot more context than AI,” DSouza said. “We need a balance between AI and humans that works like a partnership.”
This view reflects the emerging industry consensus:
Humans bring:
- Strategic judgment
- Understanding of the business context
- Interpretive reasoning
- Ethical oversight
- Experience-driven intuition
AI brings:
- Scale
- Speed
- Precision
- Pattern recognition
- Consistency under pressure
The future of defence is neither human-only nor automation-only. It is augmented decision-making, where machines handle volume and humans handle meaning.
The Speed War: AI Accelerates Both Attack and Defence
Attackers have always tried to be faster than defenders, but AI is redefining the meaning of “fast.”
On the attacker side:
- AI-generated phishing can be produced instantly at scale.
- Reconnaissance can be fully automated.
- Malware can evolve on the fly.
- LLMs can help attackers evaluate stolen data in minutes instead of days.
On the defender side:
- AI systems detect anomalies in seconds.
- Playbooks isolate compromised endpoints immediately.
- AI agents elevate only the most critical signals to analysts.
- Incident response times are shrinking dramatically.
DSouza summarized this dynamic clearly: “APTs are using AI to automate their tasks… but we also use AI to detect them.”
The result is a machine-speed contest, and the organizations that automate effectively will be the ones that keep pace.
Bridging IT, OT, and Security Through AI Integration
Another area where AI is proving transformative is cross-team collaboration. Historically, IT, OT, and security teams used different tools, maintained separate silos, and communicated through lengthy manual processes.
But AI-enabled platforms are collapsing those barriers.
DSouza highlighted Microsoft 365 as a real-world example. Its integrations have evolved to the point where different business units can access shared dashboards, unify logs, and accelerate collaboration, thereby creating a single pane of glass across environments.
This visibility is essential in organizations where:
- OT systems are increasingly digitized.
- Cloud applications multiply rapidly.
- Remote work expands the attack surface.
- Shadow IT grows outside formal governance.
AI not only detects threats; it also aligns teams around the same information and accelerates shared understanding.
Risk, Governance, and the Imperative of Responsible Adoption
With AI becoming integral to SOC operations, governance is becoming increasingly complex. DSouza emphasized the importance of legal compliance and responsible data handling.
“From a risk and governance perspective,” he explained, “we have laws we have to comply with. We need to ensure AI access does not involve sensitive or classified information.”
This aligns with guidance from CISA, NIST, and other standards bodies calling for:
- Rigorous data sanitization before information is fed into AI systems.
- Controlled access to AI tools.
- Clear documentation of model usage.
- Continuous monitoring of AI-generated outputs.
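The first of these controls, data sanitization, can be illustrated with a minimal redaction pass applied to log lines before they reach an external AI service. The patterns below are simplistic examples chosen for clarity; production sanitization would rely on vetted PII and secret detectors plus an allow-list of fields.

```python
import re

# Illustrative redaction rules: strip IPs, email addresses, and credential
# assignments before a log line leaves the organization's boundary.
REDACTIONS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "[IP]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=[SECRET]"),
]

def sanitize(line: str) -> str:
    """Apply each redaction rule in order and return the cleaned line."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line
```

Sanitization of this kind sits naturally at the boundary between internal telemetry and any AI tool, satisfying the "controlled access" and "data sanitization" requirements in one enforceable step.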
As organizations experiment with generative AI for threat analysis, log review, or decision support, governance frameworks must evolve alongside technology.
Responsible AI adoption is not optional; it is a cornerstone of cyber resilience.
Looking Ahead: AI and the Future Role of the CISO
DSouza offered simple but powerful advice when asked what the next three to five years will look like.
“Adopt AI. This is the new age, and we have to be with the times.”
Future CISOs will be expected to:
- Treat AI as a foundational defence capability.
- Maintain constant awareness of evolving compliance requirements.
- Build SOCs that combine human expertise with machine-scale automation.
- Invest in architecture and cultural change, not just point solutions.
- Develop workforce strategies that blend technical skill with governance literacy.
AI will not replace the role of the CISO, but it will absolutely redefine it. Those who embrace AI as a strategic asset will build stronger, more resilient programs. Those who delay may find themselves outpaced by adversaries who do not.
Conclusion: When the Adversary Adapts, We Must Adapt Faster
AI is reshaping cybersecurity in real time. Attackers are using it to scale operations, automate reconnaissance, and develop new avenues of exploitation. Defenders are using it to detect threats faster, respond more intelligently, and bring coherence to complex environments.
The message is clear:
AI is neither inherently dangerous nor inherently protective—it is powerful. And the outcome depends on how quickly and responsibly organizations adapt.
Those who pair AI with strong governance, continuous oversight, and skilled human judgment will build the adaptive, resilient security programs the future demands. Those who hesitate risk falling behind not only human adversaries but increasingly autonomous machine-driven ones.