In security operations, we like clean diagrams: funnels, swim lanes, playbooks. The reality is messier. And now that the SOC is filling up with machine learning models, GenAI copilots and automated response playbooks, it’s tempting to believe we’re finally on the brink of a fully “hands-off” SOC.
We’re not... and we shouldn’t be!
AI is transforming how Security Operations Centres work, but it hasn’t changed one fundamental truth: the hardest problems in cyber are still human problems. Judgement, context, ethics and trade-offs live with people, not with models. The future isn’t a “no-ops” SOC. It’s an AI-accelerated SOC where human judgement is deliberately put in charge.
This article explores why, in the age of AI SOCs, human judgement still beats pure automation and how to design your operations so the tech amplifies your teams instead of sidelining them.
The pressure cooker that created AI-driven SOCs
The modern SOC is drowning in signals:
- Exploding attack surface (cloud, OT, SaaS, remote work)
- A constant stream of vendor alerts and telemetry
- Increasingly sophisticated, faster adversaries, including those exploiting AI themselves
The cost of getting it wrong keeps rising. The average cost of a breach worldwide was recently reported at USD 4.88 million, a 10% jump and the largest year-over-year increase since the pandemic.
At the same time, talent is scarce and teams are burnt out. That mix has made AI and automation incredibly attractive:
- Use machine learning to correlate alerts and reduce noise
- Use behaviour analytics to spot anomalies humans would miss
- Use SOAR and scripted playbooks to respond in seconds, not hours
This is the promise of the AI-enabled SOC: higher efficiency, lower mean time to detect (MTTD) and mean time to respond (MTTR), and the ability to monitor more assets with smaller teams. Recent commentary on cybersecurity automation highlights exactly these benefits: speed, scalability and continuous monitoring. The same commentary warns, though, that human oversight is essential to adjust and correct automated decisions.
So if AI is so powerful, why isn’t it enough on its own?
Breaches are still mostly about people
For all our investment in tools, the data is blunt: humans remain at the centre of breaches.
The 2024 Verizon Data Breach Investigations Report (DBIR) found that the “human element”, including social engineering, errors and misuse, was involved in around two-thirds of breaches. That’s everything from a user clicking a phishing link, to misdirected emails, to misconfigurations in cloud services.
Similarly, phishing, business email compromise and credential theft are seen as persistent, high-impact threats, even as more technically sophisticated attacks continue to grow.
Why does that matter for AI SOCs?
Because most of what actually goes wrong is not a purely technical anomaly. It’s a messy mix of:
- Human behaviour (fatigue, urgency, trust, distraction)
- Business context (who has access to what, and why)
- Organisational trade-offs (productivity vs security friction)
A model may flag “user logged in from new device, unusual time, new geo”. Whether that’s a breach or a travelling executive trying to fix a critical issue at midnight is a judgement call. And that’s exactly where automation starts to hit its limits.
What AI and automation are genuinely good at
To be clear: AI is not a gimmick in the SOC. Used well, it’s a force multiplier.
1. Volume and velocity
AI models can process log volumes no human team could ever review: cloud telemetry, endpoint events, network flows, identity activity and more. They can correlate patterns across time and systems to surface subtle, low-frequency indicators of compromise.
2. Pattern recognition and anomaly detection
Statistical models and ML can learn “normal” baselines for users, hosts and applications, then detect deviations that would be invisible to simple rules. Combined with frameworks like MITRE ATT&CK, they can flag behaviour that looks like lateral movement or privilege escalation, not just “a weird login”.
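To make the idea concrete, here is a minimal sketch of the baseline-and-deviation logic, reduced to a single feature (login hour). The threshold, the sample data and the ATT&CK technique reference are illustrative assumptions, not how any particular UEBA product works:

```python
# Minimal sketch: per-user login-hour baseline with a simple z-score check.
# The threshold, sample data and ATT&CK tag are illustrative assumptions,
# not any vendor's detection logic; midnight wrap-around is ignored for brevity.
from statistics import mean, stdev

def is_anomalous_login(history_hours: list[int], new_hour: int,
                       z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour-of-day deviates strongly from the user's baseline."""
    if len(history_hours) < 10:   # too little history to trust a baseline
        return False
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:                # perfectly regular user: any change stands out
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold

# A user who normally logs in between 08:00 and 10:00 suddenly appears at 03:00.
baseline = [8, 9, 9, 8, 10, 9, 8, 9, 10, 9, 8, 9]
if is_anomalous_login(baseline, new_hour=3):
    print("Unusual login hour; review for possible credential misuse (e.g. ATT&CK T1078)")
```

Real behaviour analytics model far richer features (device, geography, peer group, sequence of actions), but the principle is the same: learn what normal looks like, then score deviations.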
3. Consistent execution of playbooks
Once a response path is well understood (reset a password, isolate a host, block an IP), automation can execute it reliably and repeatedly: no fatigue, no skipped steps, no delays.
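As a rough sketch of what that consistency looks like in practice, the snippet below strings the usual containment steps into one function. The isolate_host, reset_password and block_ip helpers are hypothetical placeholders for whatever EDR, identity and firewall APIs a real SOAR platform would call:

```python
# Minimal containment-playbook sketch. isolate_host(), reset_password() and
# block_ip() are hypothetical stand-ins for real EDR / IdP / firewall API calls.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("playbook")

def isolate_host(host_id: str) -> None:
    log.info("Isolating host %s via EDR API (placeholder)", host_id)

def reset_password(user_id: str) -> None:
    log.info("Forcing credential reset for %s via IdP API (placeholder)", user_id)

def block_ip(ip: str) -> None:
    log.info("Blocking %s at the perimeter (placeholder)", ip)

def contain_compromised_account(alert: dict) -> None:
    """Run the same steps, in the same order, every time the playbook fires."""
    isolate_host(alert["host_id"])
    reset_password(alert["user_id"])
    block_ip(alert["source_ip"])
    log.info("Containment complete for alert %s", alert["alert_id"])

contain_compromised_account({
    "alert_id": "INC-1042",
    "host_id": "laptop-314",
    "user_id": "j.doe",
    "source_ip": "203.0.113.7",
})
```

Because the steps are codified, the tenth execution at 3 a.m. looks exactly like the first one at 3 p.m.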
4. 24/7 coverage and enrichment
AI-driven enrichment (geo-IP lookups, reputation checks, contextualization) happens in seconds, not minutes. Combined with automated triage, this can rapidly shrink MTTD and MTTR, which is critical given that many modern ransomware and intrusion campaigns can escalate within hours.
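A stripped-down version of that enrichment step might look like the sketch below. The geo and reputation lookups are mocked with hard-coded data; a production pipeline would call geo-IP and threat-intelligence services instead:

```python
# Minimal enrichment sketch. geo_lookup() and reputation_lookup() are mocked;
# a real pipeline would query geo-IP and threat-intelligence services.
from dataclasses import dataclass

@dataclass
class EnrichedAlert:
    alert_id: str
    source_ip: str
    country: str
    ip_reputation: str
    priority: str

def geo_lookup(ip: str) -> str:
    return {"203.0.113.7": "NL"}.get(ip, "unknown")       # mocked geo-IP data

def reputation_lookup(ip: str) -> str:
    return {"203.0.113.7": "known-bad"}.get(ip, "clean")  # mocked threat intel

def enrich(alert_id: str, source_ip: str) -> EnrichedAlert:
    country = geo_lookup(source_ip)
    rep = reputation_lookup(source_ip)
    # Simple triage rule: a known-bad reputation bumps the alert straight to "high".
    priority = "high" if rep == "known-bad" else "low"
    return EnrichedAlert(alert_id, source_ip, country, rep, priority)

print(enrich("INC-1042", "203.0.113.7"))
```

The value is less in any single lookup than in having every alert reach the analyst already carrying this context.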
All of this is powerful. But it’s only half the story.
Where automation breaks: context, novelty and adversaries
AI in the SOC is still bounded by its inputs and assumptions. There are at least four places where over-relying on automation becomes dangerous.
1. Lack of business context
Models don’t inherently know your organisation’s real-world priorities.
- Is this file server a test environment or the system that runs payroll?
- Is this OT network segment controlling a lab, or a critical production line that can’t be taken down during business hours?
Without this context, automated “risk scores” can be wildly misaligned with what actually matters. Human analysts bring lived knowledge of the business: who screams when what breaks, which assets are “crown jewels”, and which teams can tolerate disruption.
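One way to see the gap: run the same model score through a crude, hand-maintained asset-criticality register. The register and the weighting below are purely illustrative assumptions; the real point is that someone who knows the business has to keep that register honest:

```python
# Illustrative only: the asset register and weights are made up. The hard part
# is not this arithmetic but keeping the register accurate, which is human work.
ASSET_CRITICALITY = {
    "fileserver-test-01": 0.2,   # throwaway test environment
    "fileserver-payroll": 1.0,   # runs payroll; an outage is a business incident
}

def business_adjusted_risk(model_score: float, asset: str) -> float:
    """Scale a model's 0-1 risk score by how much the business cares about the asset."""
    return model_score * ASSET_CRITICALITY.get(asset, 0.5)  # unknown assets get a middling weight

# The same 0.9 model score lands very differently once context is applied.
for asset in ("fileserver-test-01", "fileserver-payroll"):
    print(asset, round(business_adjusted_risk(0.9, asset), 2))
```

The arithmetic is trivial. Knowing that the second file server actually runs payroll is not, and that knowledge lives with people.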
2. Novel attacks and weak signals
Most AI detection relies on patterns learned from past data. Attacks that deliberately avoid known patterns or blend in with expected behaviour may go undetected.
This is particularly acute with:
- Slow-burn intrusions where attackers carefully throttle activity
- Abuse of legitimate tools (living off the land)
- Cross-domain campaigns that span IT, OT and cloud in subtle ways
Human threat hunters excel at forming hypotheses, chasing hunches and connecting dots across domains in ways that are still hard to encode in models.
3. Adversarial behaviour and AI misuse
Attackers also use AI. Europol and others have warned about AI-assisted malware, deepfake-based social engineering and large-scale, highly targeted phishing.
In parallel, security vendors are shipping more AI-enabled tools, which opens new attack surfaces:
- Prompt injection or “model hijacking” against SOC copilots
- Data poisoning of training sets via compromised telemetry
- Abuse of automated response interfaces to block legitimate traffic or disable controls
Determining when a model’s output might be manipulated or untrustworthy is a human critical-thinking task, not something we can fully automate from inside the same system.
4. Ethics, legality and accountability
When a decision has real-world impact, e.g. blocking a hospital system, shutting down an OT process or reporting a suspected insider threat, you run into legal and ethical questions.
Frameworks from NIST and others emphasize that automation should support, not replace, risk-informed decision-making by accountable humans.
The regulator, the board and the public don’t want to hear “the AI decided.” They want to know which accountable person made the call, based on what information, and how that decision aligns with policy.
What humans still do better than any model
So what does human judgement actually look like in an AI SOC?
1. Interpreting risk in real-world context
Analysts, incident commanders and security leaders:
- Understand business processes and political realities
- Weigh trade-offs between availability, confidentiality and safety
- Know which executives will accept which risks and which won’t
When an AI model scores two incidents as “high”, a human decides:
“This one hits a low-priority lab. That one touches citizen data or patient records. We’re dropping everything for the second.”
2. Making sense of incomplete, conflicting data
SOC work is often about ambiguity:
- Logs are missing
- Telemetry conflicts
- The attacker is actively trying to mislead you
Humans are good at building and revising mental models, holding multiple hypotheses in mind and updating them as new evidence arrives. This is core to effective incident handling and threat hunting.
3. Storytelling and influence
Defence doesn’t stop at detection. You have to convince people to act:
- Explaining to the CEO why a painful change is necessary
- Persuading line managers to enforce MFA or access reviews
- Conducting post-incident reviews that lead to real improvements
No model can build trust, read the room, or navigate organisational politics the way a human can. Yet these “soft skills” are often what determine whether your security recommendations actually stick.
4. Creativity and adversarial thinking
Attackers are creative. Defenders must be too.
Human analysts can:
- Imagine how an adversary might chain small misconfigurations into a big compromise
- Run tabletop exercises where they invent plausible but unseen attack paths
- Question whether a long-standing assumption still holds in the current threat landscape
AI can help simulate scenarios, but it’s humans who decide which scenarios matter and what to change in the organisation as a result.
Designing a human-in-the-loop AI SOC
The goal is not to reduce human involvement to near zero. It’s to move humans to the parts of the workflow where their judgement has the most impact.
Evolving the SOC skill set for the AI era
If judgement is the differentiator, we need to hire and grow people accordingly.
1. From tool operators to investigators
Rather than focusing on which SIEM query language someone knows, emphasise:
- Hypothesis-driven investigation
- Ability to correlate clues across systems and time
- Understanding of attacker tradecraft and kill chains
Tools change, but investigative thinking scales across them.
2. Data literacy and model scepticism
Analysts don’t need to be data scientists, but they do need to:
- Understand what their AI-powered tools are (and are not) doing
- Spot when outputs look suspicious or too good to be true
- Ask good questions about training data, blind spots and bias
This helps prevent “automation bias”: the tendency to over-trust system recommendations even when they conflict with other evidence.
3. Communication and collaboration
As breaches become more disruptive and expensive, SOC teams are pulled deeper into conversations with business leadership, regulators and partners.
Skills that matter:
- Writing clear, non-technical incident reports
- Presenting risk and options succinctly
- Collaborating with IT, legal, HR and operations during crises
These are not “nice-to-haves” in an AI SOC. They are critical to turning detection into meaningful organisational change.
Governance: keeping humans accountable
Finally, there’s governance. Many frameworks and regulators now expect:
- Documented roles and responsibilities for automated vs human decisions
- Explainability for high-impact security actions
- Continuous monitoring not just of systems, but of the controls themselves
This is where security operations, risk management and compliance must align:
- Define which classes of actions may be fully automated and under what constraints.
- Ensure there is always a human who can halt or override automation.
- Log not just what was done, but why, including model outputs and human rationale (a minimal record sketch follows this list).
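As a minimal sketch of that last point, a decision record could capture the model’s output, the action taken, the accountable human and their rationale in one place. The field names below are assumptions for illustration, not a prescribed schema:

```python
# Minimal decision-record sketch. Field names are illustrative, not a standard schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ResponseDecision:
    incident_id: str
    model_output: dict       # what the AI recommended, and its score
    action_taken: str        # what was actually done
    automated: bool          # executed by a playbook or by a person?
    approved_by: str         # the accountable human (or the policy that pre-authorised it)
    rationale: str           # why, in the decision-maker's own words
    timestamp: str

record = ResponseDecision(
    incident_id="INC-1042",
    model_output={"recommendation": "isolate host", "score": 0.91},
    action_taken="host isolated, password reset",
    automated=False,
    approved_by="j.smith (incident commander on duty)",
    rationale="Payroll server involved; isolation accepted despite month-end processing.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Records like this are what let you answer the regulator’s question above: who made the call, based on what, and why.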
Get this right, and AI becomes an asset in your audits and regulatory engagements, not a new source of opaque risk.
In conclusion: judgement on top, automation underneath
AI SOCs are here to stay, and that’s good news. Used well, they help close the gap between the scale of modern threats and the finite capacity of human teams. They reduce toil, improve detection and make the job more sustainable.
But efficiency without judgement is just faster failure.
The human factor isn’t a liability to be engineered out. It’s the decisive layer that interprets, prioritizes and takes responsibility. The winning SOCs in the age of AI will be the ones that:
- Automate aggressively at the bottom of the stack
- Invest heavily in human skills at the top
- Make human oversight a feature, not an afterthought
AI can spot the smoke. It still takes people to decide which fire to fight, how to fight it, and what to rebuild afterwards.

