Will AI Deepen the Cyber-Security Skills Gap?
Artificial intelligence is everywhere at today’s cyber‑security conferences. New platforms (and refreshed or rebranded legacy tools) proudly tout machine‑learning engines, large language models (LLMs) or “generative AI.” The pitch is simple: AI promises to make security operations not only more accurate but also more efficient.
There is likely some truth in the hype. Well‑trained AI systems can triage alerts around the clock, correlate threat intelligence at machine speed and surface suspicious activity that would otherwise drown in noise. Yet one claim that keeps resurfacing deserves a harder look: the idea that AI can replace entry‑level analysts altogether.
Why Junior Analysts Still Matter
On paper, swapping humans for algorithms looks irresistible. AI never needs annual leave, a pay review or a promotion; licences may be pricey, but the software never burns out. However, the push to automate first‑line investigation overlooks a crucial reality:
Junior analyst roles are the breeding ground for tomorrow’s senior defenders.
Most seasoned professionals cut their teeth on the “grunt work” now earmarked for AI: combing log files, validating false positives, rebuilding phishing emails, and learning, through repetition, how small misconfigurations or social‑engineering hooks lead to major incidents. Strip away that apprenticeship and the industry risks starving itself of future expertise.
A Looming Two-Tier Workforce
Today, headlines about a cyber‑skills shortage are quieter than they were a few years ago, partly because entry‑level supply has caught up and partly because tooling has improved. But look ahead five to ten years:
Veteran practitioners will continue to move into leadership or architecture roles.
Fewer newcomers will have gained hands‑on experience.
Organisations will find a yawning gap between basic SOC monitoring (now automated) and the complex, contextual judgment calls that only experienced humans can make.
That mid‑senior shortage could be far harder, and more expensive, to close than today’s entry‑level gap.
Training the Trainers — Human and Machine
There is a second problem. Generative models learn by ingesting historical investigations and analyst notes. If there are fewer human investigations, the data available to train or fine‑tune future models shrinks too. In other words, we risk a feedback loop: fewer analysts → poorer data → weaker AI → greater dependence on the limited human experts who remain.
A Balanced Approach: Augmentation, Not Replacement
The solution is not to shun AI; it is to deploy it as a force multiplier while preserving a human talent pipeline.
Daytime co‑analysis: Pair junior analysts with AI copilots that suggest next steps, draft incident reports and flag potential blind spots. Humans still make the final call and learn from the machine’s recommendations.
Out‑of‑hours coverage: Allow AI to run unattended during nights or weekends for triage and containment, handing a curated queue of cases to humans each morning.
Structured up‑skilling: Rotate analysts through threat‑hunting, purple‑team exercises and tool‑tuning so their work is not limited to alert triage, a task AI excels at automating.
Metrics that value mentoring: Measure SOC performance not only by mean‑time‑to‑detect but also by how many junior staff progress to higher‑skill roles each year.
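As a rough illustration of what such a blended metric might look like, here is a minimal Python sketch. The data shapes and function names are invented for the example and not drawn from any particular SOC platform; it simply pairs mean‑time‑to‑detect with the share of junior analysts who progressed during the year.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical incident record: when malicious activity began vs. when it was detected.
@dataclass
class Incident:
    occurred_at: datetime
    detected_at: datetime

# Hypothetical staffing record: did a junior analyst move into a higher-skill role this year?
@dataclass
class JuniorAnalyst:
    name: str
    promoted_this_year: bool

def mean_time_to_detect_hours(incidents: list[Incident]) -> float:
    """Average gap between compromise and detection, in hours."""
    gaps = [(i.detected_at - i.occurred_at).total_seconds() / 3600 for i in incidents]
    return mean(gaps) if gaps else 0.0

def junior_progression_rate(analysts: list[JuniorAnalyst]) -> float:
    """Share of junior analysts who progressed to higher-skill roles this year."""
    if not analysts:
        return 0.0
    return sum(a.promoted_this_year for a in analysts) / len(analysts)

if __name__ == "__main__":
    incidents = [
        Incident(datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 11, 30)),
        Incident(datetime(2024, 3, 5, 22, 0), datetime(2024, 3, 6, 1, 0)),
    ]
    team = [
        JuniorAnalyst("analyst_a", promoted_this_year=True),
        JuniorAnalyst("analyst_b", promoted_this_year=False),
    ]
    print(f"Mean time to detect: {mean_time_to_detect_hours(incidents):.1f} hours")
    print(f"Junior progression rate: {junior_progression_rate(team):.0%}")
```

Reporting the two numbers side by side keeps detection speed visible while making it obvious when a SOC is optimising purely for automation at the expense of developing its people.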
Yes, this hybrid model diminishes some of the headline cost savings promised by pure automation. But it preserves organisational resilience, sustains a healthy talent pool and produces better‑trained AI systems over time.
Final Thoughts
AI is transforming cyber defence, often for the better. Yet if we chase efficiency without regard to workforce development, the industry may create the very skills crisis it aims to solve. Augmenting, rather than eliminating, junior analysts is the pragmatic path: one that keeps both our people and our algorithms learning long into the future.