Thoughts on How AI Will Shape Cyber Security
This article explores how LLMs will reshape cyber security practices and examines the potential implications for security engineering workforce demand.
Cyber security has always been a reactive field. We’re not the rockstars; we’re the bodyguards protecting them. We arrive after the scene is already set, and nobody involves us in its creation. We have to adapt to each new environment and invent new methods when needed.
When Active Directory dominated corporate environments, securing it was a hot topic. Now, many companies have ditched their physical networks and shifted to cloud-native ecosystems with SSO platforms like Okta. As the infrastructure changed, so did the attack surface, and so did we. We invented a new profession called “Cloud Security Engineer”.
The rise of agile software development led to continuous integration and daily deployments. Continuous delivery made “freeze the repo and call the pentest team” impossible. Traditional security testing just couldn’t keep up. That’s how DevSecOps was born.
What Hasn’t Changed?
Despite all the technological shifts, one thing hasn’t changed: the heavy reliance on human expertise. The vulnerability scanning tools we use, like static code analyzers and automated web scanners, are almost the same as they were ten years ago. Tooling never made the leap from “helpful assistant” to “trusted authority”. These tools still generate noise, miss subtle bugs, and flag false positives. Their output needs interpretation, correlation, and judgment, and that judgment has always come from humans.
What Is Changing Now?
What is changing now is that LLMs are getting good at “everything”, and that everything includes security too. Previously, developing a new security product was a time-consuming and resource-intensive process that could take years. Now, LLMs are becoming better security products without anyone noticing.
Less Vulnerable Code
Take static code analysis products like Checkmarx and Snyk as an example. The companies behind them employ large teams and pour significant resources into development, yet their products are still far from perfect.
But now, advanced models like OpenAI’s o3 or Gemini 2.5 Pro are very good at finding vulnerabilities in code, even though they don’t advertise themselves as “security tools.” Without even trying, they’ve become better at security than many tools made by security companies.
Consider this example: most static code analysis tools can’t detect the risk in it, but Gemini was able to identify it: https://gemini.google.com/share/24e94c0e306c
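The shared example can’t be pasted here, but to illustrate the category, here is a minimal, entirely hypothetical Flask endpoint with a broken access control (IDOR) flaw. There is no tainted data flow for a pattern-based scanner to match on, yet a model reading the code can notice the missing ownership check:

```python
# Hypothetical endpoint illustrating the kind of logic flaw that
# pattern-based scanners typically miss: the record is fetched by ID,
# but ownership is never verified (broken access control / IDOR).
from flask import Flask, jsonify, session

app = Flask(__name__)

INVOICES = {
    1: {"owner_id": 42, "amount": 1999},
    2: {"owner_id": 7, "amount": 450},
}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return jsonify({"error": "not found"}), 404
    # BUG: any authenticated user can read any invoice. There is no
    # dangerous sink or tainted input here, so taint-based analyzers
    # stay silent. The fix is an ownership check, e.g.:
    # if invoice["owner_id"] != session.get("user_id"): return "", 403
    return jsonify(invoice)
```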
These models still choke on giant monolith projects due to context-length limitations, but that will likely be solved in the near future. When it is, companies won’t purchase security tools from security companies. Instead, they’ll turn to large monopolies like Microsoft and Google and pay for LLM models that can do “everything”: write code, find security flaws in that code, answer questions, create documentation, review code changes, and handle any other task you can think of. In that world, LLM providers could well become the largest security providers too.
We are almost certain that developers will use LLMs to generate code in the future. When code is produced by security-conscious models and simultaneously reviewed by another agent in real time, there’s massive potential to reduce application-layer vulnerabilities before they’re even committed.
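As a rough sketch of how that second agent might hook in, here is a hypothetical pre-commit gate. The ask_model() function is a stand-in for whatever LLM API you use, and the prompt and blocking policy are illustrative only:

```python
# A minimal sketch of the "second agent reviews code before commit"
# idea, framed as a pre-commit hook. Not a real product's API.
import subprocess
import sys

def staged_diff() -> str:
    # Collect exactly what is about to be committed.
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def ask_model(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider of choice here")

def main() -> int:
    diff = staged_diff()
    if not diff:
        return 0
    verdict = ask_model(
        "You are a security reviewer. List any vulnerabilities "
        "introduced by this diff, or reply OK if there are none:\n\n" + diff
    )
    if verdict.strip() != "OK":
        print(verdict)
        return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```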
Less Demand for Penetration Testers
Fewer vulnerabilities mean less demand for human-led application penetration tests. Like mainframe audits today, deep manual testing will mostly be reserved for critical infrastructure and high-finance systems; most companies won’t consider it at all. And those companies won’t have specialized security engineers, either. Developers (or tech generalists, as they may be called in the future) will handle security-related tasks with the help of LLMs.
Diminishing Security Positions
Any security job that can be broken down into a decision tree is at risk. Take cloud security as an example. There are many factors to consider, such as exposed buckets, privilege escalation paths, and subnet misconfigurations. But these are all decision-tree tasks that an AI agent can check and report on continuously. For instance, an AI agent could report: “This bucket contains sensitive data and was exposed to the public just two minutes ago.” You don’t need to be a cloud security expert to understand that output.
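As a sketch of one branch of that decision tree, here is roughly what the “exposed bucket” check could look like with boto3, assuming AWS credentials are configured. A real check would also inspect bucket policies and Public Access Block settings, not just ACLs:

```python
# A minimal sketch: flag S3 buckets whose ACL grants access to AllUsers.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable_buckets() -> list[str]:
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            grants = s3.get_bucket_acl(Bucket=name)["Grants"]
        except ClientError:
            continue  # no permission to read this bucket's ACL
        if any(g["Grantee"].get("URI") == PUBLIC_GRANTEE for g in grants):
            exposed.append(name)
    return exposed

if __name__ == "__main__":
    for name in publicly_readable_buckets():
        print(f"bucket exposed to the public: {name}")
```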
The same goes for sending phishing e-mails for awareness and assigning training to the people who fail. We won’t need humans for this job anymore.
Reduced Human Need for Vulnerability Management
AI agents will make vulnerability management much easier. Think about all the security tools that find vulnerabilities in your systems: infrastructure scanners like Nessus, web security scanners like Invicti, open source web scanners like Nuclei, code analyzers, secret scanners, container scanners, dependency checkers, and so on.
These tools often make vulnerabilities sound worse than they really are. Right now, security engineers need to figure out which findings are real problems and which are not. Is this a false positive? Is it a true positive that isn’t exploitable? Is it exploitable but unlikely to be hit? Or does it need immediate attention?
A capable LLM can handle this task well. It can produce accurate risk ratings by checking whether systems are accessible from the internet, reading vulnerability descriptions, and understanding how they could be exploited. A human engineer can then review the AI’s work and take action when needed. This means you could potentially manage your vulnerabilities with a single person instead of a team of five.
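A sketch of that triage loop might look like the following. The finding schema and the classify() call are placeholders, not any real product’s API:

```python
# Enrich each raw finding with exposure context, ask a model for a
# verdict, and queue only what needs a human. Illustrative only.
import json

def build_prompt(finding: dict, internet_facing: bool) -> str:
    return (
        "Triage this vulnerability finding. Answer with one of: "
        "FALSE_POSITIVE, NOT_EXPLOITABLE, LOW_RISK, URGENT, "
        "plus a one-sentence justification.\n"
        f"Internet-facing asset: {internet_facing}\n"
        f"Finding: {json.dumps(finding)}"
    )

def classify(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

def triage(findings, exposure_map):
    for f in findings:
        verdict = classify(build_prompt(f, exposure_map.get(f["host"], False)))
        if verdict.startswith(("URGENT", "LOW_RISK")):
            yield f, verdict  # only real issues reach the human reviewer
```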
AI workflow automation systems like n8n can play a huge role here. In fact, it might even be possible to build such a system today.
The End of Red Team, Blue Team, and the Rise of the Generalist Security Engineer
I believe the line between red team and blue team roles will become less clear in the future. Companies will likely have small security teams, and these teams will be expected to understand both red and blue team tasks at a high level.
Take me as an example: I’ve never worked with blue team tools. I don’t know how to manage CrowdStrike agents on endpoints or how to analyze logs in Splunk, whereas a good blue team security engineer knows their UIs, where to click, and which queries to run.
But MCP (Model Context Protocol) servers are changing this. They are removing the need for traditional user interfaces. When an AI agent is connected to Splunk via Splunk’s MCP server, I no longer need to use Splunk’s interface; I just need to know what I want. For example, I can say: “Analyze the logs and tell me about failed login attempts from Asian countries between these dates.” The agent can do this without me ever opening Splunk. Or I can ask: “Find the suspicious IP address in Splunk and check CrowdStrike to see if any processes communicated with it.” I don’t need to be an expert in CrowdStrike.
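Under the hood, the agent is just translating intent into API calls. As a rough sketch of what that first request might compile down to, here is an equivalent query run directly through the splunk-sdk Python package; the host, credentials, index, and field names are all assumptions about the environment:

```python
# A sketch of the SPL search an agent might generate and run via
# Splunk's REST API. Index and field names are hypothetical.
import json
import splunklib.client as client

SPL = (
    'search index=auth action=failure earliest=-30d@d latest=now '
    '| iplocation src_ip '
    '| search Country IN ("China", "India", "Japan", "South Korea") '
    '| stats count by src_ip, Country'
)

service = client.connect(
    host="splunk.example.com", port=8089,
    username="agent", password="change-me",
)
stream = service.jobs.oneshot(SPL, output_mode="json")
for row in json.loads(stream.read())["results"]:
    print(row["src_ip"], row["Country"], row["count"])
```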
Do you think that’s too high-level? We’re already too high-level by the standards of a security engineer from the 90s. The person who created Nmap knows how to craft raw TCP packets; I don’t know how to do that, and I don’t need to. It’s too low-level for me; I’m interested in what’s inside a Layer 7 HTTP request. But even this might become too technical in the future. People might not need to understand HTTP requests at all. The tools and AI will handle the low-level details, while security professionals focus on higher-level strategy and decision-making.
That’s why versatility is becoming more important than specialization. The old advice “pick one area and go deep” is giving way to “be a jack of all trades.” You’ll need enough knowledge to ask the right questions.
A Surge in Cybercrime Is Coming Too
Of course, the adversary gets the same upgrade. AI agents may allow threat actors to run many operations in parallel. Even worse, the window from “patch released” to “exploit in the wild” could shrink from weeks to hours. An AI agent can automatically read patch notes for popular software, write exploit code, scan the internet for vulnerable targets, attack them, steal valuable data, or install ransomware. This means that if you host vulnerable software, you must patch it the same day the fix comes out. Waiting until tomorrow might be too late.
Final Thoughts
I believe cyber security will still be important, but it will become a more niche profession. I’m afraid there will be less demand for security engineers in the future, but the job won't disappear completely. Security engineers will become like orchestra conductors instead of individual musicians. They won't need to play every instrument perfectly, but they'll need to know how to direct all the different tools and AI agents to work together effectively.