AI can help reduce the need for entry-level cyber professionals. But can it bridge the broader cybersecurity workforce gap?
Cyberseek estimates that 715,000 people are currently employed in cybersecurity in the U.S., with 314,000 positions unfilled. This is not a recent trend, nor is it just a U.S. issue. From 2013 to 2021, cybersecurity openings grew by 350% to 3.5 million worldwide, and that figure has held steady since, according to the International Information System Security Certification Consortium (ISC2) Cybersecurity Workforce Study.
AI offers an exciting opportunity to bridge this longstanding workforce gap in multiple ways.
Let Agents Do Grunt Work Across Cyber Career Levels
There’s a lot of focus on how AI agents and agentic approaches could replace the entry-level cybersecurity workforce. However, this will only put a small dent in the workforce gap.
Entry-level jobs accounted for the smallest number of openings, roughly 10,000, according to Cyberseek's career pathway map. Of the 314,000 unfilled openings, most were for mid- to advanced-level jobs.
AI has shown great promise in automating analysis and executing tasks for mid- to upper-level careers as well. However, these roles carry higher stakes and demand a higher level of trust before their tasks can be handed off. In these cases, a tiered, strategic approach to implementing AI is likely the best path to wide-scale adoption.
What are AI Agents?
AI agents are models trained to analyze and act on a particular task within a narrow scope.
Earlier this year, Microsoft rolled out 11 AI agents for its Security Copilot based on customer feedback, including one for phishing email detection and one for breach notification. The phishing detection agent has no knowledge of breaches and cannot evaluate or act on them; likewise, the breach notification agent cannot detect or act on phishing emails.
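As a loose illustration of that narrow scoping (a sketch, not Microsoft's actual implementation; the names `PhishingTriageAgent` and `Verdict` are hypothetical), a single-purpose agent refuses anything outside its task:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str        # "phishing" or "benign"
    confidence: float

class PhishingTriageAgent:
    """Hypothetical single-purpose agent: it only evaluates emails.

    Anything outside its scope (e.g., a breach) is refused rather than
    guessed at, which is the narrow scoping described above.
    """
    SCOPE = "email"

    def handle(self, task_type: str, payload: str) -> Verdict:
        if task_type != self.SCOPE:
            raise ValueError(f"Out of scope: this agent only handles '{self.SCOPE}' tasks")
        return self._classify(payload)

    def _classify(self, email_text: str) -> Verdict:
        # Stand-in for a trained model; here, a crude keyword heuristic.
        hit = any(kw in email_text.lower() for kw in ("verify your account", "urgent wire"))
        return Verdict("phishing" if hit else "benign", 0.9 if hit else 0.6)
```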
Work is rapidly being done on agentic workflows, in which these agents have knowledge of each other and work together. But that requires an even greater level of trust and may take more time to enter the mainstream marketplace.
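One common pattern for this, sketched here under assumed names (the `Orchestrator` and `BreachNotificationAgent` below are illustrative, not any vendor's product), is a router that dispatches each task to the matching specialist:

```python
class BreachNotificationAgent:
    SCOPE = "breach"

    def handle(self, task_type: str, payload: str) -> str:
        if task_type != self.SCOPE:
            raise ValueError("Out of scope: breach tasks only")
        return f"Drafted breach notification for: {payload}"

class Orchestrator:
    """Routes each task to the one agent whose scope matches.

    A fuller agentic workflow would also let agents emit follow-up tasks
    for each other (e.g., a confirmed phish triggering notification),
    which is where the extra trust requirement comes in.
    """
    def __init__(self, agents):
        self.agents = {agent.SCOPE: agent for agent in agents}

    def dispatch(self, task_type: str, payload: str):
        agent = self.agents.get(task_type)
        if agent is None:
            return None  # no matching specialist: escalate to a human
        return agent.handle(task_type, payload)

# Usage, reusing the hypothetical agent sketched earlier:
# router = Orchestrator([PhishingTriageAgent(), BreachNotificationAgent()])
# router.dispatch("email", "Please verify your account...")
```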
“Correctly implemented” agents are experts within their narrow view of the operation: they are trained for their task and have access to the data they need at all times. These agents will plug away and deliver high-quality results within their field of view.
If the human in the loop knows the agent's purpose and can ensure it has the data it needs, this is an excellent way to supplement and scale operations without additional hiring.
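In practice, that human check can be as simple as a pre-flight gate that holds an agent back when its required data sources are missing. This is a minimal sketch under assumed names (`preflight_check` and the feed names are hypothetical):

```python
def preflight_check(agent_name: str, required_feeds: set, available_feeds: set) -> bool:
    """Hold an agent back if any of its required data sources are missing.

    A human operator reviews the hold instead of letting the agent act
    on incomplete data.
    """
    missing = required_feeds - available_feeds
    if missing:
        print(f"[HOLD] {agent_name}: missing feeds {sorted(missing)}; routing to human")
        return False
    return True

# Example: the phishing agent needs mail telemetry and threat intel.
preflight_check("phishing-triage", {"mail_logs", "threat_intel"}, {"mail_logs"})
```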
AI Can Reduce Manual Intervention, Burnout, and Career Pivots
Cybersecurity is an alert-driven field, where tasks often involve manually evaluating dashboard after dashboard for hours at a time. Depending on the job's sensitivity, alerts can be constant, so it is easy for a human analyst to grow weary of them over time.
Additionally, entry-level cybersecurity analysts often become proficient in adjacent areas like program management, computer networking, software development, and systems engineering, which gives them the skills to pivot into other fields.
That churn could be a major contributor to the volume of job openings across the nation: for these individuals, the journey may have started in cybersecurity but ended in a different field.
AI presents a great opportunity to analyze mundane reports and alert human analysts only when necessary. That could keep entry-level analysts from burning out. It can also give mid- and advanced-level roles the support that would normally come from entry- and mid-level staff.
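A minimal sketch of that filtering, assuming a hypothetical `score_fn` that stands in for any trained risk model:

```python
def triage_alerts(alerts, score_fn, escalate_threshold=0.8):
    """Auto-handle low-risk alerts; surface only high-risk ones to a human.

    Everything below the threshold is resolved or archived automatically,
    which is the burnout-reducing filtering described above.
    """
    for alert in alerts:
        if score_fn(alert) >= escalate_threshold:
            yield alert  # only these reach the analyst's queue
        # else: auto-resolve / archive with the model's rationale

# Example with a dummy scorer: only the lateral-movement alert escalates.
sample = [{"type": "failed_login", "count": 3}, {"type": "lateral_movement", "count": 1}]
scorer = lambda a: 0.95 if a["type"] == "lateral_movement" else 0.2
print(list(triage_alerts(sample, scorer)))
```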
Threat Actors Are Using AI, Too
In this era of AI, cybercriminals are also taking advantage of the technology to enhance their attacks. They are using it to generate phishing emails and to create synthetic content, such as deepfake videos and audio files, that impersonates victims.
These attacks can trick AI models indirectly: by manipulating the data fed into a model, an adversary can coax out a result favorable to them, such as an email not being flagged as spam. That puts individuals and organizations at risk of malware exposure.
Cybercriminals are also using AI to directly affect models. For example, in a poisoning attack, a cybercriminal gains access to the AI model's training data and injects records that steer the model toward behavior favorable to the attacker. An agent may rely on such a model to decide which bank account to deposit money into, for instance; if that model is poisoned, the threat actor could divert the money to their own account instead.
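A deliberately toy illustration of label-flipping poisoning (not an attack on any real product; the word-count "model" here is invented purely to show the mechanism):

```python
from collections import Counter

def train(samples):
    """Toy 'model': counts how often each token appears with each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    score = lambda label: sum(counts[label][t] for t in text.lower().split())
    return "spam" if score("spam") > score("ham") else "ham"

clean = [("urgent wire transfer", "spam"), ("team lunch friday", "ham")]
# Poisoning: the attacker injects mislabeled copies of their own phrasing
# so the model learns to treat it as benign.
poison = [("urgent wire transfer", "ham")] * 3

print(classify(train(clean), "urgent wire transfer"))           # spam
print(classify(train(clean + poison), "urgent wire transfer"))  # ham
```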
Keep AI Secure
To guard against these evolving threats, we need to develop a robust governance strategy with multiple checks to secure the model pipeline (one such check is sketched after the list below). Some key paradigms to follow include:
- Zero Trust
- Phishing-resistant multi-factor authentication (MFA)
- Secure by Design
- Security standards such as the NIST AI Risk Management Framework
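As one concrete example of a pipeline check in that spirit (a hedged sketch; the manifest, file names, and `verify_training_data` helper are all hypothetical), training could be gated on dataset integrity:

```python
import hashlib
from pathlib import Path

# Hypothetical manifest recorded when the dataset was last reviewed;
# in practice this would hold full SHA-256 digests, stored tamper-evidently.
APPROVED_DIGESTS = {
    "train_emails.csv": "<sha256-of-reviewed-file>",
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_training_data(data_dir: str) -> list:
    """Return the files that fail the integrity check.

    A minimal tripwire: if training data has changed since review
    (possible poisoning), halt the pipeline and alert a human.
    """
    bad = []
    for name, expected in APPROVED_DIGESTS.items():
        if sha256_of(Path(data_dir) / name) != expected:
            bad.append(name)
    return bad
```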
How to Balance the AI and Human Workload
Today, the optimal overall balance between automation and human oversight is roughly 70/30, with humans controlling 70% of the work. As trust in agents grows, that balance will quickly flip to 30% or less for humans and 70% or more for the agents.
Breaking the roles down across the three career levels (entry, mid, and advanced) gives a more granular picture. AI should augment humans, and we should strategically evaluate the strengths it can bring at each level.
- As entry-level cyber roles are currently defined, a strong argument can be made that AI can completely replace human workers.
- For mid-level roles, roughly 50%-70% of the tasks could be replaced by AI.
- In advanced roles, only about 10% of their tasks could be completely replaced by AI.
These rough estimates reflect the widening range of variability in the decisions that must be made as cyber careers progress; many of those decisions are hard or impossible to train a model to make today.
Beyond training limitations, trust diminishes as the stakes of a decision rise. Over time, though, the ratio will likely shift rapidly toward more AI augmentation.
Dr. Napoleon Paxton is vice president of federal technology for AI Squared. He is an AI and cybersecurity subject matter expert and executive with over 20 years of experience.
Featured image: iStock