If you have not read Aaron Levie’s essay on Jevons Paradox for Knowledge Work, it is worth it. The idea is simple: when powerful tools get cheaper and easier to use, people do more work, not less. AI agents may do for messy, judgment-heavy work what the cloud did for predictable work: make advanced capabilities widely available.
When the cost of doing a task drops, demand often rises. New use cases become worth doing. Small teams start doing work that used to require big-company budgets. Levie’s best one-liner is the one I keep coming back to: today’s jobs are tomorrow’s tasks.
This is the surprising part of efficiency. It does not only save money. It also makes the market bigger.
Levie uses marketing as the example. Since the 1970s, the number of marketing jobs in the US grew by about 5x, even while marketing tools got dramatically better. The internet and globalization helped, but cheaper marketing inputs mattered too. Better tools did not shrink the field. They helped it grow.
I hinted at this at the end of my last essay: when you make something more efficient, you usually get more of it. That is true for cybersecurity as well, and for the people who work in it.
Last week, Anthropic announced Project Glasswing and a preview of Claude Mythos, a frontier model built for autonomous cybersecurity research. I covered the details elsewhere, so I will not repeat them here. Some people have pushed back on the benchmarks. That is fair. Vendor numbers always deserve scrutiny. Still, the overall direction is hard to deny, especially when independent evaluations point the same way.
The big change is this: LLMs can read and reason about code in a way older tools cannot. Static analyzers look for patterns. Fuzzers look for crashes. A frontier model can look at what code is trying to do, and notice when the code fails to match that intent. That is why these models are structurally better at finding security bugs. Mythos is just the first version that is difficult to shrug off.
The UK AI Security Institute’s independent evaluation says it plainly. On expert-level CTF challenges, Claude Mythos Preview succeeds 73% of the time. No model before April 2025 could solve these at all. On AISI’s 32-step corporate cyber range (“The Last Ones”), Mythos Preview solved the full range end-to-end in 3 of 10 attempts, and averaged 22 of 32 steps across runs. The curve is going up, and it does not look like it is flattening.
There is another reaction to Mythos that matters more than skepticism about benchmarks. Some people see a model finding huge numbers of bugs and conclude security engineers are about to be automated out of a job. They are right about the AI getting better. They are wrong about what it means for jobs.
Cybersecurity jobs are not going away. Not despite AI. Because of AI. Over the next few years, AI will expand the amount of security work that is worth doing, and that will grow the field.
Barriers
Start with barriers to entry. They are dropping on both sides.
On offense: building an exploit used to require a rare mix of skills. You needed to read assembly, understand memory layouts, and chain primitives in ways that survive modern mitigations. That took years. The supply of people who could do it was small. Now, an intermediate attacker with an LLM can get a working proof-of-concept for a published CVE in an afternoon. A skilled attacker can go further and find new bugs.
On defense: the same shift is now possible, starting from a much lower baseline. Most companies below the Fortune 500 have been under-defended because they could not afford a real security team. For the first time, that can change.
A threat model that used to take a week can become a day.
Secure code review that used to consume a senior engineer can start happening inside the PR.
Detection engineering, compliance evidence, and incident triage are all about to get much cheaper.
This is the “democratization” move Levie describes, but aimed at security instead of marketing. Capabilities that used to belong only to the Fortune 500 come within reach of a 50-person SaaS.
Here is where Jevons shows up.
Cheaper offense means more offense. Attackers can reach further. They can target organizations that used to be ignored. Campaigns that once required nation-state budgets can run cheaply.
Cheaper defense means defenders can finally look at the work they avoided. Not because problems were not there, but because there was no time to triage, validate, and separate signal from noise. Old backlogs become workable. Medium-severity tickets sitting for a year become tractable. So does the long tail: old code, dependencies, internal tools, half-documented services. When a model can help you test, reproduce, and prioritize, “overwhelming” becomes “manageable”.
The whole economy of attack and defense inflates. And an inflated economy needs more people working inside it, not fewer, even if each person becomes more productive.
The Gate Lifts
There is also a social layer. Security has long been gatekept, sometimes by culture. The skills to contribute sat behind years of training, specialized tools, and a kind of mystique. A developer who sensed something wrong usually had two choices: escalate to a security team (if one existed), or move on. Curiosity was not enough.
That is starting to change. With a capable model in the loop, a curious developer can do more than just worry. They can ask a model to reason about the security implications of a diff. They can try to exploit an endpoint that feels wrong. They can learn by asking, not by enrolling.
That does not make someone a security engineer. But it can make them a real contributor, and the field has rarely had contributors at scale.
Two downstream effects matter:
Developers who always cared about security but had no way in can contribute now.
Security teams can push their judgment into everyday workflows, through the coding agents developers already use.
The agent can enforce safer patterns, flag risky changes, and rewrite vulnerable code before it reaches a PR. CISOs get the coverage they have wanted for years. When something ships insecurely, security still owns the outcome, not the developer who happened to write the line. The security team gains reach into the place where code is created.
It is worth separating this from “shift left”. Shift left largely failed because it moved responsibility without moving expertise or interest. Most developers never cared about security, and scanners in CI did not change that. It produced tickets people ignored and lectures nobody asked for.
What is possible now keeps ownership where it belongs. Developers do not have to become security experts. The security team still owns the outcomes. The difference is that the security team can now act at the point of authorship via the tools engineers already trust.
This matters for supply. The field has been constrained by limited training pipelines and by the fact that many companies cannot afford the people those pipelines produce. Lowering the barrier does not replace trained security engineers, but it widens the base of people who can participate meaningfully. A developer who can do a credible security review of their own PR is valuable to a 50-person startup that cannot justify a full-time hire yet. Some fraction of those developers will become the next generation of security engineers.
Surface
Barriers are only half of the story. The other half is that what we are defending is getting bigger.
Every company is writing more code than it did two years ago. Coding agents do not just make engineers faster. They change what organizations decide to build. Features that were not worth a quarter are now worth a week. Internal tools that would never have existed now exist. Prototypes ship. Glue code multiplies. And software creation spreads beyond engineering. Sales, marketing, RevOps, and customer success will increasingly commission and deploy software, often without any serious security review. More code means more bugs. Some of those bugs will matter.
There is also new surface area that did not exist before. Agentic systems create new vulnerability classes the industry is still naming: prompt injection, tool confusion, indirect data exfiltration through model outputs, privilege escalation via agent impersonation, poisoning of persistent memory, and compromise of MCP servers and everything downstream.
The blast radius changes too. A compromised employee account is bad. A compromised agent with that employee’s permissions, running on a schedule, chaining actions automatically, and reaching into production is worse. One foothold reaches further than it used to. Defender visibility is not catching up as quickly.
So we get three forces at once: more code, new primitives, and bigger blast radius. Security work becomes harder and more voluminous.
This is where people misread the Jevons analogy. Inside big security teams, AI productivity gains will absorb much of the expanding scope. A team of fifty probably does not become a team of two hundred. It might become sixty or seventy while doing what two hundred used to. The dramatic growth is elsewhere. It is extensive growth, not intensive growth. It comes from the thousands of companies that never had a real program, and soon can afford one.
Where Humans Concentrate
Parts of the job will be automated. Low-severity alert triage, reproducing bugs, drafting detection rules, producing audit evidence. That is fine.
What does not go away is judgment, coordination, and accountability. Security is a coordination problem before it is a technical one. It spans engineering, product, legal, IT, and the executive team. The people who do it well spend much of their time making hard calls: what to prioritize, who to pull in, how to communicate, what to disclose, and when.
During an incident, someone must decide who to wake up, what to shut down, what to tell the board, when to call outside counsel, and whether to involve law enforcement. Models can draft the messages. They cannot run the room.
There is also a slower legal layer. Someone must be the name on the incident report. Someone must sign the SOC 2. Someone must own SEC disclosure under the 2023 cyber rules. Someone must handle regulators who want to know why a detection did not fire. These responsibilities do not get cheaper when models get better. If anything, they get heavier as more companies reach disclosure thresholds, run audits, and face AI-specific scrutiny.
Outlook
Put it together:
Attacking and defending are getting cheaper.
The volume of both is rising faster.
The surface area is growing on its own.
The parts that stay human — judgment, coordination, accountability — scale with the size of the problem.
That is not a field that shrinks. It is a field that grows, gets harder, and pays more.
Marketing is the closest comparison, with one key difference. Marketing’s growth came from existing teams getting bigger and from new companies buying marketing software for the first time. Security will grow the same way, but weighted toward the second. AI will absorb much of the “hiring spree” inside large security teams. The huge expansion happens at the tens of thousands of companies that could not afford real security before and now can.
I said at the end of my last essay that security will be the hottest job in tech. I still believe that. This is the economic argument underneath it. Better tools do not remove security work. They create more of it, and they create more responsibility than automated systems are allowed to hold.
Today’s jobs are tomorrow’s tasks. The security jobs of the next decade will be built on top of the tasks today’s jobs hand off. That is not a threat to the field. It is a hiring plan.
