The Conference Recap Nobody Asked For But Everyone Needed
- Apr 1

Check out my latest podcast with the famous TonyG recapping RSA: https://www.drchaos.com/post/podcast-rsa-2026-recap
San Francisco's Moscone Center hosted roughly 43,500 cybersecurity professionals last week (based on what I could find online).
The one message I kept hearing: AI... Agentic... Something...
The conference closed on March 26th. Hugh Jackman showed up to chat with RSAC's leadership. Kevin Bacon played guitar. I still think Magic Johnson last year topped them, but I grew up as a Lakers fan.
Somewhere between the keynotes and the vendor booths, the industry accidentally revealed what it's actually worried about, which I think is very different from what anyone's marketing says.
The Branding Problem Nobody Wants to Admit
Walk the expo floor and you'll see a pattern. Most vendors are doing one of four things with AI.
First, there's the outright rebrand. Legacy products from 2024 suddenly have "AI" in the marketing deck. The architecture hasn't changed. The capabilities haven't fundamentally shifted. But the color scheme now matches OpenAI, and somewhere a product manager changed "employees and contractors" to "agents" in the positioning doc.
Second is the subtle refresh. Marketing materials get updated. Sales decks pivot. The underlying technology does basically what it did last year, except now it uses an LLM to summarize your logs. Revolutionary.
Third is genuine tourism. These vendors are using AI the way a tourist uses Google Translate in Paris—confident it works, probably getting the grammar wrong. Basic summarization. Simple anomaly detection wrapped in transformer models. Nothing that would make an actual machine learning researcher lose sleep. As an AI/ML person, I find this sloppy, and it confuses people.
Then there's the fourth category: companies that actually built their architecture around AI from the start. Some vendors showed off their Agentic SOCs, where you describe what you want in plain English and the system does it. Fortinet (obvious bias: my day job is at Fortinet) is positioning around autonomous agents. These aren't rebrands. This is a different category of product entirely. I was actually kind of happy to see some of the major vendors taking the same approach.
The honest take from the floor: a lot of vendors came to RSA with nice branding and not much underneath. Some actually have something. Most people couldn't easily tell the difference.
The Thing Everyone's Actually Worried About
The real conversation at RSA wasn't about AI washing or vendor maturity. It was about what happens when you give autonomous systems permission to take action.
Traditional security operations: alert fires, person looks at screen, person decides, person fixes problem. It's slow. But humans slow down to think.
The new model: alert fires, AI agent understands context, agent determines threat level, agent remediates. No human loop.
Jeetu Patel from Cisco said this in his keynote and it landed hard: "With chatbots, you worry about getting the wrong answer. With agents, you worry about taking the wrong action."
That's the crux. You're not just concerned about bad information anymore. You're concerned about a system that isn't human executing decisions at machine speed based on patterns it learned from datasets you probably didn't thoroughly examine.
The room full of security leaders apparently went quiet at that point.
This was the thread running through nearly every keynote. Splunk marketing an Agentic SOC. Cisco positioning agents as "digital co-workers." Palo Alto talking about autonomous response. Fortinet engineering an Agentic SOC. All of it centered on the same uncomfortable idea: we're building systems that can operate independently of human oversight (not saying that is how you have to or would deploy these solutions).
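The "no human loop" model the keynotes kept circling can be made concrete. Here's a minimal, hypothetical sketch of the guardrail pattern those vendors described (not any vendor's actual API—names like AgentAction and AUTONOMOUS_OK are my own illustrative assumptions): low-impact actions run autonomously, while destructive ones are gated behind explicit human approval.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str          # e.g. "isolate_host", "summarize_alert" (illustrative)
    target: str        # the asset the action touches
    destructive: bool  # could this action disrupt production?

# Actions an agent may take on its own; everything else needs a human sign-off.
AUTONOMOUS_OK = {"summarize_alert", "enrich_ioc", "open_ticket"}

def execute(action: AgentAction, human_approved: bool = False) -> str:
    """Gate agent actions: low-impact ones run autonomously,
    destructive ones wait for an explicit human approval flag."""
    if action.name in AUTONOMOUS_OK and not action.destructive:
        return f"executed:{action.name}:{action.target}"
    if human_approved:
        return f"executed-with-approval:{action.name}:{action.target}"
    return f"queued-for-review:{action.name}:{action.target}"

print(execute(AgentAction("summarize_alert", "alert-123", False)))
print(execute(AgentAction("isolate_host", "db-prod-01", True)))
```

The design choice is the whole debate in miniature: where you draw the AUTONOMOUS_OK line determines whether you have a fast SOC or an agent that can take the wrong action at machine speed.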
Identity Became the Secret Language
Something odd happened at RSA 2026. Identity security showed up in almost every discussion, but not in the way it used to.
For years, identity meant: can this user access this resource? Is this device trusted? What's the permission model?
Now identity means something different. Which AI agent is acting? On whose behalf? Did an actual verified person authorize what's about to happen? Did the system grant itself permissions it shouldn't have?
The industry is trying to build an identity framework for non-human actors operating at machine speed. The problem is obvious: your traditional identity infrastructure wasn't designed for entities that can operate autonomously, make decisions instantly, and potentially escalate their own privileges if something's misconfigured.
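To make the "agent identity" idea tangible, here's a minimal sketch of one pattern people were describing: a signed, short-lived token that binds which agent is acting, on whose behalf, and with what scopes. Everything here (SECRET, field names, scope strings) is an illustrative assumption, not a standard or a product; real deployments would use something like OAuth token exchange.

```python
import hashlib
import hmac
import time

SECRET = b"demo-only-key"  # illustrative; never hardcode keys in practice

def mint_token(agent_id: str, on_behalf_of: str, scopes: list[str], ttl: int = 300) -> dict:
    """Issue a token binding an agent to a human principal and a scope list."""
    claims = {
        "agent": agent_id,
        "on_behalf_of": on_behalf_of,  # the verified person who authorized this
        "scopes": sorted(scopes),
        "exp": time.time() + ttl,      # short-lived: agents shouldn't hold standing creds
    }
    payload = repr(sorted(claims.items())).encode()
    claims["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claims

def authorize(token: dict, required_scope: str) -> bool:
    """Check signature, expiry, and that the requested action is in scope.
    The signature covers the scope list, so an agent can't grant itself a scope."""
    claims = {k: v for k, v in token.items() if k != "sig"}
    payload = repr(sorted(claims.items())).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token.get("sig", ""), expected):
        return False  # tampered, e.g. self-escalated scopes
    if time.time() > claims["exp"]:
        return False  # expired
    return required_scope in claims["scopes"]

tok = mint_token("soc-agent-7", "alice@example.com", ["read:alerts"])
print(authorize(tok, "read:alerts"))    # True
tok["scopes"].append("delete:logs")     # agent tries to escalate its own privileges
print(authorize(tok, "delete:logs"))    # False: signature no longer matches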
One attendee put it directly: "Security is no longer centered on devices or networks. It's now centered on which AI agent is acting, on whose behalf, and whether a real, verified person authorized that action."
I had many meetings with clients, and the closed-door sessions with CISOs kept coming back to this. How do you verify that an AI agent is doing what it's supposed to do? How do you audit its decisions after the fact? What's your liability if an autonomous system takes the wrong action?
These aren't abstract questions anymore. Our customers were asking us these questions. Organizations are deploying agentic AI systems right now, and they're trying to figure out the answers in real time.
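The after-the-fact auditing question has at least one well-understood building block: a hash-chained, append-only log of agent decisions. This is an illustrative sketch of that general technique (class and field names are my assumptions, not any vendor's product); the point is that rewriting an earlier decision breaks the chain and becomes detectable.

```python
import hashlib
import json

class AgentAuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent: str, decision: str, rationale: str) -> dict:
        """Append an entry whose hash covers the previous entry's hash,
        chaining the log so past decisions can't be silently rewritten."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"agent": agent, "decision": decision,
                "rationale": rationale, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AgentAuditLog()
log.record("soc-agent-7", "quarantine host-42", "matched C2 beacon pattern")
log.record("soc-agent-7", "reset creds for bob", "impossible-travel login")
print(log.verify())                       # True
log.entries[0]["decision"] = "no action"  # retroactive tampering
print(log.verify())                       # False
```

It doesn't answer the liability question, but it at least makes "what did the agent decide, and why" a record you can trust.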
What CISOs Actually Said in Private
The keynotes were polished. The vendor pitches were slick. But the real conversation happened in the closed-door meetings, the private briefings, and the conversations between sessions.
Data is the actual competitive advantage. Every client, CSO, or security team I met in a private meeting said some version of the same thing: you can use an AI model to extract insights from data you have, but you can't use a model to create better data. And a follow-up question kept coming with it: how and where do we get the data from our organization, and how do we feed it into AI in a reliable, secure way without it being used as training data?
The vendors winning attention on the floor weren't the ones with the most sophisticated AI architectures.
They were the ones with differentiated threat intelligence. Current feeds of what attackers are actually doing. Proprietary data that competitors don't have.
Detection is solved. Response is broken.
Security teams have gotten good at detecting threats. The capability gap is in response speed. A CISO's actual complaint: "Detection has improved significantly. The ability to act on those detections, especially for external threats like fraudulent domains or impersonated brand assets, has not kept up." This matters because attackers are now operating at a speed that manual response processes can't match. You can see the attack. You can't stop it fast enough.
Overconfidence in AI is creating a risk gap.
The past five years have created a lot of people who consider themselves AI experts. That's not necessarily the problem. The problem is deploying AI systems without understanding where they fail. Using models without validating their threat models. Assuming sophistication equals security. Adversaries are betting their attacks against exactly this gap.
Quantum is coming, but nobody's ready. There were at least twice as many quantum-related vendors at RSA compared to previous years: IONQ, ID Quantique, Quantinuum, IBM, QuintessenceLabs. The conversation wasn't theoretical. It was about post-quantum cryptography. About preparing for algorithms that can break your current encryption. About doing it before quantum computers actually exist at scale.
The Uncomfortable Speed Metrics
FortiGuard Labs' threat intelligence team mentioned something that didn't make the headlines but should have: the window between initial breach and attacker handoff has collapsed from hours in 2022 to seconds in 2025 (in some cases). Similar stats were mentioned by Google's Threat Intelligence teams.
That's not a trend line with a gentle slope. That's a cliff.
Your mean time to detect might be excellent. Your incident response team is still measured in minutes. The math doesn't work.
Attackers are also using agentic AI to scale old techniques. Phishing campaigns that used to require coordination now run autonomously. Brand impersonation sites spin up faster than takedown processes can respond. Credential theft runs at machine speed.
One source described it as the end of "luck-based security": the idea that you're safe because nobody's found your misconfigured systems yet, or because your vulnerability is obscure enough that it's unlikely to be exploited. With agentic attackers, obscurity isn't a defense. Being hidden doesn't matter if the attacker can probe your entire infrastructure systematically in minutes.
What The Conference Accidentally Revealed
RSA 2026 showed an industry that has correctly identified the problems but hasn't solved them yet.
The problems are clear: How do you secure agentic AI? How do you verify that autonomous systems are doing what they're supposed to? How do you maintain trust across systems you don't fully understand? How do you respond to attacks operating at machine speed?
The solutions are still being worked out.
There's a fundamental tension running through the conference. Agentic AI is becoming faster, more powerful, and more opaque simultaneously. Organizations are deploying autonomous systems before guardrails exist. Attackers are scaling old techniques with new tools. And everyone's trying to figure out if their security vendor actually innovated or just hired a better marketing team.
Jeetu Patel mentioned the "oops phase"—the period where autonomous systems are operating faster than anyone expected and mistakes are happening before they're caught. That's not some distant risk. That's happening now.
The credential theft problem is real. An AI agent with the right permissions can do serious damage before anyone notices something's wrong. The attack surface is expanding faster than security teams can manage. The gap between what defenders can handle and what the threat landscape demands is widening.
The Actual Takeaway
If you attended RSA, you probably left with a sense that the industry is in transition. We're moving from reactive security (detecting what happened) to response-focused security (stopping what's happening) to predictive security (preventing what might happen). And now we're stepping into autonomous security (systems that act independently) while also trying to defend against autonomous attackers.
That last piece is the trick. How do you build systems that are both autonomous enough to keep up with threats and constrained enough that you can trust them?
The conference didn't answer that. But it's clear that vendors are trying, CISOs are worried, and the clock is ticking because attackers aren't waiting for the solution.
The vendors that'll win in the second half of 2026 won't be the ones with the flashiest AI marketing. They'll be the ones with actual answers. How do you audit autonomous decisions? How do you prevent the wrong action from being taken? How do you maintain trust in systems nobody fully understands?
Those questions were asked a hundred times in the hallways and closed-door sessions. The people asking them weren't looking for clever marketing. They were looking for something that actually works.
That's what matters. That's what'll matter for the next year. Not the hype. Not the rebrands. Actual solutions to the hard problem of building security that keeps pace with threats that operate at machine speed.