I've spent 20+ years building bridges between development, operations, and security teams. I've transformed organizations where these groups went from adversarial to collaborative, where velocity and security reinforced each other instead of jockeying for position. I know what's possible when technology teams work together.
I also know when something fundamental has changed.
AI-powered development isn't just making developers faster. It's turning non-developers into builders. And while we're still fighting yesterday's battles over development velocity versus security controls, a new paradigm is taking shape right in front of us.
I know because I became one of them.
I recently dove into AI development. Not to audit developers or study them from a distance, but to understand what AI-assisted building actually feels like.
I'm a techie at heart. I've been in technology my entire career, from electronics tech to systems engineer to strategic leadership. I love technology and feel most at home amongst my fellow technologists making things happen. So I set out to see what AI could do, how it does it, and how I might use it to extend my own capabilities.
I started with what I knew: ChatGPT and Claude. I wanted to build a local environment where multiple AI agents could work together on tasks using Codex, Claude Code, and the Gemini CLI.
It worked. Very cool.
Then I started thinking about features. What if it had a UI? What if I could track jobs in a dashboard? What if it could scan its own codebase and generate context-aware tasks to get smarter with each build?
So I built an AI-powered development pipeline. Plain-language input orchestrating multiple AI models to plan, code, test, and deploy to GitHub. All automatically.
I got it to work!
Each feature led to the next. Token tracking. Cost estimation. Auto-repo creation. Job queue. Status monitoring. Next feature. Next feature.
In one session, it scanned its own codebase, generated context-aware tasks, wrote the code, and pushed to GitHub. All from a single job description.
The entire flow took minutes.
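For the curious, the core loop is simpler than it sounds. Here's a minimal sketch of that kind of job pipeline; the `Job` structure and stage names are illustrative, not my actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """A plain-language request tracked through the pipeline."""
    description: str
    status: str = "queued"
    log: list = field(default_factory=list)

def run_pipeline(job: Job) -> Job:
    """Walk a job through plan -> code -> test -> deploy.

    In a real system each stage would shell out to a different AI CLI
    (Codex, Claude Code, Gemini); here each stage is a stub that just
    records that it ran, so the control flow is visible.
    """
    for stage in ("plan", "code", "test", "deploy"):
        job.status = stage
        job.log.append(stage)  # in reality: capture tokens, cost, output
    job.status = "done"
    return job

job = run_pipeline(Job("scan the codebase and add a status dashboard"))
print(job.status, job.log)
```

Every extra feature, such as token tracking, cost estimation, or repo creation, is just another hook on that loop, which is part of why the features stack up so quickly.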
I built this? What?!?
I'm not a developer. Never have been. I was a decent scripter in my day, but coding was never in my wheelhouse. But now, I felt like AI was giving me a chance to build things I couldn't before.
And in that flow of building, security never crossed my mind. Not once.
I sat back and admired what I'd built as reality began to hit me. Where are my passwords and keys stored? How much access does this now have to my file system and AI accounts? How do I know which component accessed what, when, and why?
What was I thinking? I wasn't. That's the point.
I ran a security assessment on my own creation. Eighteen vulnerabilities. Several critical. The code worked perfectly. That's what made it dangerous.
AI optimizes for code that works, not code that's safe. It replicates insecure patterns at scale with complete confidence.
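A concrete, hypothetical example of the kind of pattern I mean. The generated code runs perfectly on the first try, and the secret is sitting in the source file. The key value, function, and variable names below are all invented for illustration:

```python
import os

# What "code that works" often looks like: the key is hardcoded,
# so the script runs immediately -- and the secret ships with the repo.
API_KEY = "sk-live-abc123"  # hypothetical value; insecure pattern

# The same lookup, built in rather than bolted on: the secret lives
# in the environment (or a vault), and missing config fails loudly.
def get_api_key(env=os.environ) -> str:
    key = env.get("MY_SERVICE_API_KEY")  # variable name is illustrative
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key
```

Both versions pass a "does it work?" test. Only one of them survives contact with a repo, a teammate, or an attacker.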
I wasn't deliberately ignoring security. In my excitement and drive to create something important, security requirements simply weren't on my mind. I guess security could come later? But later means bolting security on instead of building it in. By then the architecture is already set, with the wrong habits baked in.
So now, here's what keeps me up at night. Even with all of my experience, when I started building with AI, security was in the parking lot. If that can happen to me, what's happening everywhere else?
We're having the wrong conversation.
Across our organizations, development, security, operations, and executive leadership are still debating developer velocity versus security controls. Still fighting about approval processes and gate requirements. Still treating this like the same tension we've always had, just faster.
But something fundamentally different is happening. AI is creating an entirely new class of builders.
People who couldn't code before can now ship working software. Non-technical domain experts build tools for their own workflows. Business analysts prototype solutions without waiting for engineering resources.
The vibe coders are here. And they're not asking for permission.
Folks no longer have to wait for developers to work on their ideas. They don't have to wait for infrastructure. They aren't thinking about operations, security, or supportability. They're just building.
Professional developers may not always get security right, but they understand the terrain. They know the difference between production and non-production. They have lived through deployment cycles, dependency issues, operational realities, and the consequences when something breaks.
The new builders bring something professional developers can't fake: deep domain expertise. They understand their problems in ways that would take engineers months to learn. AI removed the coding barrier. It didn't grant domain knowledge.
But they have never had to think about production systems, integration points, security and compliance constraints, or who wakes up at 2 AM when something fails.
This is the new shadow IT: working prototypes built in days by people who expect them to be deployed, but who have never had to think about what deployment actually requires.
This feels disruptive because it IS disruptive. But it may also be the moment we have needed.
For years we have tried to break down the walls between engineering, operations, and security. We built frameworks, held workshops, aligned on principles, and still found ourselves fighting over process, ownership, and control. AI has finally forced the issue.
It will take engineering, operations, and security to operate as one and build together what comes next.
Here's the thing: no one has solved this yet.
How do you build systems that receive applications from anywhere -- a product manager prototyping a customer feature, a domain expert building a data pipeline, a business analyst automating a workflow -- and route them intelligently toward production?
This is a genuine architecture problem. Intake APIs that assess what arrived. Scanning engines that understand application patterns. Transformation pipelines that inject secure-by-default authentication, secret handling, observability. Security encoded as architecture, not bolted on as review. Routing logic that knows the difference between customer-facing features and internal tools.
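To make that concrete, here is one way the intake-and-routing piece could be sketched. This is an assumption about shape, not a reference implementation; the `Submission` fields, route names, and remediation strings are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    """What the intake API sees when an app arrives from anywhere."""
    name: str
    customer_facing: bool
    hardcoded_secrets: bool
    has_auth: bool

def assess(sub: Submission) -> dict:
    """Scan, transform, route: the platform's core decision."""
    remediations = []
    if sub.hardcoded_secrets:
        remediations.append("move secrets to managed secret storage")
    if not sub.has_auth:
        remediations.append("inject default authentication middleware")
    # Routing logic: customer-facing work gets the heavier path.
    route = "production-review" if sub.customer_facing else "internal-tools"
    return {
        "route": route,
        "remediations": remediations,
        "ready": not remediations,
    }

verdict = assess(Submission("pm-prototype", True, True, False))
print(verdict["route"], verdict["remediations"])
```

Even a toy version makes the point: the platform doesn't just reject the prototype, it tells the builder exactly what stands between their app and production.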
You need security's expertise to get the threat modeling right. You need operations to design for what happens at 2 AM. This isn't governance overhead. It's the domain knowledge required to architect systems this complex.
Whoever figures this out first writes the playbook. Your patterns become the reference architecture. Your platform becomes what the industry copies.
Build this proactively and you're the thought leaders who designed the paradigm. Wait, and you're firefighters inheriting deployments under controls you didn't design.
And when something genuinely can't be made production-ready? Your platform explains why and shows a path forward. Rejection that teaches, not gates that block.
You're not containing vibe coders. You're designing the system that transforms them into builders.
You've spent years being the people who say no.
Now you get to design the systems that say yes safely. Your expertise isn't overhead. It's the domain knowledge that makes the platform architecturally sound.
This is your seat at the design table. Take it.
Right now, AI feels like chaos.
"Our talent will leave if we don't adopt it." "Our customers will panic if we do." "Our competitors might figure it out first."
But what if this is the moment you actually enable AI at scale, safely?
With deliberate resource allocation and strategic planning, you transform invisible risk into competitive advantage. You create career evolution, not job elimination. And you can tell a compelling story to the market.
Competitors are frozen in panic about AI. Customers are asking tough questions about how you're managing AI risk. But you've unified your technology organization to harness AI responsibly and intelligently while ensuring security, operational excellence, and integrity.
Market differentiation. Customer confidence. Thought leadership while competitors remain paralyzed.
Done proactively, this creates new roles. Citizen developer platform builders. Integration engineers. AI output validators.
Just as cloud computing evolved data center engineers and system admins into cloud engineers and platform specialists, this wave evolves technologists into the architects of what comes next.
If the technology teams don't come together on this, it falls to you to force the issue. This is the defining technology challenge of the next five years, and it's your window to turn AI from a source of anxiety into a competitive advantage.
Build it and they will come. Block it... and they will come anyway.
The vibe coders are here.
Time to get ready.