Navigating AI in 2026
The future has never been as unevenly distributed as it is today.
Software engineering is going through a revolution. If you're reading this and you write code for a living, you already feel it. And if you don't feel it yet, that's actually more concerning.
For the first time in history, there's a real contender to dethrone writing production code by hand: Vibe Coding.
The pivotal moment came in late November and December 2025, when software engineers got access to the Opus 4.5 and Codex 5.2 models. These weren't incremental improvements – they fundamentally changed what's possible.
Boris Cherny, the creator of Claude Code, recently mentioned that, for the first time in his career, he didn't open his IDE for an entire month. Ryan Dahl, the creator of Node.js, said "the era of humans writing code is over." Andrej Karpathy said on X: "This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks."
These aren't hype merchants. These are people who built the tools we use.
This change is VERY new, and it's REALLY happening.
What is vibe coding, really?
Vibe coding gets a bad rap sometimes, and I think I understand why. There's ambiguity around the term, and there are moral objections. Let's tackle both.
Vibe coding is writing software in such a way that AI writes the vast majority of the code.
That's it. There's no requirement that humans never touch the code, that we never read the code, or that we one-shot a whole app. All it means is that AI is writing most of the code instead of our keyboards.
Now, this wasn't quite how the term was originally introduced by Andrej Karpathy, so it's understandable there's ambiguity. A friend of mine who heavily uses AI once told me "this other app I completely vibe coded, I haven't seen the code" – but "code that is vastly generated with AI" doesn't quite roll off the tongue, does it?
I'd like us to use "vibe coding" for any approach where AI writes the vast majority of the code, so we can name the principle and then focus on the tactics: how much humans guide the AI, which tools support it, and whether humans read the code or not.
The moral objections
Humans are moral creatures, which means we have deeply ingrained feelings about what's right and wrong. And AI writing most of the code doesn't feel right to many people.
There are valid reasons for this. Much of today's code is mission critical, with lives depending on its correctness. There's a trillion-dollar software engineering industry and no telling what happens to it in a world where vibe coding dominates. Perhaps most importantly, most conversations about AI alignment were thrown out the window the moment AI became a gold rush: leverage AI for market value now, figure out if it's doing the right thing for humanity later.
I think there are reasonable moral objections to embracing vibe coding. But our lack of sophistication in ethics bites us here: moral philosophy just hasn't advanced enough to serve humanity as an uncontroversial guide, so technological innovation will continue its stampede.
To quote System of a Down (lol): "You can't afford to be neutral on a moving train." So you should probably still figure out what's going on and what role you'll play in this.
Some software engineers who have learned and honed their craft for years are resisting and/or grieving the obsolescence of their skills.
This I understand. But it's how technology evolves. First, we stopped writing machine code. Then, we stopped managing memory. Now, we'll no longer write the code. Software engineering is still necessary – software will exist forever and it needs to be engineered. How humans engineer it, though, was always subject to change.
Personally, I think most people who master vibe coding will NOT go back to writing code by hand even on their personal time. But we just have to go through all the stages of grief on this one.
AI is not magic
When people hear "AI writes code now," it's easy to assume this is magic.
You can't describe an app and have AI one-shot it end to end in any production-grade manner. The same goes for features, and often even for individual PRs.
Any idea that you can vibe code production-grade software in early 2026 without human verification or expert use of processes and tools is a fallacy.
But there's a new discipline emerging that requires an entirely new set of skills: reliably guiding AI work.
The challenge is that most of us aren't well positioned to get the best out of this technology. Not because the technology is incapable, or because we're incapable of learning, but because it's genuinely challenging to guide AI to do really good work.
The learning curve for all tools and techniques is steep, there are virtually no consolidated materials anywhere, and new tools and techniques are coming out literally every week.
Right now, learning what exists and how to use it is a full-time job's worth of work. It's the opposite of intuitive and free – it's time consuming and expensive.
So it's easy to assume the hard part of software engineering is over because AI is writing the code now.
It's not. The hard part has shifted.
You're holding it wrong
A lot of people blame AI for being incapable. It's an understandable first instinct, but most of them are just not very skilled at using AI to write reliable code.
We expect AI to be intelligent because "intelligence" is right there in the name. So we assume it'll know exactly what we want even if we're not good at asking. That's almost completely the opposite of the truth given the current set of tools.
It's never been as hard to know how to use AI to its utmost capabilities as it is today. As its power increased, so has the expertise necessary to wield it effectively.
Right now, if you want to be effective in tool usage alone, you need to know Claude Code or Codex CLI, skills files, custom commands, subagents, MCP servers, hooks, and how to manage agents in parallel. Each of these is its own learning curve, and they're all evolving rapidly.
On top of that, there are techniques you need to be familiar with that are essentially brand new: agent planning, TDD with AI, specs and agent specifications, building visual verification capabilities, verification loops, background agent delegation, and building skills for the myriad roles your agents can take.
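To make one of these concrete, here's a minimal sketch of background agent delegation and parallel agent management. It assumes a headless agent CLI (the claude -p invocation below is a placeholder; substitute whatever non-interactive command your tooling provides) and hypothetical task names, and it uses git worktrees so parallel runs can't step on each other. A sketch, not a definitive implementation.

```python
# A sketch of background agent delegation, not a definitive implementation.
# Assumptions: a headless agent CLI invoked as `claude -p "<prompt>"` (swap in
# your own tool's non-interactive command), and a git repo where each agent
# gets its own worktree so parallel runs don't clobber each other.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASKS = {
    "flaky-tests": "Investigate and fix the flaky tests in tests/integration/.",
    "http-retries": "Add retry with backoff to the HTTP client in src/http_client.py.",
}

def run_agent(name: str, prompt: str) -> str:
    worktree = f"../agent-worktrees/{name}"
    # One isolated checkout and branch per agent (hypothetical naming scheme).
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{name}", worktree],
        check=True,
    )
    # Run the agent headlessly and capture its transcript for later review.
    result = subprocess.run(
        ["claude", "-p", prompt],  # assumption: print/non-interactive mode
        cwd=worktree,
        capture_output=True,
        text=True,
    )
    return f"=== {name} ===\n{result.stdout}"

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        for transcript in pool.map(lambda item: run_agent(*item), TASKS.items()):
            print(transcript)
```

The point isn't the specific commands; it's that delegation, isolation, and capturing each agent's output for review are all things you currently end up scripting yourself.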
When people say "you can't trust AI," what they're really saying is that they can't trust it – not that you can't. Trusting AI – and knowing exactly which parts you can never trust – is a skill. It takes a LOT of time and effort to build.
The people who dismiss AI as unreliable are often the same people who haven't invested the dozens (or hundreds!) of hours to learn how it actually works with modern tools. And the people who over-trust AI are often the ones who haven't been burned yet.
Verification, verification, verification
Like the "location, location, location" real estate adage, the three most important things in vibe coding are verification, verification, verification.
Manually reviewing AI-generated code the way we review human code doesn't scale.
The volume is just too high and the cognitive load quickly becomes unsustainable. AI-generated code spans the full spectrum, from "this is great and we can merge it as is" to "this is absolute crap and why is it completely wasting my time," and everywhere in between.
With AI in 2026, the most important question is "how are you verifying AI's work?"
This is an unsolved problem across the industry, but options abound: many experts work with AI to delineate clear success criteria through automated tests, then work with AI to ensure the tests pass (while preventing AI from just deleting the tests!). Knowledgeable engineers can also review AI's code themselves, acting as an oracle for what good looks like.
Some people dismiss AI coding because verification is hard. Others ignore verification entirely and assume AI output is always reliable. Both positions are just completely wrong.
Explicit verification is critical, whether through automated tests during development, proper prompting as the AI generates code, AI-only reviews, AI-assisted human reviews, validation in lower environments, or canary deploys with automated rollbacks.
For many production-grade applications, engineers must play the role of oracle – the guide AI uses to check its solution against the "truth." This is particularly valuable early in the cycle, before the code makes it into a PR. Of course, if you're not a strong software engineer, you don't know what good looks like and can't be an oracle for AI's work.
Shifting our guidance of AI left and automating verification as close to code generation as possible is one of the core skills in this multi-agent world.
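As an illustration, here's what a shifted-left verification gate might look like in its simplest form. It's a minimal sketch assuming a git-tracked pytest project; the base branch and naming conventions are assumptions, and a real gate would add linting, type checks, and whatever else defines "done" for your team.

```python
# A minimal verification-gate sketch, assuming a git-tracked pytest project.
# It runs right after code generation: fail if the agent deleted test files
# (gaming the success criteria), then fail unless the suite is green.
import subprocess
import sys

BASE = "origin/main"  # assumption: the branch the agent's work diverged from

def changed_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-status", BASE],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def main() -> int:
    # Guard 1: the success criteria must survive. No deleting tests to go green.
    deleted_tests = [
        line for line in changed_files()
        if line.startswith("D") and "test" in line.lower()
    ]
    if deleted_tests:
        print("Verification failed: test files were deleted:")
        print("\n".join(deleted_tests))
        return 1

    # Guard 2: the success criteria themselves. The suite has to pass.
    tests = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    if tests.returncode != 0:
        print("Verification failed: test suite is red.")
        return tests.returncode

    print("Verification passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wire something like this into whatever runs after each agent edit and you've automated the mechanical part of review, leaving humans for the oracle part.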
The bottleneck shifts
If you've read The Goal by Eliyahu Goldratt, you know that optimizing anything other than the bottleneck is waste.
AI is about to expose every bottleneck in the software development lifecycle.
As AI coding becomes more common, it pushes hard on all the other constraints: merges, CI, QA, deploys, security. Every workflow is built for humans, not AI agents.
If engineers can write code 5x faster but the CI pipeline takes 45 minutes, the deploy process requires three approvals, and security review is a two-week queue, you won't ship more software. Writing software with AI will only make your release speed worse.
Organizations will need to rethink how they operate to actually capture the productivity gains AI makes possible.
And while learning these skills is incredibly hard, lasting organizational learning and change is even harder.
It's a pay-to-play field
AI coding is not just about learning new skills. It's about access to tools and codebases, and about having the time and resources to learn through them.
Many people serious about AI-assisted coding have Claude Max 20x accounts, often several accounts, just for personal use. I rented a server so I can run my agents without frying my laptop. The cutting edge isn't free, and the investment gap is widening.
Learning to code with AI effectively requires experimentation, failure, and iteration. That takes time, and it takes money.
For me, learning to code with AI has consumed pretty much my entire study budget for the past several weeks. It's one thing to know how to use AI but another to develop a feel for the agents, the models, and the workflow. That only comes from heavily investing time.
I've also been studying open source projects, because I think it's one of the few ways outside of work to learn on production-like codebases. The skills here aren't about kicking off greenfield prototypes you'll throw away tomorrow. They're about learning to onboard, navigate, and change brownfield, high-criticality applications. That's where the real value lies.
There's no formed curriculum. No established certification. No clear learning path. Everybody for themselves.
Cognitive load is about to explode
Currently, we measure cognitive load by how many lines of code a team maintains, sometimes divided by how many engineers are on the team. That number is about to grow by an order of magnitude, with power-law growth over the next few years.
Understanding the codebase you own is about to become much more difficult – dare I say impossible – and require a completely different set of skills, because you're owning it with AI instead of on your own.
We're also used to writing a single piece of functionality at a time. But now we'll oversee 2 agents in parallel, then 4, then 12, then 20+. The cognitive load of overseeing all this parallel work, often in different projects, is nothing like what we're used to.
Many engineers say they don't even think about the code anymore, only about the architecture. PMs, Designers, and other Eng-adjacent roles are getting more involved with coding given AI, and will go through even more drastic changes in the mental work they do day to day.
In a way, it's reassuring: your current cognitive load problem – tens of thousands of lines of code that no engineer owns – just became completely moot. We're now in a whole different league.
The economics are shifting
The economics of rewrite vs. maintain are about to change significantly.
Rewriting code used to be expensive. Maintaining was the only realistic option for most teams. It's often quoted that 80% of software time is spent maintaining rather than building – which strikes me as true in our old world.
But now rewriting is becoming cheaper and maintaining more expensive, because there's so much more software being built in a shorter amount of time. Technical debt that used to be "too expensive to fix" might suddenly be worth addressing. Rewrites that were never gonna be prioritized become holiday proofs of concept.
Modularity, services architectures, golden paths, stable APIs – things that were best practices become almost obligatory in this new world.
Programming language choice is about to matter a lot more, and to be judged under completely different criteria.
Languages were designed with humans writing code by hand in mind. Ruby, for instance, was optimized for "developer happiness." But now we need languages that make the AI happy – large training corpora, good verification tools.
Languages like Rust with correctness built in and reassuring feedback loops become more important. We need to feed verification mechanisms to the agents to remove ourselves as the bottleneck. The more the language can verify, the less we have to.
Runtime speed will also become significantly more critical. If a team is writing an order of magnitude more software, verification can become an order of magnitude slower and be too slow for AI. Some languages will absolutely not have viable runtime characteristics or infrastructure costs for a team of the same size once engineers are shipping 10x more software.
What this means for hiring
Hiring is about to get even more complicated than it is already.
The skills that made someone a great engineer in 2023 are not the same skills that will make someone great in 2027. Agent verification, orchestration, AI architecture guidance, fluency with tools and models – so many critical software engineering skills didn't exist 6 months ago.
How do you interview for that? How do you evaluate someone's ability to oversee 12 agents working on different parts of a pre-existing codebase? How do you test whether they can catch a subtle security vulnerability in AI-generated code?
We don't have good answers yet. Most interview processes still test for the old skills. Companies that figure out how to identify and develop the new skills will have a significant advantage.
Even Steve Yegge's Vibe Coding book from October 2025 suggests doing a manual coding round in your interview loop. I wonder if that advice will hold until October 2026. I suspect it won't.
In a world where the most efficient way to do the work is literally being invented on a weekly basis, having engineers who proactively build new skills is a huge advantage over having engineers who wait until they're taught the new best practices.
It has NEVER been as critical as it is today to figure out how to get the people in your organization to learn new skills. And honestly, I haven't seen any organization do this well – it's just not what organizations are wired to do.
The biggest revolution yet
This is the biggest revolution in software engineering yet. Bigger than the internet. Bigger than mobile.
Different people are adjusting at different paces, and the innovation is so quick that it feels like we're each living in a different part of the timeline.
The future has never been as unevenly distributed as it is today.
Anthropic and OpenAI are living in the future in how they write code. Meanwhile, software engineers are spread across the entire spectrum, anywhere between 2022 and 2027 in how they work – in today's terms a generational difference.
The speed of innovation has another effect: catching up is only getting harder. The changes seem to follow a power law, and the gap is expanding instead of shrinking. It's very hard to stay on the cutting edge; if you keep your current pace, you'll almost surely fall more and more behind.
What I'm doing about it
I really don't have all the answers, and I wouldn't trust anyone who says they do. This is unprecedented, and it takes a lot of expertise to even parse what's going on, let alone predict the future with any accuracy.
But here's what I'm doing:
Invest the time. There's no shortcut. You have to spend hours working with these tools to develop expertise, competence, and intuition. You no longer have years to learn how to write software. You have weeks.
Train on real codebases. Prototypes won't teach you the hard skills. Find open source projects or brownfield applications where the stakes feel real. Build production-quality changes on existing, large codebases, because that's where a lot of the value of leveraging AI will come from.
Build verification into everything. Don't trust AI output blindly. Don't trust AI output blindly. Don't trust AI output blindly.
Rethink your architecture. Modularity isn't optional anymore. If your codebase is entangled, it'll be hard to leverage AI effectively. Architect for AI.
Stay humble. None of us knows exactly where this is going. The people who will thrive are the ones who keep learning. If we define expertise in absolute terms – people who really know what there is to know in the field – there are virtually no experts anymore.
We're at the beginning of something massive. The rules are being rewritten.
And the engineers, teams, and companies that adapt fastest will certainly have significant leverage over their competition.