2026 AI Predictions
3 mediocre predictions for an exciting year
Inspired by Jasper, here are some things I think will happen this year.
It turns out I’m not prone to making big-picture predictions about the state of the world (no AGI timelines here). Unsurprisingly, most of these are about how we use AI to write better software faster.
1. Most of the effort and more of the revenue will converge on coding agents
Over the last few years we’ve seen many attempts at “AI products” across different software verticals. Coding agents are the runaway success story, and I think the frontier labs know it.
As a result, I expect most of the labs’ effort to go into making models better at writing code, and most of their revenue to come from coding agents.
This isn’t a judgement on non-coding AI products; it’s just a prediction of where the puck is heading from the perspective of model development.
2. The challenge for software companies will be keeping up
Because software development capabilities will expand even more rapidly in 2026, the big challenge for software companies will be keeping up: adopting these new capabilities and rolling them out across teams with varying levels of capability (and enthusiasm) for AI-driven coding.
Coding with AI is a new skillset: it’s fundamentally different from what came before, and it’s changing rapidly. Staying good at it will be a big job for engineering teams that want to excel.
In 2025 it was common for someone to say that AI was making them a better coder; less common for someone to say that about a colleague. In 2026 the improvements will be much more noticeable.
3. A new technique solves the domain knowledge problem
Right now the biggest challenge in agentic coding is giving agents enough domain knowledge to truly understand the codebase, product, or business, so that they can make realistic decisions and move forward uninterrupted.
There are a lot of things the models are smart enough to do, but they’re held back by a lack of domain knowledge.
This might get solved by longer context windows, but my specific prediction is that the industry will come up with new techniques that eliminate this problem with the models we have today.
These solutions won’t just be useful for coding agents - the same techniques will apply across all domains and probably make a bunch of other AI software much more useful.
What did I get wrong?

Your third prediction about the domain knowledge problem is the one I keep coming back to. You're right that the models are smart enough but lack the context.
I reckon the answer isn't a single new technique but more of a pattern. You feed the agent your team's conventions, architectural decisions, and coding standards through structured instruction files. Not just longer context windows but curated context that tells the agent how your codebase works. I wrote about this with OpenCode agents specifically https://blog.devgenius.io/your-senior-devs-dont-scale-your-opencode-agents-can-e2ecf2d04548 and found it eliminates most of the "technically correct but doesn't fit" output that makes people distrust agent-written code.
Basically your prediction might already have a partial answer in the form of agents-as-code, where the domain knowledge lives in config files that travel with the repo. Did you end up seeing anything along those lines since?