Build vs Buy Is About to Flip (Again)

Agentic AI is changing what it costs to build software. For enterprise leaders, that means rethinking the build vs buy decision—especially for contextual tools.

December 13, 2025 · 10 min read
Agentic AI · Strategy · Software Development · Digital Transformation

If you've been in leadership long enough, you've probably sat through at least one renewal negotiation where the vendor's "standard" product can't quite support the way your company actually works. Maybe it's been customized within an inch of its life. Maybe there are modules being paid for just to get access to one critical feature. Maybe there's a roadmap item that's been "coming soon" for longer than anyone wants to admit.

I've been thinking a lot about these moments lately. Not because there's anything inherently wrong with buying software, but because I'm noticing a pattern in how these conversations are evolving. The friction seems to be building. And I think we might be approaching a point where the fundamental economics of the build vs buy decision start to look different than they have for the past decade.

What I want to explore here is how the cost and effort required to build software are changing in ways that haven't fully worked their way into how most organizations think about these decisions yet. I believe we're heading toward an inflection point, likely starting in 2026, where the default answer changes for a meaningful category of enterprise software.

I don't claim to have all of this figured out. But I do think it's worth thinking through what might be coming, and what questions leaders should be asking themselves now.

A little context: how we got to "buy as default"

The build vs buy debate has been around a lot longer than software. Back in 1937, economist Ronald Coase asked a surprisingly fundamental question: why do firms exist at all? If markets are so efficient, why don't we just contract for everything?

His answer was simple and elegant: markets aren't frictionless. When the transaction costs of using the market (finding suppliers, negotiating contracts, coordinating work, managing risk) get high enough, it becomes cheaper to "make" things internally rather than buy them.

Software inherited this same framework, but with its own unique characteristics. In the early days of corporate computing, building was often the only real option. Software was tightly coupled to hardware, highly specialized, and the expertise usually lived inside companies.

As the industry matured, packaged enterprise software changed the equation. ERP suites, CRMs, HR systems all promised standardization, support, upgrades, and clear roadmaps. For many organizations, adopting "best practices" started to feel more attractive than reinventing everything from scratch.

Then SaaS made buying even more appealing. Subscription pricing, faster implementation, infrastructure as someone else's problem. If you're trying to move quickly and focus on your core business, buying often feels like the obvious choice.

And for most of the last fifteen years or so, that logic has held pretty well. Buying has been a rational default for a lot of situations.

Why buying has made sense (and still does)

I think it's worth acknowledging that the reasons leaders have gravitated toward "buy" are actually pretty sound.

Time-to-value is a big one. Buying typically gets you to production faster, especially when requirements are well understood and the problem is common across industries. Why spend months building something that already exists?

There's also the reality of economies of scale. Vendors spread their development and maintenance costs across thousands of customers. As a single buyer, it's hard to compete with that kind of efficiency, at least under traditional assumptions about what it costs to build and maintain software.

Then there's the operational side. Enterprise software isn't just about features. It's uptime, incident response, compliance artifacts, security patching, vendor risk management, SLAs. When you buy, you're buying an entire operational posture. Someone else is carrying the pager.

Talent is another consideration. Most organizations don't want to be in the business of recruiting and retaining specialized engineering talent just to build internal tools. Even when the business case might be strong, the practical reality of hiring can kill a "build" decision before it starts.

And there's organizational momentum too. Procurement and finance have well-established processes for buying from third parties. Internal build initiatives can run into budgeting friction, unclear ownership, and the challenge of proving ROI before anything has been built. Buying often feels like the safer path.

These are all still valid considerations. What I'm suggesting is that some of the underlying assumptions, particularly around what it costs and how long it takes to build, are starting to shift in meaningful ways.

What's different now: the move from assistance to execution

We've had AI in software development for a while now. But I think most leaders still think of it primarily as a productivity boost. Autocomplete for developers, faster documentation, that sort of thing.

What's emerging now feels qualitatively different.

The distinction I keep coming back to is between tools that assist and tools that execute. A copilot helps a developer write code faster. It suggests, predicts, fills in the obvious stuff. That's valuable. But it still requires a developer to be in the driver's seat, making decisions, steering the work.

What we're starting to see are agentic systems. These are tools that can take a goal, break it into tasks, write code across multiple files, run tests, identify and fix failures, and produce working outputs. They're not just assisting anymore. They're executing significant portions of workflows that used to require continuous human intervention.

Now, let me be clear: these systems aren't perfect. They make mistakes. They need guardrails and human oversight. But what's becoming increasingly evident is that they're changing the economics of software development in some pretty fundamental ways.

In traditional software development, costs are dominated by salaries, timelines are shaped by coordination overhead, and risk comes from defects and long-term maintenance. In a world where agentic tools handle significant implementation work, teams can produce more output with fewer people. Iteration becomes faster because feedback loops compress. And a meaningful chunk of development work starts to look more like commodity compute than scarce specialized labor.

I think this is starting to change the math on build vs buy in ways that most planning processes haven't caught up to yet.

Where this is headed

Here's my core thesis: if the cost and time required to build software continue dropping at the rate we're seeing, the default answer for certain categories of software is going to shift. Starting in 2026, I believe we'll see companies increasingly choose to build software they used to buy, not because they suddenly have infinite engineering resources, but because the economics have fundamentally changed.

This doesn't mean vendors disappear or that every company should suddenly build everything. But I do think there's a category of software where "build" will start making more sense than it has in a long time.

The way I think about it is in terms of "contextual software." These are tools that exist because your specific processes, constraints, and operating model require something tailored. Some software is genuinely commodity. You're probably always going to buy payroll, core accounting, identity providers. Those make sense to outsource.

But then there's all the software in the middle. Workflow tools, process automation, internal portals, decision support systems, reporting dashboards. These are the places where teams commonly say things like: "We can't quite configure it the way we need to." "The workflow is 80% right, but that last 20% creates friction." "We're paying for multiple add-ons just to make this work." "The vendor won't prioritize our feature request."

This is where I think the build option starts looking more attractive. Not for everything, but for more things than we'd typically consider today.

The pattern I expect to see is this: buy the foundational platforms (identity, payments, core systems, data infrastructure), but build more of the workflow and integration layers that sit on top of them. Small, focused applications that connect systems in ways that match how your business actually operates.

The key difference is that building these "edges" becomes faster and cheaper than buying yet another point solution that almost, but not quite, fits your needs.

What this could mean

If this plays out the way I think it will, the implications ripple out in some interesting directions.

For vendors, I expect we'll see real pressure on renewal negotiations and pricing power. If customers can build "good enough" alternatives more easily, that changes the dynamic. We'll probably see more usage-based pricing, more modular offerings, more emphasis on APIs and integration capabilities. Vendors will likely position themselves less as complete solutions and more as platforms you build on top of.

For companies, internal software portfolios are going to grow. When the cost of building drops, the number of things worth building increases substantially. That probably means more internal tools, particularly for automating processes, eliminating manual work, and creating interfaces tailored to specific decision-making workflows.

The challenge won't be building more things. It'll be building the right things, safely and sustainably.

I also think the role of engineering organizations will shift. Less time typing code, more time defining problems clearly, reviewing outputs, building robust test coverage, establishing architectural patterns and guardrails. Engineers who are skilled at this become force multipliers in ways they weren't before.

But there are new risks too. Agentic build introduces challenges around code security, governance, quality control. Organizations that treat this as a free-for-all will create real problems for themselves. The ones that succeed will be those who approach it as a capability that needs discipline and structure.

At a broader level, if software becomes cheaper to produce, budgets will shift from licenses toward internal capabilities. Companies can iterate faster. Products that were previously too expensive to justify building might become viable. Work doesn't disappear, it gets reallocated.

Questions worth asking

If you're thinking about this in your own organization, the answer isn't to immediately start building everything. But there are some questions worth considering.

One that I find helpful: if you looked at your application portfolio and tagged each tool as either "commodity" (always buy), "differentiating" (might build), or "contextual glue" (connecting systems in ways specific to your operations), what would that distribution look like? That last category, the contextual glue, is probably where this shift hits earliest.
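As a quick sketch, the tagging exercise can be as simple as a spreadsheet or a few lines of Python. The tool names and categories below are purely illustrative, not a real inventory or a recommendation for any specific product:

```python
from collections import Counter

# Hypothetical application inventory. Every tool name here is made up
# for illustration; the three buckets come from the exercise above.
portfolio = [
    {"tool": "Payroll SaaS",          "category": "commodity"},
    {"tool": "Core accounting",       "category": "commodity"},
    {"tool": "Identity provider",     "category": "commodity"},
    {"tool": "Pricing engine",        "category": "differentiating"},
    {"tool": "Order-to-invoice sync", "category": "contextual glue"},
    {"tool": "Ops approval portal",   "category": "contextual glue"},
    {"tool": "Reporting dashboard",   "category": "contextual glue"},
]

# Count how many tools fall into each bucket.
distribution = Counter(item["category"] for item in portfolio)

for category in ("commodity", "differentiating", "contextual glue"):
    share = distribution[category] / len(portfolio)
    print(f"{category:>16}: {distribution[category]} tools ({share:.0%})")
```

The point isn't the script; it's that the distribution forces a conversation. If most of your spend sits in "contextual glue," that's the part of the portfolio this shift touches first.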

Another angle: do you have the infrastructure to build safely? Things like standardized repositories, CI/CD pipelines, automated testing, security scanning. If you don't have those foundations, moving faster just means creating technical debt faster. The encouraging thing is that agentic tools can help you build those foundations too, but you need to be intentional about it.

It's also worth thinking about whether you're set up to treat internal tools like products. Just because you can ship something quickly doesn't mean it'll be good or that people will use it. A product owner, clear users and outcomes, an adoption plan, maintenance ownership: these things still matter. Maybe more than ever.

For those in procurement roles, this potentially changes vendor negotiations. If your organization has a credible build option, does that shift your leverage? Can you push harder on data portability, API access, contract flexibility? Your alternatives might be stronger than they used to be.

And maybe the most important question: do your leaders have a shared understanding of what these new tools can and can't do? Not in a buzzword sense, but practically. Where they're reliable, where they struggle, how to validate their outputs, how to govern their use. Without that shared mental model, organizations will struggle to use these capabilities well.

What this is really about

The best build vs buy decisions aren't about picking an ideology and sticking to it. They're about maintaining optionality.

If building becomes meaningfully cheaper and faster, you have more choices. You're not locked into accepting workflows that don't quite fit, paying for functionality you don't need, or waiting indefinitely for vendors to prioritize what matters to you.

The strategic question becomes: what should we build because it creates real differentiation or significantly improves how we operate, and what should we buy because it's commodity infrastructure that we have no business maintaining ourselves?

I think 2026 is when enough leaders will start answering that question differently that the pattern becomes visible. The underlying economics are changing, and that's worth paying attention to.

I'm curious how other leaders are thinking through these questions. Where does vendor software work well for you, and where are you feeling friction? Are you starting to reconsider any of these decisions?

These are the questions I'm wrestling with, and I suspect I'm not alone.


