Nymeriq
AI Is Everywhere. That Does Not Mean the Rush Makes Sense.

Note: AI was used in the making of this blog post!

Over the last few months, I have had a lot of conversations with current and former colleagues about AI. Like everyone else, we keep coming back to it because it is impossible not to. It is in strategy decks, product meetings, hiring plans, company announcements, investor messaging, and everyday team discussions. It is the topic of the moment.

But the more I talk about it, the more one thing keeps bothering me: in many companies, the strongest push for AI is not coming from developers. It is coming from leadership. And to me, that feels backwards. Usually, real technology shifts enter companies from the edges first. Curious developers try something new, technical people experiment, early adopters find practical use cases, and only then does the rest of the organization catch up. That is how these things usually work. The people closest to the work spot the value first, and the business follows later. With AI, especially in larger companies, it often feels like the opposite is happening.

Why Executives Love AI More Than Engineers Do

That is where my skepticism starts. Because large companies are not usually built to fall in love with risk. If anything, they are built to contain it. They spend huge amounts of money on governance, compliance, audits, security controls, and internal oversight. Entire departments exist to reduce uncertainty and make sure risk stays within acceptable limits. If you are a senior executive in a stable business, your job is normally not to behave like an early-stage founder. Your job is to move the company forward in the safest and most predictable way possible.

So why does AI get treated differently? Why are so many executives suddenly eager to push a technology that clearly introduces inconsistency, uncertainty, and often lower-quality output?

For startups, the answer is simple, and that is why I do not find their enthusiasm strange at all. If you are a tiny team, speed matters more than polish. You are trying to get something working, get in front of customers, test demand, and survive long enough to improve later. In that world, having no product is a much bigger problem than having a rough one. So if AI helps you prototype faster, generate code faster, or explore ideas more cheaply, then of course it is useful.

For medium-sized companies, I can also see the logic. They may not have huge resources, and AI can help them research faster, build proofs of concept faster, and test small pivots without overcommitting. That can be valuable. Even there, though, there is a catch. When it becomes too easy to build and test new ideas, it can also become too easy to lose focus. Suddenly every path looks cheap enough to try, and the company starts moving in ten directions instead of one. Lower friction is not always a gift. Sometimes it just makes distraction more efficient. Still, the logic is understandable.

Where it stops making sense to me is with large companies. That is where the whole thing starts to feel much less like strategy and much more like hype wrapped in management language. I think part of the reason executives love AI so much is that they are often very far removed from hands-on work. Their world is made up of abstraction: strategy, alignment, metrics, summaries, priorities, messaging. So when they sit down, type a few sentences into a machine, and get back something polished, coherent, and apparently useful, it feels revolutionary. And at first glance, it is. But that first glance is doing a lot of work.

The Productivity Illusion Behind the Hype

The argument for AI in large companies almost always comes back to productivity. Payroll is expensive. Teams are expensive. Leaders want more output from the same number of people. Fine. That part is understandable. But this is also where I think many companies are repeating one of the oldest management mistakes in the book: confusing visible output with real value.

Years ago, bad managers loved meaningless engineering metrics. Number of lines of code. Number of commits. Number of tickets closed. Developers always knew these were nonsense metrics, because they rewarded activity, not impact. Writing more code has never automatically meant solving more problems. In many cases, it means the opposite. Good engineering is often about writing less, removing complexity, and solving the right problem cleanly. Now AI arrives, and suddenly that same misunderstanding is back again in a shinier form. More generated code. More drafts. More apparent movement. More visible productivity. Everyone sees a bigger pile and assumes more value has been created.

But that is not how value works.

In Hungary, especially in the countryside, there is a funny but very real way people judge restaurants. When you ask how the food was, they do not always say it was delicious. They say, “The portions are huge!” As if portion size itself proves quality. As if a big plate automatically means a good meal.

That mindset is shortsighted with food, and it is just as shortsighted with software. A smaller amount of high-quality work can be worth vastly more than a huge amount of mediocre output. We all know this in theory. But with AI, many executives seem to forget it instantly. And that is exactly why the whole thing feels so contradictory to me. AI in software development is, in many cases, a shortcut tool. Sometimes shortcuts help. Sometimes they save time. Sometimes they are perfectly reasonable. But shortcuts also introduce risk, and risk is supposedly the thing large companies hate most. Yet here they are, pushing it aggressively.

Why? I think part of the answer is that the risk does not show up first. The output does. At the beginning, AI looks fantastic. Code appears quickly. Documents come back polished. Features seem to move faster. Progress becomes easy to point at. It is perfect for dashboards and executive summaries. The problem is that the cost often shows up later, buried in quality issues, ownership issues, maintenance issues, and debugging pain. And by then the productivity story has already been sold.

Why Trust, Ownership, and Quality Still Matter Most

To me, one of the biggest problems here is not even the technology itself. It is what the technology starts doing to people’s relationship with their own work.

One thing I have heard more than once from developers I trust is that heavy AI use can slowly weaken ownership. The code gets worse, but more importantly, the sense of responsibility gets worse too. People stop reviewing as carefully. They stop thinking as deeply. And when something breaks, the excuse is already sitting there, waiting: the agent wrote it.

That is a real cultural problem.

I am not saying all AI-generated code is bad. That would be lazy and obviously untrue. Some of it is useful. Some of it is decent. Some of it is genuinely helpful. But a lot of it is worse than what a competent mid-level developer would have produced alone, and it often fails in very familiar ways. It adds noise. It overcomplicates. It touches things it did not need to touch. It introduces weird side effects. It solves the local problem while quietly damaging the wider system.

That is why it becomes such a pain later. Not just because the code is ugly, but because it was generated from partial context rather than genuine understanding. And real software is never just the small task in front of you. It is the hidden assumptions, the old constraints, the architecture scars, the business logic nobody documented properly, the silent expectation that new work should not break existing work. A good developer carries a lot of that naturally. Not perfectly, but naturally. An LLM does not.

It sees only what it is given, and what it is not given simply stops existing for it. And even if you try to provide more context, there is still a practical limit. Real systems are messy. Real organizations are messy. Real requirements are layered, historical, and full of unwritten rules. That is exactly why engineering is hard in the first place.

That is also why I think we need to be more honest about what LLMs are actually good at. They are incredible tools when used in the right places. Their breadth of knowledge is genuinely impressive. They are great for learning, summarizing, brainstorming, research support, reframing questions, and helping people get unstuck. In that sense, nearly everyone should probably be using them in some way. But breadth is not depth.

Expertise is not just having access to lots of information. Expertise is knowing what matters, what is noise, what is fragile, what is dangerous, and what tradeoff is acceptable. It is a trained filter, not just a bigger database. Humans are still much better at that than people currently want to admit.

So when leaders get excited and start saying things like “we will need fewer developers now,” I think that says more about how they view software work than about what the tools can actually do. It assumes the value comes mainly from producing code, when in reality the value often comes from judgment, restraint, understanding, and ownership.

A former colleague of mine explained this better than I can. He once wrote a very strong description of an architectural issue and a proposed solution. It was so clear and polished that other people asked him if he had used ChatGPT for it. He said no. Then he said something I have not forgotten since:

He does not use ChatGPT when he already knows exactly what he wants to say, because if the thinking is already his, it is faster to write it himself than to wrestle a machine into reproducing his own thoughts.

That, to me, gets right to the center of the issue.

These tools are often most useful when helping us with things we do not fully understand yet. They are much less impressive when they are asked to replace real expertise, real ownership, or real judgment.

That is why I remain skeptical of the current AI rush, especially in big companies. Not because AI has no value. It clearly does. But because too many people seem to be mistaking polished output for trustworthy work, and speed for substance. Those are not the same thing. The real test is not whether AI helps teams produce more. It is whether it helps them produce work they can still understand, defend, maintain, and trust six months later.

That is the part many companies are still too eager to skip. And until they take it seriously, a lot of what gets called AI strategy will look less like progress and more like a faster way to accumulate hidden mess.
