"Won't touch it. It's made with AI." My case for an informed opinion.

This post doesn't really belong here. As you know, this blog is about agile.

But over the past few weeks I've had an experience that's been on my mind, and the reactions to it are what pushed me to write this post. I built a desktop app: Cylenivo, available for Windows, Mac and Linux. I'd had the idea for a while, because Cylenivo fills a gap in Jira: proper analysis of Cycle Time and Lead Time (and a few other small things...).

The catch: I can't code. What I can do is conceive ideas and turn them into good user experiences. And I have the domain knowledge: for instance, how to calculate a Cycle Time in a way that actually makes sense (which is more complex than you'd think...).

What's also changed in 2026: AI coding has taken another significant step forward. The major vendors' AI-assisted development tools have improved considerably compared to, say, 2024.

What has really changed since 2024

For Cylenivo I worked with Claude Code. I could just as well have used Codex or Gemini. This article isn't about which model is best. What interests me is the state of things in 2026, and that state is different from what it was in 2024 or 2025. I've done a lot of experimenting with AI-assisted development over the years and was often disappointed with the results. That's changed. The quality of the output, the reliability in more complex contexts, the ability to produce coherent code across multiple iterations - all of that has improved substantially.

That's the background, to give you some context.

The feedback that made me write this

Cylenivo has received various reactions. One of them was, roughly: "Won't touch it, it's made with AI." I've thought about that, and I can understand the sentiment to a degree (I'll get to that in the next section). What bothers me is that the most important question gets completely ignored: Does the app solve a problem? Does it do what it was built to do? Is it useful? For some people, that no longer seems to matter once the word "AI" enters the room. And I don't think that's a sensible position. Not because of my app, but because of where that logic leads once you follow it further.

I think this is a disconnect from reality that doesn't hold up under sober examination. The majority of software in use today almost certainly contains elements that were built with AI assistance. And that share is only going to grow. Anyone who wants to consistently reject AI-assisted development will struggle to find software they're willing to use.

And then there's AI slop. That irritates me too: empty LinkedIn posts that were obviously generated by a model and contain nothing of substance, videos where AI-generated voices read out meaningless text, products where AI features were bolted on because it's trendy, not because they add any value. That is AI slop.

But an app that solves a concrete problem is not slop. Regardless of what it was built with. A lot of things are being thrown into the same bucket right now that simply don't belong together.

The real disadvantages that deserve to be taken seriously

Of course I have serious concerns about where all of this is taking us. And there are legitimate objections to AI and AI in software development that I share, particularly because in my job I carry responsibility for developers.

The most obvious objection is the environmental one. The energy consumption of large language models is substantial, and the infrastructure behind them is not carbon-neutral. Anyone who factors that into their decision has good reasons for doing so.

A second objection concerns training data. Models were trained on code and content where the questions of copyright and consent haven't been clearly answered. On top of that, there's the question of what happens to the data you enter during use. These are legitimate concerns that I don't want to argue away.

What concerns me most personally is the question of developer growth. If AI is increasingly writing the code, what happens to the learning process? How does a junior become a senior when the bulk of the cognitive work is handled by a model? Will we eventually see a generation of developers who are good at reviewing code, but not at thinking it through? I don't have an answer to that. The question is entirely valid.

Then there's speed itself as a risk. AI significantly accelerates the production of code. But feedback cycles — the question of whether what you're producing is actually needed — don't move any faster (see also: Work-Feedback Loop). The risk of pure output-driven activity shouldn't be underestimated: we build more, faster, but not necessarily the right things.

And finally, there's the issue of dependency. Anyone who deeply embeds their development processes in AI tools becomes dependent on companies that are free to set their own prices and terms. We've seen this pattern before, for example with Uber. They effectively displaced the traditional taxi market in many US cities. Now that the alternatives are gone, prices are rising. The same risk exists with AI infrastructure, once market penetration is high enough.

What AI makes possible in development

All that said, there are tangible advantages that belong in this conversation. The most important one for me: someone who can't code can still bring an idea to life. That's not a small thing. Cylenivo, by my rough estimate, represents the equivalent of several hundred hours of development work. As a hobby project that I offer as a free download, that would simply have been unaffordable through an agency or a freelance developer. The idea would have stayed in my head, and a product that can help teams would never have existed.

AI also makes development faster (by now across the entire product lifecycle) and in certain contexts more cost-effective than purely human development. That's an economic fact, and a threatening one. But it remains a fact, and in a market where you're competing globally, it's not economically rational to ignore that potential - unless you're comfortable watching a business die. That sounds dramatic, but ultimately, that's where it leads.

Myths that no longer hold true in 2026

There are several persistent beliefs about AI-assisted development that may have been accurate in 2024 but no longer reflect reality. The first myth is that it only works with a perfect one-shot prompt, that you need to know exactly what you want from the start, and that any failure means starting over. That's no longer the case. Iterative development, corrections, follow-up questions, discarding partial approaches and restarting in a specific area: all of that is a completely normal workflow today.

The second myth is that AI gets hopelessly lost in complex projects. That also no longer matches my experience, provided you create the right conditions.

The third myth, which I still encounter from time to time: more complex ideas simply can't be implemented. Cylenivo is not a simple product. It includes calculations, database logic, a GUI for three operating systems, and export functionality. And yet it was entirely possible to build.

Then there's the fourth myth of poor code quality and maintainability. In reality, that depends (as it does with human developers) on how you approach the work. Anyone who takes AI-generated code without review may well end up with poor code. Anyone who works in a structured way, reviews regularly, and explicitly checks for architecture issues, security vulnerabilities and potential bugs gets something different. The source code of Cylenivo is publicly available on GitHub. Take a look and give feedback - it's explicitly welcome and I'd genuinely appreciate it.

What you actually need to make this work

In my experience, the demands placed on the person developing with AI are consistently underestimated. The idea that you can simply type in an idea and receive a finished app is still widespread. Often, funnily enough, among people who reject AI entirely. What you actually need can be described across four areas.

First, you need a good harness for the AI: well-structured, detailed information about the project, the tech stack, the architecture. Tools like Context7 help bring up-to-date documentation into context. Meta-information (such as the CLAUDE.md file) needs to be kept continuously up to date. And after every significant change, an explicit review covering architecture, security and bugs is essential, not as an optional step, but as a fixed part of the process. (Again: just like you would with human developers.)
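As an illustration, a meta-information file along those lines might be structured like this. The project, stack and conventions below are a hypothetical sketch, not Cylenivo's actual CLAUDE.md:

```markdown
# Project: example desktop app (hypothetical sketch)

## Tech stack
- Electron + TypeScript, SQLite for local storage

## Architecture
- `src/core/`: domain logic (metric calculations), no UI imports
- `src/ui/`: rendering only, talks to core through a typed API

## Conventions
- Every change to `src/core/` needs unit tests before review
- After each significant change: explicit review for architecture
  drift, security issues, and potential bugs
```

The point is not the specific sections, but that the model gets the same onboarding a human developer would.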

Second, you need domain experience. AI cannot supply a concept or a product vision. Design in the sense of user flows and interaction logic has to come from a human and requires experience. Calculations need to be understood and specified upfront. An example from Cylenivo: what is Cycle Time, really? From which moment to which moment is it measured? Is it measured only once, or also when a ticket crosses the Cycle Time boundaries multiple times? That's a domain decision, not a technical one, and it has to be made by a human.
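To make that ambiguity concrete, here is a minimal Python sketch of two possible measurement policies. The status log, the set of "active" statuses and both functions are illustrative assumptions, not Cylenivo's actual implementation:

```python
from datetime import datetime

# Hypothetical status-change log for one ticket: (timestamp, new status)
transitions = [
    (datetime(2026, 1, 5, 9, 0), "In Progress"),
    (datetime(2026, 1, 6, 17, 0), "Blocked"),      # leaves the active zone
    (datetime(2026, 1, 8, 9, 0), "In Progress"),   # crosses the boundary again
    (datetime(2026, 1, 9, 12, 0), "Done"),
]

ACTIVE = {"In Progress"}  # statuses that count toward cycle time

def cycle_time_total(transitions):
    """Policy A: sum every interval spent in an active status,
    so repeated boundary crossings are all counted."""
    total_seconds = 0.0
    entered = None
    for ts, status in transitions:
        if status in ACTIVE and entered is None:
            entered = ts
        elif status not in ACTIVE and entered is not None:
            total_seconds += (ts - entered).total_seconds()
            entered = None
    return total_seconds / 3600  # hours

def cycle_time_first_to_last(transitions):
    """Policy B: wall-clock span from the first entry into an active
    status to the final exit, blocked time included."""
    entries = [ts for ts, s in transitions if s in ACTIVE]
    exits = [ts for ts, s in transitions if s not in ACTIVE]
    return (exits[-1] - entries[0]).total_seconds() / 3600
```

For the same ticket, Policy A yields 59 hours while Policy B yields 99 hours. Which one is "the" Cycle Time is exactly the kind of decision the model cannot make for you.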

Third, prompting experience matters considerably. I've been doing this since late 2022, and I'm still learning. The ability to formulate a requirement in a way that the model interprets correctly is not trivial, and it doesn't come without practice.

Fourth: the surrounding toolset is just as important as in traditional development. Tests (properly thought-through tests, not just any tests) are non-negotiable. A solid development environment, clear test scenarios, a structured workflow and small stories. At its core, that's analogous to what human developers need too.
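What "properly thought-through tests" means in practice: tests that pin down the edge cases the specification actually cares about, not just the happy path. A small Python illustration, where `working_days` is a hypothetical helper invented for this example:

```python
from datetime import date

def working_days(start, end):
    """Count Mon-Fri days in the inclusive range [start, end]."""
    days = (end - start).days + 1
    return sum(
        1 for i in range(days)
        if date.fromordinal(start.toordinal() + i).weekday() < 5
    )

def test_single_weekday():
    # Happy path: one Monday counts as one working day
    assert working_days(date(2026, 1, 5), date(2026, 1, 5)) == 1

def test_range_spanning_weekend():
    # Fri Jan 9 through Mon Jan 12: the weekend must not be counted
    assert working_days(date(2026, 1, 9), date(2026, 1, 12)) == 2

def test_weekend_only_range():
    # Boundary case: a range containing no working days at all
    assert working_days(date(2026, 1, 10), date(2026, 1, 11)) == 0
```

The first test alone would pass for plenty of broken implementations; it's the second and third that actually encode the requirement.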

Conclusion

I can understand why people reject AI in software development. The objections (environmental impact, data ethics, the future of the developer profession, dependency on large corporations) are real and deserve serious engagement. What I can't understand is blanket rejection that doesn't actually build on those arguments, but settles for "AI" as a label.

My recommendation is straightforward: look at how AI-assisted development actually works today before you form a fixed opinion. Especially if you're skeptical, and especially if you're a developer. Not because you have to like AI, but because an informed opinion is better than a reflexive one. That's true for this topic just as it is for any other.

Updated:

Mastodon discussions