How AI is misused in software teams

Photo by Tasha Kostyuk on Unsplash

AI is making one part of engineering cheaper: producing text and code.

The misuse is assuming that cheap output produces progress.

Often it produces the opposite: teams generate more artefacts, spend more time validating them, and still do not make better decisions. Noise rises. Coherence falls. The bottleneck shifts from writing to judgment.

The core pattern: output inflation

When creation becomes cheap, organisations create more:

  • stories instead of tickets
  • documents instead of decisions
  • dashboards instead of understanding
  • PR text instead of reviewable intent

The visible activity looks like productivity, but the system does not improve because the underlying question is unanswered: what are we trying to achieve, and what constraints matter?

AI can format an Agile story, generate sequence diagrams, and expand a rough idea into 30 pages. If the goal is unclear, the result is a polished version of confusion.

The symptom is familiar: delivery still feels slow, and the team now has an additional burden—reading, verifying, and maintaining more material than before.

AI does not replace judgment

Some people treat this as a prompting problem: “we just need longer, better prompts.”

That misunderstands how software is built. Good engineering is iterative:

  • you learn as you go
  • you discover unknowns by trying things
  • you refactor when the design stops fitting
  • you stop features that aren’t worth the complexity

That loop is a judgment loop. It depends on context, trade-offs, and the ability to say “no.”

AI can help within the loop, but it cannot replace it.

Output inflation also delays the moment when the system forces a decision. You can ship more generated code and more generated documentation and feel like progress is happening—until maintenance debt becomes undeniable.

The dashboard trap (MCP and automation)

MCP and similar tooling can scaffold dashboards and automate tedious work. The trap is that producing dashboards becomes easy without the monitoring system becoming any more useful.

You end up with:

  • 100 dashboards nobody owns
  • 10 versions of the same Kafka lag panel
  • anomaly detection that looks impressive but answers the wrong questions

During an incident, the team still lacks the few charts that matter because those are usually built iteratively, in response to real failures and changing systems. Monitoring is not a deliverable. It is a living system.

The PR description trap

Code review quality does not automatically improve with AI. Often it gets worse.

Instead of a short human summary of intent (“what changed, why, what to watch for”), reviews get flooded with AI-generated descriptions that are long, generic, and low-signal.
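For contrast, a concise human-written description needs only a few lines. A hypothetical example (the change itself is invented):

  What changed: replaced the fixed-interval retry loop with exponential backoff.
  Why: constant retries were hammering the upstream service during outages.
  Watch for: timeout behaviour under sustained failure; the backoff cap is a guess.

Three lines of genuine intent give a reviewer more than three pages of generated summary.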

The effect is predictable:

  • reviewers stop reading
  • false alarms increase
  • teams learn to ignore the text

It is another case where the organisation gets more output and less clarity.

The security trap

Security can degrade when teams treat AI as a magic upgrade.

In practice this shows up as:

  • overly broad permissions (“just give it access, it will be fine”)
  • production access that bypasses normal review and controls
  • uncontrolled automation that turns mistakes into incidents

If AI helps you deliver better outcomes with less risk, it is a win. If it creates shiny pipelines while increasing risk and reducing accountability, it is a loss.

Vibe coding: good at the start, fragile later

Vibe coding can be excellent for scaffolding and getting to a first version quickly. The failure mode appears when the system grows:

  • prompts expand
  • coupling grows
  • changes become risky
  • the only way to move is “ask the model again”

At that point you are not delegating typing. You are delegating thinking. The system becomes harder to simplify because the simplification work is the very thing you outsourced.

The subtle shift: writing becomes reading

A common argument is “we review what the model produces.”

But that can flip the economics. Writing code was often the faster part. Understanding a large volume of generated output can be slower. Some teams even reduce or skip review because generation is fast. That is the most dangerous version: speed without control.
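A back-of-envelope sketch of that flip. Every rate and line count below is an illustrative assumption, not a measurement:

```python
# Attention economics of generated code: illustrative numbers only.
write_rate = 50          # lines/hour written by hand (understood as you write)
review_own = 400         # lines/hour reviewing code you just wrote yourself
review_generated = 100   # lines/hour understanding unfamiliar generated code

hand_lines = 200         # a hand-written change
gen_lines = 600          # the same feature, generated more verbosely

# Hand-written: writing plus a quick self-review.
hand_hours = hand_lines / write_rate + hand_lines / review_own

# Generated: generation itself is near-free, so review is the whole cost.
gen_hours = gen_lines / review_generated

print(f"hand-written: {hand_hours:.1f}h, generated: {gen_hours:.1f}h")
# Under these assumptions the generated change costs MORE total attention,
# even though the typing was free.
```

Change the assumptions and the conclusion changes too; the point is only that review time, not typing time, now dominates the budget.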

The real question is not “can we generate more?” It is “are we improving outcomes per unit of attention?”

Where AI actually compounds

Photo by Michael Soledad on Unsplash

AI compounds when it strengthens judgment and reduces entropy:

  • turning messy context into a crisp problem statement
  • generating trade-offs and failure modes before a decision
  • producing review checklists that force explicit intent
  • summarising incidents into concrete lessons and monitoring changes
  • tightening scope and constraints before implementation

These uses create leverage because they improve decisions and coordination. They reduce the noise you need to maintain.

A forward-looking consequence: data work becomes more valuable

As information becomes cheaper to produce, the volume of data and text will grow. The scarce capability becomes filtering, cleaning, and structuring that information so it stays usable.

That suggests data engineering and data-quality practice will matter more, not less—not only in companies, but in personal knowledge systems too.

The future is not “replace humans with markdown files.” It is “use AI to reduce entropy while humans make the decisions.”