Insights: The Joomla, HTML, CSS & Webdesign Blog

Artificial intelligence is supposed to help us work faster, more efficiently, and more purposefully. Imagine the following scenario: you’ve made an architectural decision for your next project — say, you want to use a specific database structure.

Image by Tenniel and AI

Before you start, you ask an AI:

What are the advantages of approach X in my case?

The AI responds thoroughly, competently, and affirmatively. What happened? You didn’t receive a neutral analysis. You got an answer to a leading question. The AI delivered what you asked for, and you got what you wanted to hear. The question is: “What are the advantages of approach X in my case?”

Look at the word “advantages.” It already contains a presupposition: that approach X has advantages. The question doesn’t challenge this; it assumes it. It doesn’t ask whether X is good — it only asks how good. Concretely, this means the AI receives an instruction, not a request for analysis. And it fulfills it: it looks for advantages, and it finds them. Always. Because almost any approach can be shown to have advantages. What the question doesn’t allow is just as important: it doesn’t ask about disadvantages, or about alternatives, or whether X is even suitable for this specific case.

Imagine asking a car salesman: “What do you like about this car?” He will answer. Thoroughly. Enthusiastically. And he isn’t even lying, because the car really does have these qualities. But he hasn’t told you what he’s leaving out. That’s exactly what happens here.

The neutral version of the same question would be:

Is approach X suitable for my use case, and what are the pros and cons?

Now the AI is no longer in confirmation mode. Now it is in analysis mode. The difference isn’t in the AI’s answer — it’s in the permission you give it.
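If you drive an AI through an API or keep reusable prompt snippets, this distinction can be made explicit in code. A minimal sketch in Python — the function names and the exact wording are my own suggestions, not part of any library:

```python
def leading_prompt(approach: str, use_case: str) -> str:
    """Presupposes that the approach has advantages: confirmation mode."""
    return f"What are the advantages of {approach} for {use_case}?"


def neutral_prompt(approach: str, use_case: str) -> str:
    """Leaves the verdict open: analysis mode."""
    return (
        f"Is {approach} suitable for {use_case}? "
        "List concrete pros and cons, and name at least two alternatives "
        "including their trade-offs."
    )
```

Keeping the neutral form as a template means the permission for criticism is built into every query, instead of depending on how carefully you phrase each question in the moment.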

This is not the AI’s fault. It’s a fault in the question. And behind this question lies one of the best-known cognitive patterns of the human brain: confirmation bias. The word “bias” means something like inclination or tilt: our thinking automatically leans in one direction without our being fully aware of it. Imagine a pair of glasses you always wear. You see through them. You think through them. But you only notice that they tint your view when someone takes them off for you.

AI as an amplifier: what goes in comes out magnified

Cognitive biases are not errors in thinking. They are an evolutionary survival mechanism. Our brain works with shortcuts, patterns, and assumptions because it must react quickly. That is efficient — but not always objective.

Imagine: the Stone Age. A clearing in the forest. Your ancestor hears rustling in the bushes. He doesn’t know if it’s the wind, a rabbit — or a saber-toothed tiger. There’s no time to think. His brain makes an immediate decision: danger. Get out. It wasn’t a rational analysis, but that decision may have saved his life.


Specifically: over millennia, our brains learned it’s better to err on the side of caution than to risk being too slow. Better to flee ten times unnecessarily than be late once. That’s cognitive bias — and it wasn’t a mistake back then. It’s why we’re still here today. AI doesn’t think; it processes your thoughts.

An AI responds quickly, thoroughly, and without hesitation. It expresses no uncertainty, doesn’t contradict by itself, and only questions your assumptions if you explicitly ask it to. This makes it an extraordinarily powerful tool and at the same time an extremely efficient amplifier of your preconceptions. What goes in comes out magnified.

But confirmation bias is only the first cognitive pattern affecting our use of AI. There are others, and some are far more insidious.

Automation Bias: because the machine says so

This potential thinking error is much subtler and, in many ways, more dangerous. Automation bias describes a pattern we all know: we trust automated systems more than human sources. Not because they are better, but because they sound more convincing. When a colleague hesitates before making a statement, you question it. An AI doesn’t hesitate. It doesn’t hedge. It shows no nervousness. It always answers — with the same confidence.

Ever trusted a car navigation system more than your own sense of direction? I have. And I’ve ended up in dead ends despite a gut feeling that the direction was wrong. I trusted the technology more than myself — and paid the price.


This is exactly where we must be careful. We must not scrutinize AI outputs less critically than human sources — perhaps even more critically, because AI always sounds confident, even when it’s wrong. And we all know: AI hallucinates. That’s no reason not to use AI, but it’s a very good reason not to follow it blindly.

Anchoring: the first answer sets the frame

Anchoring describes how strongly initial information shapes our subsequent assessment. What we hear first becomes the anchor; everything else is measured against it, not an objective standard.

An example we all know: the winter sale. A winter coat. “Original price: 299 euros. Now only 149 euros.”


The brain immediately calculates: 150 euros saved. Feels good. Purchase made. But wait — where did the 299 come from? Often it’s deliberately set to create this anchor. We no longer ask whether 149 euros is a fair price. We only compare it to the first price we saw.

This is no coincidence. This cognitive pattern has long been used in marketing and sales strategies because it works reliably for almost all of us. In AI use, it can also be a trap. You ask a question, get a first answer, and all follow-up questions operate within the frame set by that first answer. You refine, correct, or add to it, but you rarely leave the original thinking space.

This is especially relevant for architectural decisions, concept drafts, or strategy questions. If the first AI answer suggests a certain direction, you almost automatically continue thinking along that path. That’s why it’s important to ask key questions multiple times, phrased differently. Start a new conversation without carrying context from the old one. Maybe even the next day.
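With a chat-style API, this fresh start can be made explicit: each rephrased variant of the key question gets its own empty history instead of inheriting the anchored thread. A sketch, assuming the common system/user message format; the example questions and the helper name are placeholders of my own:

```python
def fresh_conversation(system: str, question: str) -> list:
    """Build a message history with no prior context, so an earlier
    answer cannot act as an anchor for this run."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


# The same key question, rephrased three ways; each variant gets its
# own context-free conversation instead of one long anchored thread.
variants = [
    "Which database structure fits a multilingual product catalog?",
    "What would you advise against for storing a multilingual product catalog?",
    "Compare at least three storage approaches for a multilingual product catalog.",
]

conversations = [
    fresh_conversation("You are a critical software architect.", q)
    for q in variants
]
```

If the independent runs converge on the same recommendation, that is a far stronger signal than ten follow-ups inside one anchored conversation.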

Sunk Cost Fallacy: the principle of sunk costs

You’ve been sitting through a boring movie for an hour. You actually want to leave. But you stay because the ticket cost 15 euros. So you sit another 90 minutes you won’t get back to “save” the 15 euros that are already gone. The ticket is no longer an argument; it’s the past. But the brain treats it as an obligation.


The same happens in long AI conversations. You’ve spent half an hour with the AI on a concept. Ten messages back and forth, a detailed draft emerges. Then a fundamental problem appears. But the draft is already advanced, and so much time is invested. So you try to optimize around the problem instead of discarding the concept. Your investment in the conversation seems to justify continuing.

Specifically: discard your concept if necessary — even if it hurts. Don’t be too upset. Without AI, the same process might have taken much longer.

Availability Bias: the shark swims in your head

What comes to mind easily is judged as more important or frequent than it actually is.

Years ago, I remember constant news coverage of shark attacks. For people planning a beach vacation, the risk suddenly felt real and tangible. Yet the actual number of shark attacks that year wasn’t higher. What increased was the reporting.


The brain didn’t assess actual danger, only how often it had heard about it. In AI use, a variant appears: if an AI repeatedly gives similar answers to certain questions, e.g., suggesting specific tools, we consider these tools particularly good. But AI doesn’t suggest frequent answers because they are the best — it does so because they appear frequently in training data. Specifically: when you get repeated recommendations, ask explicitly why they’re suitable for your specific case — and what alternatives exist.

Diffusion of responsibility: “The AI said so”

A decision is on the table. No one wants to take it alone.

A working group is formed. The group creates a document. The document goes to a vote. The vote yields a result. If it goes wrong, who made the decision? The group. The document. The process. Everyone. And therefore: no one.

This is diffusion of responsibility. The more shoulders carry a decision, the easier it feels — and the harder it is to assign responsibility later. With AI, this pattern gets a new variant: “We analyzed it with the AI.” “The AI recommended it.” “According to the AI, this is the best approach.”


When we make a decision ourselves, we carry the responsibility. We know it, and that knowledge produces careful judgment. When we base a decision on an AI recommendation, a subtle shift occurs: responsibility feels shared or outsourced. But whether you create text, images, code, or analysis with AI, you remain responsible. AI is just a tool. Like a hammer. If someone causes damage with a hammer, the hammer is not liable.

Conclusion

  • Ask neutral, not confirming questions.

    Instead of “Why is X the right solution?” ask: “What speaks for X, what against, and what alternatives exist?” The difference seems small — it isn’t.

  • Explicitly request dissent.

    Ask the AI to criticize your approach, point out weaknesses, or take a contrary position. AI systems can do this, but rarely do so on their own.

  • Maintain responsibility.

    Use AI as a sparring partner, not a decision-maker. The recommendation comes from AI. The decision comes from you. This distinction is not semantic — it’s practical. And legal.

  • Stay critical.

    AI is not omniscient. It’s a tool that reacts to what you give it. Good questions produce good answers. Bad questions produce bad answers — just more convincingly formulated. Every smart, precise question you ask not only advances your results but also sharpens your own thinking when using AI.
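The second point above — explicitly requesting dissent — can be as simple as a standing follow-up you keep on hand and append to any draft. One possible wording, sketched in Python; the prompt text is my own suggestion, not an official prompt from any vendor:

```python
# A reusable "devil's advocate" follow-up to append after any AI draft.
# The wording is a suggestion; adapt it to your domain and tone.
CRITIQUE_PROMPT = (
    "Now take the opposite position: argue against the approach you just "
    "proposed. Name its three biggest weaknesses, the scenarios in which "
    "it fails, and one alternative you would prefer instead."
)


def critique_request(draft: str) -> str:
    """Attach the standing critique instruction to a concrete draft or question."""
    return f"{draft.strip()}\n\n{CRITIQUE_PROMPT}"
```

Because the instruction is fixed, the request for dissent no longer depends on your remembering to ask for it while you are already invested in the draft.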

One final thought — one we often forget when the result on the screen seems effortless: AI consumes enormous amounts of electricity. Every query, every conversation, every generated image costs energy — far more than most realize. That’s no reason not to use AI, but it is a reason to use it consciously. Ask the questions that truly matter, not all possible questions.

Do you have questions, want to explore a topic in depth, or are you looking for training on AI, Joomla, or digital workflows? I look forward to your message.