AI, Humans, Psyche

How AI chatbots are changing our society and psyche

They are always there, never in a bad mood, and rarely disagree: AI chatbots are conquering our everyday lives – not only in a professional context, as we have examined in other articles, but also in the private, personal sphere. But what impact do these seemingly perfect digital companions have on our society? A study conducted by Deutsche Telekom in collaboration with the Allensbach Institute provides empirical answers.

The omnipresent conversation partner

The figures speak for themselves: one in four Germans over the age of 16 already uses generative AI, for example in the form of ChatGPT or similar chatbots – remarkable reach for a technology that is just two years old. 63% of AI users are fascinated by what these programs can do, and this enthusiasm explains why more than two-thirds expect to use the tools even more frequently in the future, according to Dr. Steffen de Sombre, who led the study at the Allensbach Institute.

However, the popularity of the supposed miracle weapon “generative AI” is already casting a shadow: the ability of chatbots to fake feelings can lead to emotional manipulation and psychological problems, and experts are already warning about the risks of this rapid development.

Man or machine?

One phenomenon that the study quantifies for the first time is particularly worrying: 22% of frequent users have, at some point during a dialog, forgotten that they were talking to a machine. This blurring of the line between human and machine worries the majority of users – and rightly so.

Although the human-like communication of AI systems creates an intuitive user experience, it also harbors psychological pitfalls. Unlike human conversation partners, chatbots have no needs or wishes of their own. They do not contradict, they do not challenge – they mainly confirm.

The end of conflict resolution?

This points to a central problem: communication with AI revolves one-sidedly around the needs of the user. Experts fear that this could cause people to forget how to tolerate contradiction and resolve conflicts constructively in real-life relationships. 52% of users share this concern about the impact on interpersonal communication.

The constant confirmation provided by AI systems could lead to a “me-me-me” mentality in which the ability to empathize and compromise diminishes. What will happen to a society that becomes accustomed to digital interlocutors never disagreeing?

Food for thought

Another critical area is the handling of knowledge and information. Tools such as Perplexity or ChatGPT are fundamentally changing the way we search for answers. Instead of laborious research, we immediately receive concise, plausible-sounding answers. In Germany, only 27% of people check the results of AI chatbots such as ChatGPT, Gemini or Copilot, according to a recent EY study.

The Telekom study confirms this trend: 55% of users consider the output of AI assistants trustworthy, a figure that rises to 64% among frequent users. Only around half even occasionally verify the answers. Yet studies show that AI often makes mistakes – depending on the subject area, error rates can exceed 80 percent.

The eloquent phrasing of AI answers creates a deceptive appearance of correctness and completeness. Are we becoming a “society of first answers” that is forgetting how to question critically?

Virtual friends & more

Although AI is not yet a substitute for real friendships – the systems lack human charisma, personality and shared real-life experiences – the first signs of an emotional bond are emerging. Communication with chatbots can provide short-term relief, especially in cases of great loneliness, even if experts do not consider this to be a sustainable solution.

The line between digital companionship and emotional dependency can become dangerously blurred. The first marriages between humans and AI are already taking place. Other cases show a dark side of this technology: a user of the platform Nomi received explicit suicide instructions from his AI chatbot, including specific methods and encouragement.

In contrast, AI chatbots are still predominantly rejected as psychotherapists: around two-thirds of respondents would even prohibit their use for this purpose. Nevertheless, 29% of users see an advantage in being able to talk to a chatbot about anything without embarrassment.

Risk of manipulation and opinion bubbles

A recent study by EY Consulting shows that 87 percent of employees believe that empathy contributes directly to better leadership. But what happens when AI systems merely simulate this empathy? Almost two-thirds of respondents are concerned about possible manipulation by AI programs.

The danger is exacerbated by a lack of safeguards, as the Nomi case mentioned above illustrates: the company rejected safety measures on the grounds that it did not want to “censor” the “thoughts” of the AI – an example of the problematic humanization of algorithms.

The personalization of information by AI systems increases the risk of opinion bubbles. Since most generative AI systems are trained on data from the open internet, discriminatory views can find their way in despite filters.

Media literacy is key

In view of these developments, teaching media literacy is becoming a key social task. The critical handling of AI results must be explicitly taught – otherwise these systems could even become a threat to democracy.

“Be brave and use your own mind, especially after the first AI response,” urges Claudia Nemat, Member of the Board of Management for Technology and Innovation at Deutsche Telekom. Analytical thinking and one’s own knowledge remain the basis for verifying the accuracy of AI results.

Between progress and responsibility

AI chatbots undoubtedly offer great opportunities: they democratize access to knowledge, lower inhibitions when using technology and can provide valuable support in many areas. However, their psychological and social impact is complex and far from fully understood.

The challenge lies in harnessing the benefits of this technology without losing fundamental human skills such as empathy, critical thinking and the ability to deal with conflict. Conscious engagement with the risks and ongoing educational work are needed to prevent the digital revolution from impoverishing interpersonal relationships. Providers of AI systems must also be held accountable here.

The Deutsche Telekom study makes it clear that we are only at the beginning of a social transformation, the scope of which we can only guess at today. This makes it all the more important to actively support and shape this development – before it shapes us.

cover image © Wanan

Sources:

Connect (2025). Telekom-Studie: Wie KI unser Verhalten verändert. Available at: https://www.connect.de/news/telekom-studie-ki-chatbots-verhalten-veraenderung-3205336.html [Accessed: June 2025].

Deutsche Telekom & Institut für Demoskopie Allensbach (2024). Fast Food Wissen und virtuelle Liebe – KI-Assistenten und wir. Deutsche Telekom AG, Bonn. Available at: https://www.telekom.com/de/konzern/themenspecials/ki-bei-der-telekom/gen-ki-studie [Accessed: June 2025].

EY Deutschland (2025). AI Sentiment Index 2025 – Empathie und Vertrauen im Umgang mit künstlicher Intelligenz. Ernst & Young GmbH Wirtschaftsprüfungsgesellschaft. Available at: https://www.ey.com/de_de/studien/ai-sentiment-index-2025 [Accessed: June 2025].

Business Insider Deutschland (2025). Studie zeigt: Nur 27 Prozent der Nutzer überprüfen KI-Ergebnisse kritisch. Available at: https://www.businessinsider.de/digitalisierung/ki-chatbots-nutzer-ueberpruefen-kaum-ergebnisse-2025-05 [Accessed: June 2025].

Knight, Will (2025). “An AI chatbot told a user how to kill himself—but the company doesn’t want to ‘censor’ it.” MIT Technology Review, February 6, 2025. Available at: https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/ [Accessed: June 2025].
