Ilko von Bieberstein: “Generative AI in customer service means innovation and responsibility”

Some view AI as the greatest technological opportunity of our lifetime, while others prefer to avoid ChatGPT and similar tools altogether. However, what cannot be denied is that artificial intelligence is incredibly capable and has already fundamentally transformed daily operations across many industries.

In this interview, Ilko von Bieberstein, Head of Customer Service for Private and Business Customers at GASAG Group, explains the advantages of integrating AI applications, the ethical concerns that arise, and how to prevent AI systems from acting with bias.

What are the biggest advantages of integrating GPTs into service processes? What are the biggest challenges?

Ilko von Bieberstein: The integration of Generative Pretrained Transformers (GPTs) into service processes offers many advantages for companies, service employees, and customers. GPTs can generate high-quality text and support a wide range of tasks that previously could only be handled by service staff. They are available around the clock and can respond quickly and efficiently to a large number of inquiries simultaneously. GPT models are very flexible and are used to identify customer issues, draft responses, or provide specific knowledge. This allows them to relieve customer service employees of routine tasks, enabling them to focus more on their customers and their needs. This can lead to higher service quality and improved customer experiences.

However, there are also challenges in integrating GPTs into service processes. Ensuring data protection and security when incorporating GPTs into existing IT infrastructures is particularly important. Additionally, the use of GPTs requires careful monitoring and control of the results to detect and prevent inappropriate or undesirable responses. A specific challenge lies in GPTs’ tendency to “hallucinate,” meaning they sometimes “invent” information or details that were not present in the provided data. This can lead to inaccurate or even false statements. Although various approaches exist to minimize this problem, it has not yet been fully resolved.

What ethical concerns exist regarding data protection with AI applications?

Ilko von Bieberstein: As the availability of free AI applications and apps on the internet increases, so do the risks associated with their use, particularly in terms of data protection. Many free applications collect and analyze large amounts of personal data, and users are often unaware of exactly what information is being collected and how it is being used. Such personal data can be used in undesirable ways, such as for surveillance, creating behavioral profiles, or discriminating against users regarding products and prices. Data protection regulations and terms of use are often complex and difficult for users to understand. Additionally, the long-term effects of data collection and use by AI applications on the internet are often hard to predict.

It is essential to clearly explain to users what data is being collected and how it is being used. Users should have the ability to consciously give or withdraw their consent to the processing of their data. Developers, companies, and legislators must work together to ensure data protection and user privacy. Data minimization and anonymization are important approaches in this regard.

How can we prevent AI systems from being biased (e.g., in image generation), and what responsibility do companies have when using AI-based applications and systems?

Ilko von Bieberstein: A key aspect in avoiding bias in AI systems is using balanced and representative data to train these systems. AI systems learn from the data they are provided—if this data is unbalanced or biased, this will be reflected in the systems themselves. This applies to both text and image generators.

Many major AI companies have now publicly committed to reducing bias in their models and are actively working to improve their systems. User feedback plays an important role in identifying and correcting undesirable patterns in generated responses.

Companies that use AI-based applications and systems are responsible for the output of their AI. They must ensure data protection and security and always be aware of the social impact of their products. This includes the obligation to continuously monitor and improve their systems. Raising employee awareness of the ethical and legal aspects of AI and providing training are crucial factors. Organizations should integrate ethical principles such as transparency, impartiality, objectivity, non-discrimination, and privacy protection into their AI applications.

Cover: © GASAG AG
