AI Evolution

AI in the company: From the computing brain to the power of the future

In recent years, artificial intelligence (AI) has developed from a research field for visionaries into one of the key technologies for companies. Whether in automated customer interaction, data-driven decision-making or supply chain optimisation, AI is changing not just individual processes but entire business models. Yet the triumph of AI was by no means a foregone conclusion. So where did the history of AI begin, what were the setbacks, and why is AI so crucial for companies today?

This article traces the history of AI, from the first scientific theories and early breakthroughs to the technologies that shape everyday business life today. Because one thing is certain: companies that understand the AI transformation and apply it strategically will be at the forefront in the long term.

AI as a vision and experiment

The idea of artificial intelligence did not begin with computers. However, it was not until the 1950s that the first concrete concepts were developed. The decisive moment came in 1956 with the legendary Dartmouth Conference, at which the computer scientist John McCarthy, often referred to as the ‘father of AI’, coined the term artificial intelligence.

Early AI programs such as ELIZA (a chatbot that simulated simple conversations) and SHRDLU (a natural language processing system) showed that machines could process human language to a limited extent. However, these first steps were not enough to generate economic benefits for companies.

The 1970s and 1980s were characterised by the so-called AI winters – phases in which inflated expectations of the technology were not fulfilled and investments failed to materialise. While researchers continued to work on the basics, AI remained a theoretical concept for companies for the time being – too expensive, too inefficient and not scalable enough for real business operations.

Looking back, Christopher Keibel, AI Engineer at the coeo Group, explains why AI remained unusable for many companies during this phase:

“Machine learning in the 1970s and 1980s was limited by a lack of computing power and data, which led to the AI winter. Systems such as ELIZA could simulate simple conversations, but were too limited for complex tasks. Companies therefore relied on expert systems, which proved particularly successful in clearly defined areas such as medical diagnosis or production planning. However, these systems were heavily dependent on manual programming and were not very flexible when faced with new challenges.”

The rebirth of AI

The rebirth of AI began in the 1990s with the introduction of machine learning. The big difference from the past: instead of programming AI with rigid rules, it was now trained on data so that it could learn independently.

At the same time, the rise of the internet led to an explosion of data. Companies suddenly had access to huge amounts of customer, market and transaction data – the perfect basis for self-learning systems.
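The contrast between rigid rules and learning from data can be sketched in a few lines of Python. This is a deliberately minimal illustration, not a production technique: the transaction amounts, labels and threshold search are invented for demonstration only.

```python
# A hand-coded rule versus a "model" that learns its decision
# boundary from data. All data here is invented for illustration.

def rule_based_flag(amount):
    # Expert-system style: a fixed, manually chosen rule.
    return amount > 1000

def learn_threshold(amounts, labels):
    # Machine learning in miniature: pick the cut-off that best
    # separates the labelled examples, instead of hard-coding it.
    best, best_acc = None, 0.0
    for t in sorted(set(amounts)):
        acc = sum((a > t) == y for a, y in zip(amounts, labels)) / len(labels)
        if acc > best_acc:
            best, best_acc = t, acc
    return best

# Toy transaction data: amounts with fraud labels.
amounts = [120, 250, 300, 2200, 4800, 9500]
labels = [False, False, False, True, True, True]

threshold = learn_threshold(amounts, labels)  # 300 for this toy data
```

With more or different data, the learned threshold shifts automatically, whereas the hand-coded rule would have to be rewritten by hand. That difference, scaled up to millions of examples and far richer models, is what made the data explosion of the internet era so decisive.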

Companies began to use AI for real business processes:

  • Industrial companies used machine learning for predictive maintenance.
  • Banks used AI for fraud detection and risk assessment.
  • E-commerce giants such as Amazon perfected AI-supported product recommendations.

But with the power of AI came responsibility. Companies were suddenly faced with new challenges: data protection, algorithmic fairness and the ethical limits of automation.

Christopher Keibel also takes a critical view, noting that technological advances without clear ethical guidelines can quickly lead to risks. He warns:

“The strength and precision of a machine learning system are largely based on the data that shapes it. However, the drive for maximum performance must not lead to data protection and ethical principles being neglected. Only by striking a balance between technical excellence and moral responsibility can AI be used sustainably and fairly.”

AI is becoming part of everyday business life

Today, AI has become a decisive economic factor. Companies no longer use it only to optimise processes, but also as a strategic tool for developing new business models.

Among the most transformative AI models are:

  • Neural networks, which are used in image recognition, speech processing and autonomous systems.
  • Transformer models, which power modern language AIs like ChatGPT and are transforming search engine optimisation.
  • Generative AI, which creates content such as texts, images and videos and is revolutionising marketing processes.
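The core idea behind transformer models, scaled dot-product attention, fits in a few lines of NumPy. The sketch below is a simplified single-head version for intuition only; the input vectors are random, and real transformers add learned projections, multiple heads and many stacked layers.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each output position is a weighted
    # mix of the values V, with weights derived from how well the
    # queries Q match the keys K.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three token vectors of dimension 4, used as queries, keys and
# values at once (self-attention). Random data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # shape (3, 4): one mixed vector per token
```

The point of the sketch is how simple the mechanism is: every token "looks at" every other token and blends in information from the most relevant ones, which is what lets these models capture context across an entire text.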

As AI models are constantly evolving, it is essential for companies not only to rely on existing solutions but also to respond to new technological trends. Agility is crucial, because what is celebrated as technological progress today could be outdated tomorrow. Kevin Yam, Chief AI Officer of the coeo Group, emphasises this point:

“The future of AI lies in the hands of the brightest minds developing advanced mathematical models, not in billions and billions of euros of investment. After all, the foundations of many deep learning architectures are still the simple mathematical principles that students learn in their first semesters.”

A look into the future

AI development is on the threshold of a new era. Research laboratories are currently developing technologies that have the potential to fundamentally change business processes once again.

  • Liquid neural networks could enable AI systems that adapt in real time – ideal for logistics or autonomous driving.
  • Neuromorphic chips simulate the functioning of the human brain and could make AI faster and more energy-efficient.
  • Quantum computing could exponentially accelerate AI models and enable new solutions for complex calculations in the financial and pharmaceutical sectors.

Quantum computing is considered to be particularly promising – and at the same time one of the biggest challenges for companies. The technology is not yet ready for mass use, but those who get to grips with it early on could gain an enormous head start. Kevin Yam sees this as one of the most exciting developments of the next few years:

“Quantum computing, with its inherent complexity and the disruptive opportunities it offers, requires early and in-depth engagement. Despite the challenges of implementation, it is an investment that promises massive competitive advantages for the pioneers. Unlike AI, rapid levelling in quantum computing is less likely, reinforcing the importance of early engagement.”

But the next phase of the AI revolution will not be decided by better algorithms or faster hardware alone; it will depend on how companies integrate these technologies meaningfully into their business strategies. Those who see AI not just as a cost-saving measure but as a driver of innovation will be at the forefront of the next era of digital transformation.

Companies must act now

The history of artificial intelligence is a history of learning, adapting and experimenting. From the first hesitant attempts in the 1950s to the autonomous systems that control entire business processes today, the path was not always straightforward, but it was inevitable. Today, AI is no longer just an option; it is a decisive factor for economic success. Companies that adapt to the new possibilities at an early stage will be able to actively shape the change.

“We are facing the most exciting phase of AI development. The question is not whether companies will adapt, but how quickly they will do so. Those who invest wisely now will reap the rewards in the coming years,” concludes Sebastian Ludwig, CEO of the coeo Group.

The coeo Group itself has already set this course. With the establishment of cAI Technology GmbH as an independent AI technology company, the Group is not only driving forward the automation of internal processes, but also developing customised AI solutions for the future.

The AI revolution is in full swing – but the challenge lies not in whether AI is used, but in how it is used. Because one thing is certain: the future of artificial intelligence has long since begun.

Cover picture © stock.adobe/Johannes
