On 11 February 2026, the Federal Cabinet approved the government’s draft of the German implementing legislation for the EU AI Act. This marks a milestone, whilst also serving as a reminder of how much remains unresolved. Five years after the first drafts were drawn up, companies currently using AI in a productive capacity are asking themselves a very specific question: when will there be enough clarity to take action?
For companies like coeo, which use AI not as a future project but as an everyday tool, this is not an abstract question but an operational one.
The law is here – and so is the uncertainty
The EU AI Act has been in force since August 2024. The implementation deadlines are staggered: bans on certain AI practices have applied since February 2025, and the rules for large models have been in force since August 2025. From 2 August 2026, the law will apply almost in full (including the requirement to label AI content). For high-risk AI systems and their classification, the regulations will come into force from 2 August 2027.
The government’s draft bill from February and the Central Information Platform for the AI Regulation represent significant progress. The Federal Network Agency will become the central supervisory authority. The new KoKIVO Coordination Centre will serve as a dedicated point of contact, particularly for start-ups and SMEs. The 30-day deemed-approval rule gives companies greater planning certainty when testing high-risk AI systems: if the authority does not respond within 30 days, approval is deemed granted.
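The deemed-approval mechanism boils down to a simple deadline calculation. The sketch below is a hypothetical illustration, not legal advice: counting in calendar days and the function names are assumptions, since the statute may prescribe different counting rules.

```python
from datetime import date, timedelta

def deemed_approval_date(submission: date, period_days: int = 30) -> date:
    """Date from which approval is deemed granted absent a response.
    Calendar-day counting is an assumption of this sketch."""
    return submission + timedelta(days=period_days)

def is_deemed_approved(submission: date, today: date, responded: bool) -> bool:
    """Approval is deemed granted once the period elapses without any response."""
    return not responded and today >= deemed_approval_date(submission)
```

For example, a request submitted on 2 March 2026 that receives no response would, under the calendar-day assumption above, be deemed approved from 1 April 2026.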
At the same time, structural problems persist. This is particularly relevant for companies that actively develop AI and integrate it into their own services: binding requirements for GPAI models have been in force since August 2025, yet the European Commission’s guidelines, published shortly before the deadline, are not legally binding and leave many questions unanswered. It remains unclear which models are considered compliant and how their integration into proprietary products should be assessed from a regulatory perspective. Added to this are overlaps with the GDPR, a lack of uniform standards and the “16-Länder problem” of potentially divergent enforcement across Germany’s federal states.
What the figures show
Since the AI Act came into force, sentiment within the business community has been divided and scepticism is growing. A Deloitte survey conducted in September 2024 among 500 private-sector decision-makers already showed that the majority expect negative impacts on innovation opportunities, whilst just under a fifth see positive effects. A recent Bitkom study from the summer of 2025 confirms this trend – and exacerbates it: 56 per cent of German companies now see more disadvantages than advantages in the AI Act, whilst 53 per cent cite legal uncertainty as the biggest obstacle to AI deployment overall. Furthermore, a recent CRIF/EY study shows that almost two-thirds of companies have taken little or no action to implement the Act so far. The real challenge here is less the regulation itself than the lack of guidance on how to proceed. Companies know they must act. But they do not always know how.
Compliance or governance: a crucial difference
Many companies approach the AI Act as if it were a checklist: what do we need to do to avoid penalties? This is understandable, but it is short-sighted. After all, anyone who designs compliance structures today that could be obsolete tomorrow risks both misallocation of resources and over-compliance.
Isabell Neubert, Head of AI & IT Governance and Regulatory Technology at the coeo Group, sums it up: “As a company that uses AI productively today, you effectively define for yourself what state-of-the-art compliance means.” The crucial difference lies in the approach: compliance asks what is permitted; governance asks what is right – and keeps sight of the purpose of the protection: upholding fundamental rights, protecting people and the community, and embedding this principle from the outset in the concept, the systems and their context. Seen this way, compliance is no longer a bottleneck but a solvable issue and a safeguard for one’s own business strategy.
Acting without certainty
The dilemma is very real: the rules have not yet been finalised, and the parliamentary process is still ongoing. And yet, waiting is not an option. Anyone who waits until everything is completely clear will have too little time to ensure a smooth implementation by August 2026.
Isabell Neubert describes what she has observed in practice: “In some cases, this uncertainty is simply sat out: there is no experimentation with AI at all, and its use is banned from the top down. Many others, by contrast, experiment extensively with AI without a sound understanding of it.” Both approaches are problematic. Those who do not experiment will be left behind. Those who experiment in an uncontrolled and uninformed manner risk regulatory penalties – including fines of up to 35 million euros or seven per cent of global annual turnover, whichever is higher – and mishandling could also lead to data protection breaches.
The demands made by trade associations and the business community are largely consistent: a one-stop shop that also covers regional contexts, a uniform enforcement practice across Germany, a clear distinction from the GDPR, and implementation that is suitable for SMEs. The German AI Association has already set out these demands publicly.
coeo: Responsibility as a competitive advantage
Together with cAI Technologies, coeo does not merely observe AI, but uses it productively on a daily basis – in direct contact with people facing financial difficulties. This makes the requirements for governance particularly clear.
André Heyden, who has been a member of the management board at cAI Technology GmbH since March 2026, clearly outlines his priorities: “My focus is on consistently translating the innovative power of research and development into robust, scalable products, with clear standards for operations, quality and measurable impact.” For him, regulatory requirements such as the AI Act are not at odds with scaling, but rather an integral part of it: governance and operating models have been part of cAI’s product strategy from the very beginning.
“We have vulnerable users,” says Isabell Neubert. “People in vulnerable situations who use our chatbot to discuss issues that might cause them distress in their daily lives. That’s why it’s so important to get this right.” At coeo, AI governance is therefore not a peripheral issue, but is firmly embedded at C-level, on a par with data protection, cybersecurity and legal matters.
The approach here is deliberately constructive: governance is seen not as a hindrance, but as an enabler. That means building internal structures before the legislator makes them mandatory, basing actions on existing rules without waiting for complete clarity, and fostering a corporate culture in which compliance is understood not as a threat, but as a shared responsibility.
Outlook: Trustworthy AI as a European competitive advantage
The parliamentary process is ongoing, and the final version of the implementing legislation has yet to be adopted. From 2 August 2026, the AI Act will apply almost in full, with all the consequences that entails for businesses that are not prepared by then.
Isabell Neubert looks to the future: “I hope that this will not only create a competitive advantage, but also build trust.” Those who develop and use AI responsibly build trust: with customers, with regulators, and with society. And in a market where scepticism towards AI is growing, trust is a real competitive advantage.
“For cAI, trustworthy AI means we can move into new use cases more quickly because quality, traceability and accountability are already built into the product. This isn’t just regulatory certainty; it’s a genuine growth enabler,” says André Heyden.
What companies can do now: draw up an inventory of all AI systems in use, identify high-risk areas and foster a corporate culture in which compliance is viewed positively. Those who tackle this today will not only be compliant tomorrow, but also well-positioned.
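The inventory step above can be sketched as a simple data structure. This is a hypothetical Python illustration: the system names and their classifications are invented, and only the four risk tiers (unacceptable, high, limited, minimal) come from the AI Act itself.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heaviest conformity duties
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

def high_risk(inventory: list[AISystem]) -> list[AISystem]:
    """Flag the systems that carry the heaviest compliance duties."""
    return [s for s in inventory if s.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]

# Hypothetical entries for illustration only:
inventory = [
    AISystem("service chatbot", "customer dialogue", RiskTier.LIMITED),
    AISystem("payment scoring", "creditworthiness assessment", RiskTier.HIGH),
    AISystem("mail routing", "internal triage", RiskTier.MINIMAL),
]
```

Even a minimal register like this makes the second step – identifying high-risk areas – a mechanical query rather than guesswork, and it can grow into a fuller governance record over time.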