Dr. Robert Kazemi, how does the use of AI influence legal frameworks?

Thanks to the rise of artificial intelligence, millions of companies are experiencing a positive shift that allows them to fundamentally restructure and realign their daily operations. At the same time, AI is likely to pose challenges in many areas of daily and professional life. In this interview, attorney Dr. Robert Kazemi explains the legal risks to consider when using AI, the guidelines and measures companies should implement, and the potential impact of the AI Act.

What legal challenges and risks does AI present, and how does the use of artificial intelligence affect existing legal frameworks and procedures?

Dr. Robert Kazemi: The use of AI has risen rapidly, especially over the past year, and now permeates almost every aspect of life. At the same time, the development of generative AI systems is marked by significant dynamism and constant evolution, making legal regulation difficult. The slow pace of legislation is noticeably outstripped by the industry's drive for innovation and development. Any legislative regulation thus runs the risk of either lagging behind the actual possibilities and challenges or hindering innovation. This also creates a high degree of uncertainty for AI users, as the legal frameworks to be considered can vary greatly depending on the area in which AI is deployed. In a business environment, data protection law, labor law, and copyright law are particularly relevant. For example, when training AI applications, the question may arise as to whether content can be used for machine learning without violating data protection or copyright law. If AI-generated content is reused without verification, there is a risk that pre-existing (copyright-protected) works could be used unknowingly, potentially resulting in unauthorized reproduction, distribution, or public performance, or in adaptations that require the rights holder's consent. In the future, there will undoubtedly be many more legal questions to address.

How will the AI Act, passed by the EU, change the use of artificial intelligence in the long term? Do you see this law as an opportunity for AI or more of a hindrance?

Dr. Robert Kazemi: The AI Act is an attempt to establish a framework for the use and, especially, the development of AI. Whether this will be successful in the long term remains to be seen. Even before the regulation becomes effective (most of its provisions will not take effect until two years after publication in the EU Official Journal), there are already significant uncertainties in interpretation and application, which could slow progress. In my view, the EU would have done well to wait and trust that AI could be adequately regulated by applying existing laws. Beyond the GDPR, we've essentially created a new field of activity for legal advisors and compliance departments within companies.

What data protection guidelines and measures should companies that use AI technologies implement?

Dr. Robert Kazemi: First and foremost, I believe the question should be whether the use of AI is permitted within the company at all. Section 613 sentence 1 of the German Civil Code (BGB) requires the personal performance of work duties; if AI handles these tasks, labor law questions arise. If the use of AI is to be generally permitted, it's advisable to specify the approved AI application and make it available to employees, which helps avoid licensing and data protection issues. In companies with a works council, its involvement should be considered before AI is deployed. There are several useful statements from regulatory authorities that are well worth reviewing, such as the discussion paper from the LfDI Baden-Württemberg or the so-called "AI how-to sheets" from the French data protection authority CNIL. Specifically concerning the AI Act, it's important to comply with the transparency requirements set out in Article 50.
