AI in Europe: Responsibility, Inclusion, and the Future of Creativity

Five questions for Florian Dohmann

By Nora Eilers – April 25, 2025

Exactly one year ago, Florian Dohmann, founder and Chief Creative Officer of Birds on Mars, was a guest on our podcast Radio CityLAB, discussing the rapid development of Artificial Intelligence with our director Dr. Benjamin Seibel. How has the discourse around AI changed over the past year, and where do we stand in 2025?

Florian Dohmann, founder and Chief Creative Officer of Birds on Mars

Dear Flo, a year ago, you and Ben talked extensively about the hype surrounding AI. At the time, you said: “The order of the day is to stay calm,” because everything was moving incredibly fast. Since then, the pace hasn’t slowed down—in fact, it’s accelerated. How has the discourse changed? And how do you perceive the current mood?

Florian: Fundamentally, not much has changed—AI continues to develop at a rapid pace, accompanied by a lot of commotion. The most important thing remains: don’t let yourself be pressured and find your own way of dealing with this technology.

The maturity level in companies and public institutions has increased. Even in the public sector, generative AI has become part of everyday work. At the same time, it’s still important to engage with AI consciously, but not to blindly follow every trend. Especially in the private sector, it’s evident that the threat isn’t AI itself, but the competition that knows how to use it better.

Therefore, my appeal to those who haven’t actively engaged with AI yet: take the time, try it out, and find out how you can use AI meaningfully for yourself.

The EU AI Act aims to establish clear rules for the use of AI. In your view, which societal standards and values must be embedded in algorithms? And which debates are particularly important now to ensure responsible development and use?

Florian: A responsible approach to AI is crucial—not only concerning data protection and ecological sustainability but also the inclusion of people and embedding a European value system in technology development.

Europe has a great opportunity to position itself as a pioneer. Instead of complaining about regulations like the AI Act or, before that, the GDPR, we should see them as competitive advantages. It’s almost ironic that companies like Apple now market data protection and security as major USPs, while we in Europe set the standards that are then commercially exploited elsewhere.

Europe needs to act more confidently and view the patchwork of regulations as a strength. The American tech world has glorified the pursuit of the next unicorn for decades—we now have the chance to establish a counter-narrative: an economy based on responsibility, collaboration, and social values.

In the podcast, you mentioned that AI has great potential for inclusion, yet this potential hardly plays a role in public discourse. Has that changed in the meantime?

Florian: The problem often lies less in AI itself and more in existing systems—lack of funding and bureaucratic hurdles make it difficult to establish new technologies in the field of inclusion.

Nevertheless, I’m optimistic. AI has enormous potential to break down barriers, be it through translation services or new ways of accessing information. Examples include our project “Kaleidofon” and the talking bookshelf of the Pankow public libraries. AI can help open up new opportunities for participation for people with disabilities. There’s still a lot to do, but I believe we’re on the right track.

In the podcast, you also discussed the connection between AI and art, for example, in a project with artist Roman Lipski. Where can humans not be replaced in art?

Florian: The creative industry is currently experiencing enormous disruption, particularly in areas like advertising and video production. Elaborate shoots, where entire crews used to fly to South Africa to film a can for a ten-second social media video, are increasingly being replaced by generative AI. And it’s not limited to the visual arts: across the creative industries, traditional agency models are undergoing massive change.

Honestly, I’m not sad that, in the example above, we can skip flying around the world. In a field so heavily shaped by content creation, it’s only logical that generative AI takes over these processes, whether in text, images, or multimodal content. Of course, this brings many challenges that need to be addressed. Often the problem lies in the system, which must ensure that disruption doesn’t come at the expense of a few, and that innovation and social compatibility finally go hand in hand, in the sense of a truly social market economy.

Interestingly, a recent study shows that many side jobs of artists—such as translations or writing forewords for books—are more threatened by AI than their actual artistic activities. This significantly changes the economic reality for many creatives.

Despite all these changes, I believe it’s becoming increasingly important to appreciate the personal moment—even the imperfect! It’s about real, tangible life. A good example is the moment at a punk concert when you’re in the mosh pit and surrender to the pogo. Real flesh meets real flesh. This experience lets us concretely feel what it means to be human. That’s something no AI can replace.

Not too long ago, a photo was considered proof of reality; then came Photoshop. Videos, on the other hand, had a kind of untouchable status: what was seen in moving images had to be real. But with AI-generated videos and deepfakes, this boundary is also blurring. How does this change our understanding of truth? And how dangerous is that for our society?

Florian: We’re currently in a transitional phase. Just as there was a moment when photography suddenly existed and people had to get used to images no longer being painted, we’re now experiencing something similar with AI-generated content. Photoshop already showed that images can be manipulated; now the same applies to videos, and even before generative AI, Hollywood was already doing this. The difference is that today anyone can create highly realistic fakes with just a few clicks, whether text, photo, or video. This significantly amplifies the spread of misinformation and is a huge danger, especially in times of weakening democracies, radical political movements, and targeted disinformation.

At the same time, I’m optimistic: new challenges bring new solutions. Protective mechanisms and forensic techniques are being developed to detect forgeries. In China, for example, there is supposed to be a labeling requirement for AI-generated content, and similar regulations will gradually be introduced in the EU. Of course, people can try to circumvent these systems – but it’s like traffic rules: you can tear down a stop sign or simply cover it up, but it still remains illegal.

So we need to develop new standards, new ways of thinking about how we deal with truth in the digital age. But one thing is also clear: the threat of disinformation will accompany us for decades to come.

We’ve already had five questions, but let’s end on a positive note: What gives you hope? Can you share an optimistic outlook for the future?

Florian: I follow with great interest what organizations like the Technologiestiftung Berlin and CityLAB are doing – they are pioneers in the public sector. What’s especially important to me is the concept of new alliances: business, the public sector, artists, NGOs, and politics must work together on the responsible use of AI. We need to understand AI as a shapeable technology – and use it in ways that not only make us more efficient but also strengthen our social cohesion.