In this column, we offer monthly insights into the everyday work of our team “Smart City and Administrative Innovation.” The team of project managers, service designers, UX/UI designers, and smart city designers brings diverse perspectives to its collaboration with public administration in order to move Berlin forward together. Balancing long-term strategies with agile solutions, its members share their experiences and insights from the CityLAB. This time: Service Designer Deborah Paluch.

At the CityLAB, we work on AI chatbots in various contexts to support public administration in handling its tasks more efficiently. With BärGPT, the AI assistant for the Berlin state administration, we have created a tool that makes the potential of AI usable for the administrative context in a needs-based and data-protection-compliant way. Through our daily work, we know the target group of administrative employees well and understand that the content and correctness of generated answers are more important than the form in which they are delivered. Of course, BärGPT always acts objectively and politely and can even switch between informal “du” and formal “Sie” forms of address in its responses, depending on user preference.
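A feature like the du/Sie switch could be wired up as a simple preference that selects the instruction handed to the language model. The sketch below is purely illustrative; the names are hypothetical and BärGPT's actual implementation is not shown here:

```python
# Illustrative sketch only: how a form-of-address preference might
# select the instruction given to a chat model. Not BärGPT's real code.

ADDRESS_INSTRUCTIONS = {
    "du": "Address the user informally, using 'du'.",
    "sie": "Address the user formally, using 'Sie'.",
}

def address_instruction(preference: str) -> str:
    """Return the system instruction matching the user's preference."""
    key = preference.strip().lower()
    if key not in ADDRESS_INSTRUCTIONS:
        raise ValueError(f"unknown form of address: {preference!r}")
    return ADDRESS_INSTRUCTIONS[key]
```

In practice, this instruction would be prepended to the model's system prompt so every generated answer uses the requested register.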
In our project AI Training Assistant, we are developing a chatbot together with young people that is intended to help them find the right vocational training. The target group consists of young adults who often lack direction and have few people or networks around them to help them get started with their search. In various workshop and exchange formats that we conducted with the young people, we learned that language, expression, tone, and the question of “how does the AI talk to me?” are far more decisive factors for how well the chatbot is received.
“Own Voice”: Tone and Expression of an AI Assistant
The chatbot should not speak in buddy-like youth slang, but rather be serious and clear. Young people want to be taken seriously and not have the feeling that their counterpart is a class clown; instead, they want communication that is appropriately serious – especially when it comes to job searching. Even among young adults whose first language is not German, there is not necessarily a desire to have content translated into other languages – “better in German, but in a way that I can understand.”
Our goal is to empower young people to find a suitable path into vocational training and professional life and to give them a voice. To do that, we also need to give the AI a voice.
The term own voice has several meanings. On the one hand, in the BookTok universe (TikTok videos in which content creators review books from the “young adult” genre), it often refers to stories written by marginalized people for marginalized people, addressing lived realities beyond norms and the mainstream. On the other hand, own voice also describes the function of auditory speech output using one’s literal own voice, once it has been trained beforehand.
Our approach lies somewhere between these two meanings. For us, it is not primarily about the actual vocal timbre of the speech output, but about the training assistant responding in a way – through its expression and language – that allows young people to recognize themselves, feel seen, and feel taken seriously.
This can mean that, in its tone, the AI:
- understands and incorporates the living circumstances of young people
- is friendly and motivating, but not overly “buddy-like,” remaining serious
- provides simple explanations in German instead of translating all answers into other languages
Ultimately, it is about the chatbot speaking understandable German, remaining serious and benevolent, and finding a good balance between “over-the-top praise” and ruling out certain dream jobs.
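One plausible way to encode tone guidelines like these is as a system prompt handed to the underlying chat model. The sketch below is a hypothetical illustration, not the project's actual code; the rule texts paraphrase the points above and would likely be written in German in production:

```python
# Hypothetical sketch: encoding the tone guidelines as a system prompt.
# Function names and rule wording are illustrative, not the project's code.
# In a real deployment the rules would probably be phrased in German.

TONE_RULES = [
    "Answer in simple, clear German that is easy to understand.",
    "Be friendly and motivating, but not buddy-like; stay serious.",
    "Take the user's living circumstances seriously and factor them in.",
    "Avoid over-the-top praise, but do not rule out dream jobs outright.",
]

def build_system_prompt(rules: list[str]) -> str:
    """Join the tone rules into one system prompt for a chat model."""
    header = "You are a vocational-training assistant for young people."
    return header + "\n" + "\n".join(f"- {rule}" for rule in rules)

print(build_system_prompt(TONE_RULES))
```

Keeping the rules in a plain list also makes it easy to iterate on them with the target group, workshop by workshop.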
What You Want!
AI is an uncanny new actor in our modern lives – an inhuman yet human sparring partner on the stage called society, speaking to us with what seems to be the sum of our collective voices. In requirements workshops, we found that young people who have been using AI from the very beginning have relatively few inhibitions about it. This does not automatically make them AI-literate, nor does it mean that their use of AI is careful or well trained. But AI has become an integral part of their everyday lives, and rather than demonizing this, we try to enable responsible use through our application.
Shakespeare’s comedy Twelfth Night (What You Will) deals, among other things, with identity – characters who pretend to be someone else, play roles, and get tangled up in love triangles. It is about appearance and reality, and the fine line between illusion and authenticity in a complex, multifaceted world. In the same way, an AI assistant “plays” a role – like a computer in sheep’s clothing: a blinking circuit board disguised as a human, to which we assign identity and character by providing situational bites and contextual snippets that it then happily starts to act upon. Much like the opening cue (“clap!”) in improvisational theater: “Imagine you are my personal assistant and you are supposed to help me find the perfect vocational training.” The ramp is built; all other elements – roles, scripts, props, and costumes – are created by the young people on their own “stages,” or chat rooms, within the app, by telling the assistant more about themselves:
i live in neukölln
favorite drink durstlöscher pomegranate-lemon
did an internship at rewe once
people there were super cringe
my cat is my baby
at home i look after my little brothers
low-key annoying
school isn’t really my thing, sports and music are okay
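Snippets like the ones above can be read as a lightweight user profile that the assistant folds into its context before answering. A minimal sketch of that idea, with hypothetical names that are not the app's real data model:

```python
# Minimal sketch: collecting free-text snippets a young person shares
# in chat and rendering them as context for the assistant's prompt.
# Class and method names are illustrative, not the app's actual code.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Free-text snippets the user has shared about themselves."""
    snippets: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        """Store one snippet, trimming surrounding whitespace."""
        self.snippets.append(text.strip())

    def as_context(self) -> str:
        """Render the snippets as a context block for the prompt."""
        if not self.snippets:
            return "The user has not shared anything about themselves yet."
        bullets = "\n".join(f"- {snippet}" for snippet in self.snippets)
        return "What the user has shared so far:\n" + bullets

profile = UserProfile()
profile.add("i live in neukölln")
profile.add("did an internship at rewe once")
profile.add("at home i look after my little brothers")
print(profile.as_context())
```

The point of such a structure is that the young person stays in control: the profile only ever contains what they chose to tell the assistant, in their own words.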
What young people want is a space where they do not feel shamed – a space that is value-neutral and free of prejudice, where they can be themselves, ask questions, and receive quick, precise answers. They want to feel seen and understood.
What we want is to provide meaningful support that matches young people’s needs, helps them reflect on their own skills, identifies suitable vocational training paths, and gives them low-threshold tips for the application process. At the same time, the tool should promote responsible use of AI by making its own limitations visible. The goal is to help young people enter the professional world with the support of trained AI – not to replace personal career counseling.
AI Needs Rizz (short for charisma)
The AI’s aura turned out to be far more important for our project than we initially thought. The right tone differs for every target group and every context, and that is precisely the strength of an artificial system: it can assume a role, react within a predefined, “pre-trained” scene, and find the right words and the appropriate form of address for different people. We have learned that, for our assistant, the manner of communication is at least as important as the content itself in order to truly connect with the young target group and genuinely support them in their search for vocational training. And if it then also stops hallucinating so outrageously, we will really have achieved our goal.
