Building AI Expertise in the Public Sector
In your opinion, what is the biggest obstacle to building AI expertise in the public sector?
Bernhard Drees: I see two main factors: organizational culture and learning. Back in 2018, Isabell Welpe aptly described digital transformation as, above all, a cultural shift. In government administration, process reliability and error prevention in day-to-day work are highly valued, and for good reason. When it comes to digitalization and the introduction of AI, however, the drawbacks of this culture also become apparent. A cultural shift is necessary: from preventing to enabling. This requires a constructive and, above all, solution-oriented approach.
Building AI expertise, in my view, requires a culture of lifelong learning embedded in the work process itself, not confined to formal learning contexts. Many training departments do not provide continuous development for employees; instead, they organize training events held at a specific time and place, and participation in these events is requested, reviewed, approved or denied, and finally documented in the employee's personnel file. This approach gives the organization process reliability and may well be appropriate and sufficient for certain content, but it is time-consuming and inflexible.
AI models and applications, however, change very rapidly in terms of complexity, performance, data security, and the like. Responding to this speed and complexity requires, on the one hand, that skill acquisition be shifted closer to actual work practice. On the other hand, the way competencies are acquired needs to change. This is where fast, flexible, adaptive, and therefore agile forms and formats of learning come into play. Employees must be able to learn, experiment, and use suitable formats for exchange within the work process. That requires cultural and structural changes that administrations generally struggle with.
What factors are hindering efforts to build AI expertise?
Bernhard Drees: I see two main risks here. The first is cutting corners in the wrong areas, specifically funding, technology, and training time; the second is poor communication and a lack of credibility. I recently spoke with a young developer. At her company, employees may use one day every two weeks to build their skills. That is, of course, an exemplary approach. In contrast, employees from a wide variety of government agencies repeatedly tell me that they receive no training in the use of AI, or are not allowed to use it at all. When people and learning time are viewed solely as cost factors rather than as resources or investments, building expertise becomes difficult.
The second aspect, communication and credibility, relates to a legitimate question employees ask: why should I get involved in such a change process at all? This is where leadership comes into play, ensuring that individuals are engaged and empowered. Clear, credible communication is needed to explain the reasons for and goals behind the introduction of AI.
Is the goal to eliminate my job, to lighten my workload, or to support me in handling particularly demanding tasks? Employees can sense very well whether they are being fed a fairy tale or told the plain truth, even if it is not pleasant to hear. Clear communication is essential here. In addition, a communication strategy tailored to the audience is needed, one that takes into account employees' varying openness to change. A one-size-fits-all approach with top-down directives no longer meets employees' needs at a time of skilled-labor shortages.
Are there examples of successful pilot projects that serve as inspiring models? And if so, where are they?
Bernhard Drees: I believe the process of introducing AI in our department at the university has been successful. We received early approval and support from leadership, relevant stakeholders were involved, and guidelines for the use of AI were developed. Its use was also integrated into the curriculum. In addition, we offer both formal training on the fundamentals and ongoing agile learning formats that we conduct ourselves. However, we were fortunate to have a "community of practice" that included early adopters with the necessary pedagogical skills.
Based on your experience, how long do you think it will take to make AI applications standard practice in the public sector?
Bernhard Drees: To quote Karl Valentin: "Predictions are difficult, especially when they concern the future." AI is already in use in many different areas, which is why I believe it is particularly important to focus on the "grey areas" of AI usage. From experience with introducing new technologies, we know that people adopt a technology when it benefits them, whether or not its use is officially permitted. We must therefore find ways to enable employees to use AI rather than prevent them from doing so.
What could speed up the process?
Bernhard Drees: Support and communication from leadership are crucial. Leadership can either foster or hinder a culture of learning. On a broad scale, a consistent shift toward agile learning processes is needed: toward continuous learning within the work process. This, in turn, requires building the necessary competencies within public administration.