Security Challenges When Using AI in Businesses
When it comes to using AI in businesses, where do you think the biggest security challenges lie?
Dr Jörg Schneider: Basically, they’re the same challenges we’ve been facing in information security for years. The vulnerabilities exploited by AI bots in high-profile attacks are generally not fundamentally new. What is new, however, is that certain types of attacks are becoming easier to execute and that some AI tools are unexpectedly expanding the attack surface. Above all, the situation is highly dynamic, which is why assessments and decisions, once made, must be continually reevaluated and updated as new tools emerge.
How much (personal) responsibility should employees take?
Dr Jörg Schneider: Information security can only succeed if everyone is involved. Technical measures alone cannot replace everyone’s cooperation. The same applies to AI tools. To make matters more complicated, most AI tools allow free communication in natural language, which by design means technical filters cannot reliably detect, for example, internal documents uploaded by mistake.
This means that all users must personally ensure that AI systems are used only within the predefined parameters. However, to do so, they need these parameters to be clearly defined while allowing sufficient flexibility to accommodate rapid technological progress.
Are there any corporate policies that could reduce this problem?
Dr Jörg Schneider: First, you need to determine where to draw your own red lines when it comes to using AI. Traditional risk-assessment approaches can help: work through the classic criteria of confidentiality, integrity, and availability.
This will then lead to the next steps, ranging from training to the procurement of more suitable AI tools, and, if necessary, technical restrictions.
Should every company have its own AI guidelines nowadays?
Dr Jörg Schneider: Absolutely. Perhaps not every company needs a detailed document with dozens of subchapters, but every company does need guidelines for AI use. Anyone who says, "But we don’t use AI tools," is usually deceiving themselves, because many devices and programs now come with built-in AI functions. Even a simple internet search makes AI-generated answers impossible to avoid.
A blanket ban isn’t the solution either. AI tools will still be used – unintentionally or covertly. The risks remain; they are simply hidden.
That’s why there should be at least a brief, pragmatic overview and guide for everyone, along with a list of explicitly approved tools.
How can companies navigate this process? Who can provide assistance?
Dr Jörg Schneider: Since this trend is still relatively new, companies usually have to approach the matter individually. Too many factors come into play – from their specific security needs to their existing toolset and the age distribution of their workforce. Simply copying blueprints therefore won’t help. Unfortunately, many companies currently face the same challenge, so demand is high and the range of offerings is becoming hard to survey. My advice for information security is to explicitly insist on proven security experience – because, as I said, much of this isn’t fundamentally new and can be addressed with established security approaches.