More than one in three office employees in Germany now use generative AI tools such as Copilot or ChatGPT on a daily basis in their work routines, whether for drafting texts or as a replacement for traditional search engine queries.
The key question is therefore no longer, as it was just a few years ago, whether AI is being used in companies at all, but rather how AI tools can be used efficiently and in compliance with legal requirements.
For end users, one fact is crucial:
Each language model functions like an independent artificial brain that is trained and continuously improved using its own algorithms and vast amounts of training data.
A single language model is a highly intelligent but also one-sided sparring partner. Different models are trained on different data sources and weightings. They rely on different architectures and may be fed more heavily with technical, marketing, or social data. Some models apply stricter filtering or censorship of content than others, while some deliberately allow room for interpretation.
However, real-world business situations often involve complex strategic or analytical decisions for which there is not just one correct answer.
To achieve the best possible output, it is therefore advisable to consult multiple AI models (two to three) in parallel. This leads to better overall results: errors are more likely to be spotted quickly, and the differing perspectives and approaches often encourage users to think outside the box.
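The parallel consultation described above can be sketched in a few lines of Python. The model functions below are hypothetical stand-ins, not real SDK calls; in practice each stub would wrap the API client of an actual provider (OpenAI, Anthropic, a self-hosted model, etc.).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real model clients; replace each stub
# with an actual SDK or HTTP call to the provider of your choice.
def model_a(prompt: str) -> str:
    return f"[model_a] answer to: {prompt}"

def model_b(prompt: str) -> str:
    return f"[model_b] answer to: {prompt}"

def model_c(prompt: str) -> str:
    return f"[model_c] answer to: {prompt}"

def consult_models(prompt: str, models: dict) -> dict:
    """Send the same prompt to several models in parallel and
    collect their answers side by side for comparison."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        return {name: fut.result() for name, fut in futures.items()}

answers = consult_models(
    "Summarize the risks of third-country data transfers.",
    {"model_a": model_a, "model_b": model_b, "model_c": model_c},
)
for name, text in answers.items():
    print(f"{name}: {text}")
```

Placing the answers side by side makes divergences between models immediately visible, which is exactly where a critical human review should start.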
It should also be noted that AI tools have different core strengths, so the model should be chosen to match the specific task or requirement.
One point must be stated very clearly:
AI does not replace humans. Employees should always critically review and question AI-generated content, since AI tools remain prone to hallucinations and inaccuracies.
When using AI tools on a daily basis, employees must ensure that no personal data or trade secrets are entered into cloud services, particularly US-based ones. Many models operate as black boxes, meaning there is little or no transparency about how input data is processed or used. Purpose limitation and deletion options are often completely absent.
As an alternative, companies can opt for models that involve no third-country data transfers, do not use input data for training purposes, and come from providers that offer transparent, GDPR-compliant information on the processing of personal data.
Ideally, on-premise models should be used, running on self-hosted virtual servers.
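As a rough illustration of such a self-hosted setup, the following Docker Compose fragment runs a local model server entirely on a company's own infrastructure. It assumes the Ollama container image and its default port; service names, volumes, and hardware requirements will vary by product and model.

```yaml
# Sketch of an on-premise model server (assumes the Ollama image).
# All prompts and responses stay on the self-hosted machine.
services:
  llm:
    image: ollama/ollama
    ports:
      - "11434:11434"        # local API endpoint, not exposed publicly
    volumes:
      - ollama_data:/root/.ollama   # model weights persisted locally
    restart: unless-stopped

volumes:
  ollama_data:
```

Because inference runs locally, no input data leaves the company network, which directly addresses the third-country transfer and purpose-limitation concerns raised above.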