About the Responsible Use of AI
The following article was published in StartingUP – Das Gründermagazin, issue 04/2024; see https://www.starting-up.de/abo-service/ausgabe/edition/042024.html.
“Artificial intelligence (AI) has long been part of our everyday lives and is also increasingly finding its way into companies. Whether it’s creating texts, analyzing data, or automating routine tasks, AI is a powerful tool that can offer companies many advantages. But with these advantages come responsibilities and risks that require a clear and well-thought-out AI policy.” This introduction to an article on AI policies is AI-generated. It’s so convenient and quick: just call up one of the popular LLM systems online, such as ChatGPT, and the article is ready in seconds.
You know the situation from your private life, but especially from the workplace: AI tools have become an integral part of the working world. However, the use of AI – especially for professional tasks – carries risks. AI trained on flawed or skewed data can make discriminatory decisions, which is dangerous in areas such as HR. Carelessly written prompts can jeopardize the protection of trade secrets. As always, data protection principles must be observed. And if we as users do not pay attention, generative AI will not care about the copyrights of those whose works we use or infringe, whether consciously or unconsciously. In addition, AI regulations are now in force around the world that impose extensive requirements on companies developing, offering, or operating AI systems and models, in particular the duty to assess and mitigate the risks posed by AI.
If you are a founder or hold another position of responsibility in a company with several employees, you need to consider how such risks can be identified and limited. If you fail to do so, you may face claims for damages, fines, or the loss of assets under various legal bases – all of which will count against you no later than the next round of financing. You must therefore raise awareness of these issues among your workforce and set specific guidelines. One thing is clear: AI will be used in your company, whether you know it or not. And before employees start using ChatGPT & Co. on their private devices to perform sensitive work tasks without any rules in place, it is better to spell out the dos and don’ts in good time, so that AI is used in your company responsibly, securely, and in compliance with the law. An AI policy that informs and commits the workforce serves exactly this purpose. Below is an overview of the aspects such a policy should cover.
1. Guidelines for the use of generative AI in the workplace
Generative AI models such as ChatGPT, DALL-E, and others create content based on user input. When writing these prompts and handling the generated results, the following must be taken into account:
Protection of sensitive data
Entering confidential information into AI models poses a significant risk, as generative AI systems are trained on large amounts of data and can potentially process and store information in an uncontrolled manner. The AI policy should explicitly prohibit the entry of sensitive or confidential information into generative AI systems, especially if they are operated externally. The policy should describe how sensitive data is to be handled and which data is unsuitable for processing by AI systems.
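To make such a prohibition enforceable in practice, some companies put a simple technical check in front of external AI services. The following Python sketch illustrates the idea; the blocked-data patterns and the check_prompt helper are purely hypothetical examples, not a finished solution:

```python
import re

# Hypothetical patterns for data that must never reach an external AI service;
# a real policy would define and maintain such patterns centrally.
BLOCKED_PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?[A-Z0-9]{4}){3,8}\b"),
    "internal_project": re.compile(r"\bPROJECT-[A-Z]+-\d+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all blocked-data categories found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize the contract for PROJECT-ALPHA-42 (IBAN DE89 3704 0044 0532 0130 00)."
violations = check_prompt(prompt)
if violations:
    print(f"Prompt blocked, contains sensitive data: {violations}")
```

Such a filter is no substitute for the policy itself, but it turns an abstract prohibition into a concrete barrier at the moment a prompt is sent.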
Compliance with data protection
The AI policy should make clear that the use of generative AI tools must comply with data protection law (GDPR, BDSG). This includes rules on how personal data is to be handled in a legally compliant manner and which of this data may be used in AI models at all, requirements for anonymizing and pseudonymizing data, and transparency obligations to inform data subjects about the use of their data. The use of personal data in public AI systems should be prohibited as far as possible.
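In technical terms, pseudonymization can be quite simple. The following minimal sketch, with an invented record and a hypothetical pseudonymize helper, shows one common approach: direct identifiers are replaced with keyed hashes before data leaves the company, so records remain linkable for analysis but cannot be traced back without the key:

```python
import hashlib
import hmac

# Illustrative only: in practice the key belongs in a secrets manager, not in code.
SECRET_KEY = b"company-internal-secret"

def pseudonymize(value: str) -> str:
    """Derive a stable pseudonym; without the key, the mapping is irreversible."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

record = {"name": "Erika Mustermann", "email": "erika@example.com", "order_total": 249.90}
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "order_total": record["order_total"],  # non-identifying field kept as-is
}
print(safe_record)
```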
Handling intellectual property
The use of generative AI models can lead to problems if the AI has been trained on protected material, if the user includes copyrighted works in the prompt, or if the AI creates works that infringe existing copyrights or other intellectual property rights. The AI policy should prohibit the use of works for which the rights of use necessary for such processing do not exist. It can also establish rules for reviewing and approving AI-generated content, for example by a legal department or a body specifically responsible for this task.
Transparency and labeling
The AI policy should stipulate that content created using generative AI must be clearly labeled. This helps to avoid misunderstandings and ensures that the origin of texts and images is clearly recognizable. This labeling requirement should apply in particular when AI-generated content is published, but also for internal use.
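Labeling works best when it is both human-readable and machine-readable. What such a convention could look like is sketched below; the AIProvenance fields are illustrative assumptions, not an established standard:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative record of how a piece of content came about.
@dataclass
class AIProvenance:
    tool: str            # which approved system generated or assisted
    prompt_author: str   # who wrote the prompt
    reviewed_by: str     # the person who checked the output before use
    fully_generated: bool

label = AIProvenance(tool="internal-chat-assistant",
                     prompt_author="m.schmidt",
                     reviewed_by="k.weber",
                     fully_generated=True)

print("Notice: this text was created with the help of AI.")  # visible label
print(json.dumps(asdict(label), indent=2))                   # machine-readable label
```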
Positive list of permitted AI systems
To avoid leaving employees alone with the task of applying these guidelines to the individual AI systems available on the market, an AI policy can also contain a list of systems approved by the company. This, of course, requires a prior technical review of these systems – which can be difficult because providers usually disclose little about how their products work.
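Such a positive list is most useful when it is machine-readable, so that internal tools can enforce it automatically. A hypothetical sketch with invented system names and fields:

```python
# Invented entries for illustration; each approved system records who reviewed
# it, for which purposes it may be used, and which restrictions apply.
APPROVED_AI_SYSTEMS = {
    "internal-chat-assistant": {
        "allowed_purposes": {"drafting", "summarizing"},
        "restrictions": ["no personal data", "no trade secrets"],
        "reviewed_by": "Legal & IT Security",
        "review_date": "2024-09-01",
    },
}

def is_permitted(system: str, purpose: str) -> bool:
    """Check whether a system is approved for the requested purpose."""
    entry = APPROVED_AI_SYSTEMS.get(system)
    return entry is not None and purpose in entry["allowed_purposes"]

print(is_permitted("internal-chat-assistant", "drafting"))   # True
print(is_permitted("public-image-generator", "marketing"))   # False: not listed
```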
2. Guidelines for the development and implementation of AI
If you want to develop or deploy AI systems in your company yourself, ethical, legal, and technical requirements must likewise be addressed in an AI policy.
Fairness, transparency, and non-discrimination
AI systems can reproduce discrimination or bias if they are trained on skewed data sets. An AI policy should therefore establish clear ethical guidelines to ensure that the models developed are fair, transparent, and free from discrimination. One possible approach is to conduct regular audits and bias tests, made mandatory in the policy, so that distortions are identified and remedied at an early stage.
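What a mandatory bias test can look like in its simplest form is shown in the following sketch. The decision data is invented; the 80% threshold follows the “four-fifths rule” known from US anti-discrimination practice and is only one of many possible criteria:

```python
# Hypothetical model outputs per applicant group: 1 = approved, 0 = rejected.
decisions_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
decisions_group_b = [1, 0, 0, 1, 0, 0, 1, 0]

def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

rate_a = approval_rate(decisions_group_a)  # 0.75
rate_b = approval_rate(decisions_group_b)  # 0.375

# Four-fifths rule of thumb: flag if one group's rate falls below 80% of the other's.
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:
    print(f"Possible bias: approval rates {rate_a:.2f} vs. {rate_b:.2f}")
```

Real audits work with several metrics and statistical tests; the point here is that the policy turns such checks from an option into an obligation.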
Data protection and data security
The responsible handling of user data is one of the most important requirements in AI development. The AI policy should stipulate that strict protection requirements, ideally strict limitations, apply to the processing of personal data by AI. The GDPR already prohibits purely automated decisions that have legal or similarly significant effects on the persons concerned (Art. 22 GDPR). Only data necessary for the respective application should be collected and processed, and this data should be anonymized or pseudonymized as far as possible. Rules for access to this data and for data security measures (such as encryption and access controls) should also be part of the policy.
Checking for hallucinations and misinterpretations
Generative AI models tend to “hallucinate,” i.e., to generate false or inaccurate content. The AI policy should stipulate that the AI models developed are regularly checked for accuracy and reliability. This can be done through mandatory tests and simulations in which the AI is run through various scenarios and checked for its ability to deliver correct results. The policy should also define how errors are to be detected and corrected.
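One common form of such a check is a regression test against questions with known answers. The sketch below assumes a placeholder ask_model function standing in for whichever system the company operates:

```python
# Question/answer pairs the company knows to be correct; extended over time.
GOLDEN_SET = [
    ("What does GDPR stand for?", "General Data Protection Regulation"),
    ("In which year did the EU AI Act enter into force?", "2024"),
]

def ask_model(question: str) -> str:
    """Placeholder: wire up the company's own AI system here."""
    raise NotImplementedError

def accuracy_score() -> float:
    """Share of golden questions answered correctly; track it per release."""
    correct = sum(
        expected.lower() in ask_model(question).lower()
        for question, expected in GOLDEN_SET
    )
    return correct / len(GOLDEN_SET)
```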
Explainability and user-friendliness
Complex AI models are often difficult to understand and appear like a “black box” whose decisions are difficult for outsiders to comprehend. The AI policy should therefore ensure that your AI is designed in such a way that its functioning is transparent and comprehensible to users. This is particularly important in areas where AI decisions can have serious consequences. You should ensure that users are provided with an understandable explanation of how and why the AI arrived at a particular result. This is a prerequisite for the control and legally compliant operation of AI.
Legal compliance in accordance with the AI Act
Last but not least, in August 2024 the European Union’s AI Act entered into force, imposing strict requirements on the development and use of AI. The AI Act is a product-safety regulation and first of all requires a risk assessment for AI systems. The requirements for such an assessment should be outlined in the policy, and no-gos should be specified for the development of functionalities that count as prohibited AI practices under the AI Act. Special requirements apply to so-called high-risk AI systems; these must be taken into account during development, and an AI policy should at least raise awareness of them. A policy cannot replace the case-by-case review and classification of the respective system or model, but it should mandate initial and regular reviews and risk assessments.
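For a first internal triage along the AI Act’s risk tiers, even a simple checklist helps, provided the legal classification of the concrete system remains a matter for experts. A deliberately simplified sketch; the indicator lists are abbreviated illustrations, not the legal text:

```python
# Abbreviated, illustrative indicator lists; the legal text is authoritative.
PROHIBITED_INDICATORS = [
    "social scoring of natural persons",
    "manipulative techniques causing harm",
]
HIGH_RISK_AREAS = [  # simplified from the use cases in Annex III of the AI Act
    "employment and worker management",
    "education",
    "access to essential services",
    "law enforcement",
]

def triage(description: str, use_area: str) -> str:
    """First internal classification; never a substitute for legal review."""
    if any(p in description.lower() for p in PROHIBITED_INDICATORS):
        return "prohibited practice: do not develop"
    if use_area.lower() in HIGH_RISK_AREAS:
        return "high-risk: full AI Act conformity assessment required"
    return "lower risk: document the assessment, check transparency duties"

print(triage("CV screening for recruiting", "employment and worker management"))
```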
Regular monitoring and maintenance of AI models
AI models continue to evolve and require regular monitoring and maintenance to optimize their performance and minimize potential errors. The AI policy should mandate such recurring maintenance, including regular updates, bug fixes, and performance reviews, to ensure that AI systems always comply with current standards and requirements.
3. Overarching goals and objectives of an AI policy
An AI policy should not only contain detailed guidelines on the use and development of AI, but also general guidelines and principles for the use of AI in the company in order to create awareness of the potential and risks of the technology.
Regular review and adaptation
As AI technologies and legal requirements are constantly evolving, the AI policy should also be reviewed and updated regularly. This helps to ensure that the company is always up to date and that its AI systems comply with the latest legal, ethical, and technical standards. It is advisable to conduct regular audits and adapt the policy to new developments in AI research, legislation, and market requirements.
Corporate culture with regard to AI
An AI policy should promote the transparent and open use of AI in all areas of the company, and with it the acceptance of AI systems both within the company and among customers and partners. This also means that the company discloses the areas in which AI is used and the decisions the technology influences. Ultimately, the AI policy is a tool for fostering a responsible and ethical corporate culture around AI, which in turn protects the integrity and values of the company.
Training and awareness measures for employees
To achieve this kind of competent and responsible use of AI, regular training and awareness-raising measures for employees on the aspects discussed here are essential.
Conclusion
This article can only provide a rough overview and framework. The specific content of a policy tailored to your needs and forms of use must be developed by you, taking into account the specifics of the AI systems in use or under development. This is not something you can delegate to AI: the rest of the AI-generated article quoted at the beginning was of little real use, and neither were the policies we generated with AI on a trial basis. For this, professional support is needed.