Digitalization is fundamentally changing the field of human resources (HR). Artificial intelligence (AI) is increasingly being integrated into HR processes to increase efficiency, optimize application processes, and make data-based personnel decisions. However, these opportunities are accompanied by legal challenges, particularly with regard to data protection, discrimination, and transparency.
AI is used in various areas of human resources management:
- Programmatic advertising: AI-supported systems such as Jobvector optimize the placement of job advertisements.
- Applicant and personnel management: Tools such as Rexx, Workday, and Personio use AI for resume parsing and applicant analysis.
- Personnel development and controlling: Platforms such as Zavvy use AI to plan and evaluate employee development measures.
- Generative AI and large language models (LLMs): Applications such as ChatGPT support HR teams with text generation, chatbots, and data analysis.
What is AI and what can it do – and what can it not?
The EU AI Act defines an AI system as follows (Art. 3 No. 1 AI Act):
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
A key feature of AI is that it is more than just a rule-based system for automatically executing processes (see also Recital 6 AI Act). It learns from data, sets its own rules, and draws conclusions from them. Only such systems are subject to the prohibitions and requirements of the AI Act (see below).
Using generative AI as an example, I would like to illustrate a few “characteristics” of AI that we should always keep in mind when evaluating it:
- AI does not learn facts – it does not provide the one correct answer, but rather many possible answers.
- AI always responds – even if information is missing, an answer is still generated, although more information does improve the results.
- AI does not know what is right – fact-checking depends on the context and ultimately remains subject to human control.
- AI does not think logically – logic is only imitated through learned patterns.
So we are constantly caught between the questions “Should I distrust AI?” (otherwise errors are overlooked and risks materialize) and “Should I trust AI?” (otherwise everything is constantly questioned and no benefit is generated). Sound background knowledge and constant human-AI interaction are therefore essential.
Legal Challenges for AI in HR
Discrimination and Bias
AI models are based on training data. If this data is unbalanced or biased, there is a risk of discrimination, which can lead to violations of the General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz, AGG). One of the main issues here is the statutory reversal of the burden of proof: once indications of discrimination are presented, the AGG places the burden of proving that no violation has occurred on the employer. Companies must therefore be able to demonstrate that their AI systems work in a non-discriminatory and fair manner. To minimize bias, the training data should be of sufficient quality and improved where necessary, i.e., selected to be as diverse and representative as possible in order to reduce distortions. In addition, the AI models used should be continuously monitored, e.g., through regular audits, in order to identify and correct discrimination at an early stage; a minimal sketch of such an audit follows below.
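To make "regular audits" more tangible, here is a minimal Python sketch of an adverse-impact check on screening outcomes. It assumes that outcome records and a protected attribute are available as simple tabular data; the field names, the example data, and the four-fifths threshold are illustrative conventions from selection-rate analysis, not requirements of the AGG.

```python
from collections import defaultdict

def selection_rates(records, group_key, selected_key="selected"):
    """Compute the selection rate per group from screening outcomes."""
    counts = defaultdict(lambda: {"selected": 0, "total": 0})
    for rec in records:
        grp = rec[group_key]
        counts[grp]["total"] += 1
        counts[grp]["selected"] += int(rec[selected_key])
    return {g: c["selected"] / c["total"] for g, c in counts.items()}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest rate.
    Values below 0.8 (the 'four-fifths rule') flag a potential issue."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Illustrative, hypothetical screening log
log = [
    {"gender": "f", "selected": True},
    {"gender": "f", "selected": False},
    {"gender": "m", "selected": True},
    {"gender": "m", "selected": True},
]

rates = selection_rates(log, "gender")
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Such a check does not replace a legal assessment, but repeated over time it documents that the system's outcomes are being monitored.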
Data protection (GDPR & employee data protection)
Checklist and topics related to AI
When using AI in HR, the familiar data protection requirements and principles must first be observed. However, a few special features of AI do not make things any easier – quite the contrary:
- AI systems can link large amounts of data, which makes it easier to establish personal references and thus identify individuals.
- The AI systems and AI models currently available, most of which come from US providers, together with those providers' “information policy” regarding their models, make it difficult or impossible in many cases to determine whether the operator acts as the (sole) controller with the provider as processor, or whether there is joint controllership. The issue of third-country transfers, especially to the US (keywords: Data Privacy Framework, adequacy decision, standard contractual clauses, etc.), arises once again in an even more acute form.
- A legal basis must be demonstrated for the collection of personal data and for the training and use of AI systems with this data, and the principles of the GDPR must be observed – keywords here include purpose limitation, the necessity of each data processing, consent, and in particular so-called Art. 9 data.
- It must be ensured that data subjects are fully informed about the data processing, including the changes of purpose that occur frequently with AI. Information must be provided (or made available) about the logic involved, the algorithms used, and how they work.
- It must be possible to correct or delete incorrect data, which requires suitable mechanisms for data cleansing – for a trained model, this is in practice often only feasible by means of downstream filters (a minimal sketch follows this list).
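On the last point: because a trained model cannot simply “forget” or overwrite individual records, rectification and erasure requests are in practice often implemented as downstream filters applied to the system's output. The following Python sketch illustrates the idea under that assumption; the names and correction entries are purely hypothetical.

```python
import re

# Hypothetical rectification/erasure list maintained by HR (Art. 16/17 GDPR requests)
ERASURE_LIST = {"Jane Doe"}                       # data subjects whose data must no longer appear
CORRECTIONS = {"Head of Sales": "Sales Manager"}  # inaccurate attributes and their corrections

def filter_output(text: str) -> str:
    """Downstream filter applied to model output before it is displayed or stored.
    It cannot change what the model has learned, but it prevents known-incorrect
    or erased personal data from reaching users."""
    for wrong, corrected in CORRECTIONS.items():
        text = text.replace(wrong, corrected)
    for name in ERASURE_LIST:
        text = re.sub(re.escape(name), "[removed]", text)
    return text

print(filter_output("Jane Doe was Head of Sales until 2023."))
# -> "[removed] was Sales Manager until 2023."
```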
Prohibition of automated decisions
However, the General Data Protection Regulation (GDPR) also contains a provision specifically relevant to AI: Article 22 GDPR prohibits automated decisions with legal or similarly significant effects on the data subject. This means, for example, that AI may not conclude contracts or have a comparably significant impact on the data subject and their legal position. The final decision must always lie with a human being. An exception to this prohibition exists if the decision is necessary for the conclusion or performance of a contract, is based on explicit consent, or is authorised by Union or Member State law. The ruling of the European Court of Justice (ECJ) on Schufa Holding AG (“Schufa”) of December 7, 2023, clarified that even pure decision-making aids can be considered automated decisions if they effectively predetermine the outcome.
Applied to AI systems: the prohibition clearly applies if the AI system itself makes a decision based on personal data with an effect on the data subject. The same applies if the AI system calculates a probability value on the basis of which the controller makes an automated decision, i.e., if the decision is “significantly” based on that value. Art. 22 does not apply, however, if the AI system calculates a probability value and the controller, with a “human in the loop”, takes a decision that is not significantly based on that value; the same applies where the AI system calculates several probability values that the controller combines (not automatically) according to its own logic when making its decision.
Decision-makers must therefore be able and willing to understand and question the AI’s suggestion – and actually do so! This is where a major problem with AI arises: AI models are often a black box, which makes this traceability difficult.
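As an illustration of what a documented “human in the loop” step could look like, here is a minimal Python sketch. It assumes the AI system merely produces a score and a rationale, while the final decision, including the reviewer's own justification, is recorded separately; all field names are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AiSuggestion:
    candidate_id: str
    score: float          # probability value produced by the AI system
    rationale: str        # explanation shown to the human reviewer

@dataclass
class FinalDecision:
    candidate_id: str
    accepted: bool
    reviewer: str         # a natural person, not the system
    justification: str    # the reviewer's own reasoning, beyond the score
    decided_at: str

def decide(suggestion: AiSuggestion, reviewer: str, accepted: bool, justification: str) -> FinalDecision:
    """The AI score is only one input; the decision is made and documented by a human.
    Recording the reviewer's own justification helps show that the outcome was not
    'significantly based' on the automated value alone (cf. Art. 22 GDPR, ECJ 'Schufa')."""
    if not justification.strip():
        raise ValueError("A human justification is required before a decision is recorded.")
    return FinalDecision(
        candidate_id=suggestion.candidate_id,
        accepted=accepted,
        reviewer=reviewer,
        justification=justification,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# Illustrative usage with hypothetical data
suggestion = AiSuggestion("cand-042", score=0.81, rationale="strong match on required skills")
decision = decide(suggestion, reviewer="HR partner A. Schmidt", accepted=True,
                  justification="Interview impression and references confirm the profile.")
```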
Special categories of personal data
AI systems for HR — and especially those trained with big data — often contain special categories of personal data subject to Art. 9 GDPR. This so-called Art. 9 data includes information on religious beliefs, ethnic origin, political opinions, and, in many cases, health data. This data may only be processed with the prior consent of the data subject. In the employment context, however, this is problematic because the effectiveness of such consent in an employment relationship, i.e., a relationship of dependency, is questionable due to the lack of voluntariness of the decision.
Further options here would be to restrict processing to publicly available data or to avoid Art. 9 data altogether. In my opinion, processing should be possible if the data subjects themselves have manifestly made this data public. However, the European Data Protection Board (EDPB) interprets this restrictively in the case of data taken from social media. Avoiding sensitive data, on the other hand, requires a great deal of control, for example when training AI systems. It is also not always possible, although so-called “bycatch” of Art. 9 data should be acceptable if it is not processed in its specific context.
Draft Employee Data Act
On October 8, 2024, the draft bill for an Employee Data Act (Beschäftigtendatengesetz, BeschDG) was published. The aim of the act is to ensure fair handling of employee data and to offer both employers and employees greater legal certainty.
The law addresses many issues that have been developed or clarified in recent years through case law, including regulations on data processing prior to the establishment of an employment relationship, monitoring and performance review, and the handling of biometric data. This would also extend to the use of AI in HR.
However, it is unlikely that the draft will be passed during this legislative period due to the collapse of the coalition government and early elections. We will therefore have to wait and see.
EU AI Act: High-risk Systems in HR
New, well, almost new, is the AI Act, which specifically addresses AI systems and applications in the field of HR. We therefore need to take a closer look at this as well.
Principles
As a product safety regulation, the AI Act takes a risk-based approach and regulates AI systems according to their potential risks to individuals and society:
- Unacceptable risk: Certain AI applications, such as social scoring, are prohibited.
- High risk: Strict requirements for so-called high-risk systems – this is the focus here (HR), see below.
- Limited risk: Transparency requirements, such as labeling chatbots.
- Minimal risk: Largely unrestricted use.
HR applications are high-risk systems
Under the EU AI Act, AI systems in the field of human resources are almost always considered high-risk applications (Art. 6 in conjunction with Annex III No. 4 lit. a and b AI Act):
“[…] a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.”
On the one hand, this therefore concerns work-related use in the context of employment and human resources management, in particular the selection and evaluation of applicants, decisions on promotions, transfers or dismissals, and the monitoring and evaluation of employee performance. On the other hand, it concerns the use of general-purpose AI models that are (or may be) assessed as high-risk systems due to their specific task, such as image or speech recognition or chatbots.
Requirements for high-risk systems
Companies that use or intend to use AI in this way must meet strict requirements (a minimal documentation sketch follows this list), including:
- Risk management: Continuous identification, assessment, and control of the risks associated with the use of a specific AI. This iterative process must cover the entire AI life cycle.
- Strict data governance: High requirements for the quality of training, validation, and test data sets. They must be checked for origin, relevance, purpose, and completeness.
- Comprehensive technical documentation: AI systems must be precisely documented, particularly with regard to how they work, in order to ensure traceability for authorities and affected individuals.
- Transparency requirements: Companies must ensure that users and affected individuals understand how AI decisions are made in order to guarantee traceable and explainable processes.
- Human oversight: AI-supported decisions must be reviewable by humans. Anomalies and errors must be detected and corrected at an early stage.
- High security standards: Protection against cyberattacks and manipulation of AI must be ensured.
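As a small illustration of the documentation and human-oversight duties, the following Python sketch appends one structured record per AI-supported step to an audit log. The fields shown are assumptions for illustration only; the AI Act does not prescribe a specific format, and harmonised standards will add detail.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(logfile: str, *, system_name: str, model_version: str,
                    input_ref: str, output_summary: str, overseen_by: str) -> None:
    """Append one structured record per AI-supported step.
    Such records support documentation, traceability, and human oversight;
    the exact fields an authority will expect are not prescribed here."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        "input_ref": input_ref,          # reference to the data used, not the data itself
        "output_summary": output_summary,
        "overseen_by": overseen_by,      # the natural person responsible for oversight
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

# Illustrative usage with hypothetical values
log_ai_decision(
    "ai_audit_log.jsonl",
    system_name="cv-screening",
    model_version="2025-06",
    input_ref="application #4711",
    output_summary="ranked 3rd of 25, recommended for interview",
    overseen_by="recruiter M. Mustermann",
)
```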
Violations of the AI Act can be punished with fines of up to €35 million or 7% of global annual turnover. However, this is where we encounter the problems mentioned above regarding a lack of transparency and information deficits, as AI often is – and will remain – a black box for operators of AI systems.
Exceptions
Under certain circumstances, the use of AI systems in HR may be exempt from the strictest regulations. Article 6(3) of the AI Act provides a catalog of exceptions that I believe we can take advantage of in HR without making the use of AI unprofitable.
- AI systems that only perform narrow procedural tasks, i.e., play a merely supporting but not decisive role, could be operated under less stringent requirements. Example: CV parsing for the pre-selection of applications.
- If AI is used exclusively to supplement or improve human decisions that have already been made, without significantly influencing them, an exception may apply. Example: An online assessment supported by scientifically validated diagnostics.
- AI systems that merely identify trends or deviations in decisions without influencing the actual decisions themselves are not subject to the strict regulations. Example: Personnel controlling that highlights gender-related salary differences.
- AI systems that contribute exclusively to supporting or preparing decision-making without themselves making the final assessment may be exempt from the high requirements. Example: Identification of employees for further training measures.
Companies must document whether an exception under Article 6(3) of the AI Act applies to the AI systems they use and be able to prove this in the event of an audit.
Use of large language models
Large language models (LLMs) such as ChatGPT & Co. can be used for unproblematic tasks in HR, provided the following is observed: no customer or employee data should be entered if the models upload this data to the cloud and use it for training purposes. The opt-out for training and chat history should therefore be used and, where possible, functional accounts should be employed. The outputs of the models must always be checked for accuracy and discrimination, and automated final decisions must be ruled out. Plus, of course, the other data protection aspects mentioned above.
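A simple technical safeguard for the first point is to redact obvious personal data before a prompt leaves the company. The following Python sketch shows such a pre-submission filter; the regular expression, the list of known names, and the example prompt are illustrative assumptions, and regex-based redaction is a safeguard rather than a guarantee.

```python
import re

EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def redact(prompt: str, known_names: set[str]) -> str:
    """Remove obvious personal data before a prompt leaves the company.
    Regex-based redaction only catches what it is told to look for."""
    prompt = EMAIL.sub("[email]", prompt)
    for name in known_names:
        prompt = re.sub(re.escape(name), "[person]", prompt, flags=re.IGNORECASE)
    return prompt

# Illustrative, hypothetical prompt
prompt = "Draft a reference letter for Erika Musterfrau (erika.musterfrau@example.com)."
safe_prompt = redact(prompt, known_names={"Erika Musterfrau"})
print(safe_prompt)
# -> "Draft a reference letter for [person] ([email])."

# safe_prompt could now be passed to an LLM endpoint of your choice
# (with the training/history opt-out enabled and via a functional account).
```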
Recommendations for Companies
To ensure that the use of AI in HR remains legally compliant and ethically acceptable, companies should definitely develop an AI strategy, including the identification of specific use cases and the definition of goals and limits for the use of AI. The introduction of an AI policy with definitions of permissible and impermissible applications can and should serve this purpose. Data protection measures must be implemented to ensure GDPR compliance. The risk of discrimination must be minimized through regular monitoring and adjustment of the AI models used. As we have learned, this is not possible without human supervision. Final decisions should be made by humans and not entirely automated by AI systems. Last but not least, technical and legal developments should be closely monitored.
For example, the training requirement under Article 4 of the AI Act has been in force since the beginning of February this year. If we take this to heart, there is a lot we can do between now and the entry into force of the above requirements for high-risk systems in the summer of 2026 to put the use of AI in HR on a healthy and profitable footing.
Incidentally, I have omitted the employment law topic of co-determination in the workplace with regard to the use of AI here, as this would go beyond the scope of this article and my area of expertise.
Conclusion
The use of AI in HR offers enormous potential but requires careful legal and ethical consideration. Companies must be aware of the risks and implement appropriate safeguards. Only then can AI be used efficiently and in compliance with the law in human resources – and then it can bring considerable benefits and cost savings.