When one thinks of a use case for Artificial Intelligence, labour and employment law is most likely not the first area of law that comes to mind.


The use of Artificial Intelligence (AI) is becoming increasingly common in labour law contexts as well¹. AI is already being used at various stages of the employment lifecycle: algorithm-based selection of candidates during recruitment², analysis and supervision of employees during employment³, AI as support in determining the group of employees affected by a dismissal, etc.

It is therefore no surprise that the EU's current regulatory activities on AI include labour law. The way in which labour law and AI are presented in these regulations is nevertheless surprising.

Current proposal by the EU

In 2020, the European Commission issued a White Paper⁴ and, on April 21, 2021, a Proposal for an Artificial Intelligence Act⁵ (the “AI Proposal”). European Commission White Papers are documents containing proposals for European Union (EU) action in a specific area. The general reasoning for later Proposals can be found in the White Papers. The purpose of a White Paper is to launch a debate with the public, stakeholders, the European Parliament and the Council in order to arrive at a political consensus. The Commission’s 1985 White Paper on the completion of the internal market is an example of a blueprint that was adopted by the Council and resulted in the adoption of wide-ranging legislation in this field. Recent Commission White Papers include:

An agenda for adequate, safe and sustainable pensions (2012);
Towards more effective EU merger control (2014);
The future of Europe – Reflections and scenarios for the EU27 by 2025 (2017);
Artificial Intelligence – A European approach to excellence and trust (2020).

The current AI Proposal of the EU Commission is consequently based on its White Paper, and the reasoning in the White Paper underlies the reasoning of the AI Proposal itself. The starting point chosen for the regulation of AI, a risk-based approach, is valid in principle. On page 17 of the White Paper the Commission stated:
“As a matter of principle, the new regulatory framework for AI should be effective to achieve its objectives while not being excessively prescriptive so that it could create a disproportionate burden, especially for SMEs. To strike this balance, the Commission is of the view that it should follow a risk-based approach.”

And further on page 18:
“Notwithstanding the foregoing, there may also be exceptional instances where, due to the risks at stake, the use of AI applications for certain purposes is to be considered as high-risk as such – that is, irrespective of the sector concerned and where the below requirements would still apply. As an illustration, one could think in particular of the following:
In light of its significance for individuals and of the EU acquis addressing employment equality, the use of AI applications for recruitment processes as well as in situations impacting workers’ rights would always be considered “high-risk” and therefore the below requirements would at all times apply.(…)”⁶

It is so much more than a bag…

This perception of AI as “high-risk” in the area of labour law where workers’ rights are impacted is stated essentially without any further reasoning. But the EU is not alone in this perception. Especially when it comes to recruitment, which is expressly mentioned by the AI Proposal and is one of the areas where AI is used most within an employment context, questions of bias and discrimination have been raised and are documented by certain prominent examples⁷. Whereas court decisions are currently not widespread, one recent decision of the Labour Court of Appeal in Cologne of May 15, 2020, Reg. 9 TaBV 32/19⁸, underpins this notion of AI as a dangerous tool, even though in that case the recruitment tool was only used to store applicants’ information. In its reasoning, the Appeal Court takes a view similar to that of the White Paper and the AI Proposal: AI in the labour law area is to be considered high-risk irrespective of the usage at stake.

Quote from Court’s reasoning:

“The documents to be submitted to the works council in accordance with Section 99 (1) of the German Works Constitution Act (BetrVG) are not limited, when an electronic application management system is used, to the documents stored in it. Such a paper-based understanding of the term ‘document’ would be too narrow in view of the development of electronic recruiting systems. An applicant management tool is more than a collection of documents in file form. It offers far more: a modern applicant management system can be used to publish job offers online, enabling applications to be made directly via a link. (…) Even if, according to the user manual, S-Application Management does not have such a function but can rank and match applicants solely on the basis of the evaluations by the members of the recruiting team, the application management system opens up functionalities for applicant selection that go far beyond the mere inspection of the stored documents and will become important for the employer’s selection decision.” (highlighting and translation by the author)

Proper risk-based handling

This general labelling of AI as “high-risk”, i.e. potentially dangerous, does not reflect the actual use of the tool in the specific case. It should not be denied that AI algorithms can massively interfere with the rights of employees, be it through discriminatory choices during recruitment processes or through performance monitoring that lacks any legitimation.

For the latter, the “keylogger” decision of the Federal Labour Court, BAG 2 AZR 681/16, is a perfect illustration⁹. With this decision, the Federal Labour Court took a clear stand on the exclusion of evidence improperly obtained via AI-based supervision of every keystroke employees make. The interference with the fundamental rights of the employees, irrespective of any suspicion of wrongdoing, could not even be justified by the employees’ consent. The decision has also generally been well received in international commentary¹⁰. Importantly, it focused on the violation caused by the use of the specific algorithm.

By contrast, a general statement such as “AI is high-risk in the area of recruitment” by the EU, or “the application management system opens up functionalities for applicant selection that go far beyond the mere inspection of the stored documents” by the Labour Court of Appeal in Cologne¹¹, ignores the actual usage of AI in the specific application.

The fear that AI, e.g. in the context of recruitment, makes decisions based on algorithms which might be biased or discriminatory is already addressed by data privacy regulations. According to Art. 22 GDPR, the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her¹². Thus, AI can legitimately be used only as augmented intelligence.
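The distinction between automated decision-making and augmented intelligence can be illustrated with a small sketch. The following is a hypothetical applicant-screening workflow, not the design of any real tool: all class, function and field names are the author's illustrative assumptions. The point is structural, namely that the algorithm may rank, but the legally effective decision must come from a human reviewer (cf. Art. 22 GDPR):

```python
# Hypothetical sketch: an AI recruitment tool used as augmented
# intelligence. The algorithm may score and sort applications, but a
# decision only becomes effective once a human has actually taken it.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Application:
    applicant_id: str
    ai_score: Optional[float] = None       # advisory ranking only
    human_decision: Optional[str] = None   # e.g. "hire"/"reject", set by a person

def ai_rank(applications: List[Application],
            score_fn: Callable[[Application], float]) -> List[Application]:
    """The algorithm may pre-sort applications, but decides nothing."""
    for app in applications:
        app.ai_score = score_fn(app)
    return sorted(applications, key=lambda a: a.ai_score, reverse=True)

def finalize(app: Application) -> str:
    """A decision is only valid once a human reviewer has made it."""
    if app.human_decision is None:
        raise ValueError(
            f"{app.applicant_id}: no human decision – a decision based "
            "solely on automated processing would conflict with Art. 22 GDPR"
        )
    return app.human_decision

apps = [Application("A-1"), Application("A-2")]
ranked = ai_rank(apps, score_fn=lambda a: 0.5)  # placeholder scoring model
ranked[0].human_decision = "hire"               # a human acts on the ranking
print(finalize(ranked[0]))                      # prints: hire
# finalize(ranked[1]) would raise: no human has decided yet
```

In this structure the algorithm never produces a legal effect on its own; the human review step is mandatory, which is exactly the use of AI as "augmented intelligence only" described above.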
The use of AI technologies in an area such as recruitment or workforce management does not per se qualify an application as high-risk. To do so would amount to regulating the technology rather than the use of the technology. This reinforces why it is paramount to identify the specific risk foreseen from the use of AI in a particular context rather than risking the exclusion of the employment sector from the potential benefits of AI.
Consequently, greater focus should be placed on the standard of the algorithm used in the specific AI application. In April 2019, the EU published its Ethics guidelines for trustworthy AI¹³, and quite a few member states have published their own sets of guidelines¹⁴. The common view is that trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

  1. it should be lawful, complying with all applicable laws and regulations;
  2. it should be ethical, ensuring adherence to ethical principles and values; and
  3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm¹⁵.
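The risk-based logic argued for in this article can be sketched in a few lines: risk is assessed per concrete deployment against the trustworthiness components, not per sector. The criteria, labels and thresholds below are invented for illustration and are not taken from the EU guidelines or the AI Proposal:

```python
# Illustrative sketch of a use-case-based risk assessment: the same
# sector ("recruitment") can yield different risk levels depending on
# how the tool is actually used. All classifications are the author's
# illustrative assumptions, not a legal test.
def assess_use_case(lawful: bool, ethical: bool, robust: bool,
                    decides_autonomously: bool) -> str:
    """Classify a concrete AI deployment, not a whole sector."""
    if not (lawful and ethical and robust):
        return "high-risk"      # fails a trustworthiness component
    if decides_autonomously:
        return "high-risk"      # solely automated decision (cf. Art. 22 GDPR)
    return "acceptable"         # trustworthy tool used as augmented intelligence

# Same sector, different usage, different result:
storage_only = assess_use_case(lawful=True, ethical=True, robust=True,
                               decides_autonomously=False)
auto_reject = assess_use_case(lawful=True, ethical=True, robust=True,
                              decides_autonomously=True)
print(storage_only, auto_reject)  # prints: acceptable high-risk
```

A mere document store, as in the Cologne case, and a fully automated rejection engine would come out differently under such an assessment, which is precisely what a blanket "recruitment AI is high-risk" label fails to capture.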

These standards define how AI should be designed and used, and any misuse of AI can be measured against these principles. Whether AI in a labour law context creates a high risk depends on whether the AI and its underlying algorithm are based on the principles of trustworthiness, and on the actual use of the AI. A general ban of AI in the mentioned sectors of labour law would contradict the risk-based approach generally taken in the EU proposal and would ultimately ban a technology irrespective of its actual usage. Even though burdensome, only an understanding of the algorithm and its effect in the specific use case satisfies the risk-based approach and ensures proper handling.


1 Carsten Orwat, Diskriminierungsrisiken durch Verwendung von Algorithmen, Antidiskriminierungsstelle des Bundes, Berlin 2020, page 34 f.
2 Jorg Henning, Anika Nadler, Künstliche Intelligenz im Arbeitsrecht, in KI & Recht Kompakt, Editor Matthias Hartmann, Berlin 2020, page 239 f.
3 German Federal Labour Court (BAG), decision of July 27, 2017, BAG 2 AZR 681/16; https://www.bundesarbeitsgericht.de/entscheidung/2-azr-681-16/
4 https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
5 COM/2021/206 final – https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
6 https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
7 A list is provided by Carsten Orwat in his study, Diskriminierungsrisiken durch Verwendung von Algorithmen, Antidiskriminierungsstelle des Bundes, Berlin 2020, page 34 f.
8 http://www.justiz.nrw.de/nrwe/arbgs/koeln/lag_koeln/j2020/9_TaBV_32_19_Beschluss_20200515.html
9 German Federal Labour Court (BAG), decision July 27, 2017; BAG 2 AZR 681/16; Nr. 21 f. of the reasoning https://www.bundesarbeitsgericht.de/entscheidung/2-azr-681-16/
10 Christopher Ritzer, German court: monitoring of employees by key logger is not allowed; https://www.dataprotectionreport.com/2017/08/german-court-monitoring-of-employees-by-key-logger-is-not-allowed/; Julia Kaufmann et al., German Federal Labor Court: Employer cannot use information from secret keylogger software as evidence in court.
11 http://www.justiz.nrw.de/nrwe/arbgs/koeln/lag_koeln/j2020/9_TaBV_32_19_Beschluss_20200515.html
12 Dzida/Groh, Diskriminierung nach dem AGG beim Einsatz von Algorithmen im Bewerbungsverfahren, NJW 2018, 1917 ff., 1920.
13 European Commission, Directorate-General for Communications Networks, Content and Technology, Ethics guidelines for trustworthy AI, Publications Office, 2019, https://data.europa.eu/doi/10.2759/177365
14 France: https://uk.ambafrance.org/France-s-AI-strategy; Germany: https://knowledge4policy.ec.europa.eu/publication/germany-artificial-intelligence-strategy_en
15 European Commission, Directorate-General for Communications Networks, Content and Technology, Ethics guidelines for trustworthy AI, Publications Office, 2019, page 5.
