Zooming in on AI #17: AI at work
The integration of AI in the workplace is revolutionising HR. From recruitment to performance analysis, AI can streamline HR processes and enhance productivity. However, the deployment of AI by employers also brings significant legal challenges, particularly under the AI Act.

This publication in our “Zooming in on AI” series explores the deployment of AI by employers, focusing on the specific provisions of the AI Act. It addresses (i) AI use cases in employment that qualify as unacceptable risk, high-risk or limited risk, (ii) the qualification of employers under the AI Act, and (iii) the policies that employers should implement to ensure compliance and responsible use of AI. This publication does not address data protection law or specific national legislation (e.g., on the use of camera surveillance in the workplace) that might also apply.

What is the risk level of AI systems used in an employment context?

The AI Act adopts a risk-based approach to regulate AI systems, categorising them into different risk levels, each with specific requirements and obligations. Below, we discuss the relevance of the three main risk levels in an employment context: unacceptable risk, high-risk, and limited risk.

1. Unacceptable risk

AI systems that pose an unacceptable risk are prohibited under the AI Act. Of particular relevance is Article 5(1)(f) of the AI Act, which prohibits AI systems used in the workplace “to infer emotions of a natural person [...], except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons”.

As a result, AI systems that infer the emotions of employees (e.g., systems that monitor employees’ happiness, frustration, satisfaction or boredom) are prohibited, except when they are used for medical or safety reasons. This prohibition stems from the legislator’s concerns that these systems lack scientific validation and may result in unjustified and discriminatory outcomes.

As mentioned, this prohibition does not apply when emotion recognition systems are used for medical or safety reasons. The interpretation of this exception has been the subject of lively debate. However, recent guidance from the European Commission appears to have largely settled the matter, advocating a restrictive reading of the exception. In an employment context, it is important to note that the European Commission considers that AI systems used to detect burnout or depression in the workplace are not covered by the exception and remain prohibited. Conversely, AI systems designed to assist blind or deaf employees in performing their tasks would fall under this exception.

2. High-risk

Due to their potential impact on individuals’ rights, high-risk AI systems are subject to stringent legal requirements, including the establishment of a risk management system, data governance requirements, and record-keeping and transparency obligations.

Annex III of the AI Act includes two main AI use cases that are relevant in the employment context:

  • AI systems intended to be used for the recruitment or selection of natural persons, including AI systems used for advertising vacancies, analysing or filtering applications and evaluating candidates during interviews or tests; and
  • AI systems intended to be used to make decisions in the workplace, in particular decisions regarding promotion or termination, task allocation based on individual behaviour or personal traits, and the evaluation of employees’ performance.
These AI systems are considered high-risk due to the impact they may have on employees’ work and life. This has been illustrated by CV-screening systems that were found to discriminate against women because they had been trained on historical data that predominantly included male employees.

3. Limited risk

AI systems that pose a limited risk are subject to less onerous obligations, primarily focusing on transparency.

In an employment context, this mainly concerns AI chatbots or virtual assistants. If an employer wants to offer a chatbot or virtual assistant to its employees, it must be clear to the employees that they are interacting with an AI system. For example, if an HR chatbot is used to answer employee queries, it must be clearly disclosed that the responses are generated by AI.

Does an employer qualify as a deployer or provider?

When deploying AI, employers will typically purchase and use externally developed AI systems. In that case, the employer will be considered the deployer of the AI system.

However, in limited situations, an employer may be considered the provider of an AI system. This may occur when the employer has developed its own in-house AI system, or when it is requalified from deployer to provider under Article 25(1) of the AI Act.

Under this article, a deployer is requalified as a provider when it (i) puts its name or trademark on a high-risk AI system, (ii) makes a substantial modification to a high-risk AI system in such a way that the system remains high-risk, or (iii) modifies the intended purpose of an AI system, turning it into a high-risk AI system.

Consequently, employers should be cautious about rebranding an externally developed AI system or expanding its use, since even these seemingly minor interventions could trigger a requalification from deployer to provider. For instance, if an employer is granted a licence to use an externally developed HR chatbot and brands that chatbot with the name of its organisation, the employer risks being qualified as a provider of the chatbot. Similarly, if the chatbot was not intended to be used in a recruitment context and the employer repurposes it for such use, the employer also risks being requalified as a provider.

We refer to this article of our series for further information on the distinction between deployers and providers, as well as the risk of requalification.

What are some of the key policies an employer should put in place for its employees?

It is often stated that the greatest cyber risk lies with individuals. This holds true in an employment context as well: without clear guidelines on what employees can and cannot do with an AI system, the risks are numerous. For instance:

  • unchecked use of the output of an AI system by employees can subject the employer to liability risks, for instance where employees use incorrect output to create documents for customers;
  • unchecked prompt input into an AI system by employees risks disclosing confidential information or key intellectual property of the employer, or could even lead to the disclosure of customers’ confidential information; or
  • employees who develop an AI system may use incorrect or incomplete data sets, leading the AI system to produce inaccurate or discriminatory outcomes.
To mitigate these risks, it is important that the employer adopts a thorough policy framework for its employees, as part of its broader governance framework. Below, we provide some thoughts on the types of policies and associated training employers typically put in place.

1. Written policies

Employers should establish a clear AI policy to ensure that the use of AI in their organisation complies with applicable legislation and to limit the risks of AI use as far as possible. Such a policy should include at least the following elements:

  • Scope and applicability: to whom and to which AI systems does the policy apply? It is generally recommended that the policy is not limited to employees, but also covers independent contractors and subcontractors.
  • Permitted and prohibited AI systems: which AI systems may employees use? The policy should clearly set out which AI systems are permitted in the organisation (e.g., AI systems that have undergone rigorous testing) and which are prohibited. Particularly for organisations that work with highly sensitive data, it is recommended not to allow the use of publicly accessible AI systems and to opt instead for tailor-made, secure solutions.
  • Proper use of AI: how should employees use the AI systems? Proper use could, for instance, include a prohibition on entering confidential or sensitive information in prompts, an obligation to verify all output before circulating it, or an obligation to clearly indicate which output was generated by AI. Employers could also indicate for which tasks AI can or cannot be used. For example, the use of chatbots for research could be prohibited, while their use for internal communication and translations is allowed.
  • Accountability and governance: who is responsible for AI within the company? The policy should clarify who is responsible for the various aspects of the use of AI. This should also include contact information and procedures for incident reporting.
  • AI audits: how is compliance verified? The policy should set out how compliance will be verified, for instance by indicating whether and how often audits take place and which standards apply.
  • Disciplinary action: how is the policy enforced? Enforcement of the AI policy is crucial to its effectiveness. The policy should therefore clarify the consequences of an employee’s failure to comply.

In addition to this elaborate policy, we recommend creating a concise document in the style of “dos and don’ts” or “10 key rules”. This short document can be used to effectively convey the key messages to employees in a practical and easily understandable manner.

2. Training

Although written policies are crucial, they are only effective if employees are actually aware of their provisions and able to apply them in practice. It is therefore essential to offer appropriate training to employees, clearly explaining the key rules of the policy. The level of training should vary based on the employee’s role. While all employees will benefit from basic training, those involved in higher-risk tasks, such as AI development or the deployment of high-risk AI systems for recruitment, should arguably receive tailored and more extensive training.

Conclusion

The integration of AI into HR practices offers significant benefits, but also brings legal challenges. It is key that employers are aware of these challenges and implement policies and training to adequately manage and mitigate the associated risks.
