
Using chatbots in employment/HR: what are the legal implications?

14/02/2023

The launch of ChatGPT by OpenAI in November 2022 has reignited discussion around automated decision making (ADM), in particular the use of chatbots and its implications. ChatGPT is a chatbot trained to respond to human input in a conversational way and can generate human-like text based on the context of the conversation. This technology, and ADM more broadly, can be used by organisations to automate repetitive processes and streamline tasks. In the employment/HR sector, this includes:

  • throughout the recruitment process: to search for prospective employees; filter and screen applications; conduct interviews; and answer candidate questions;
  • onboarding and training employees;
  • answering questions about company policies and procedures;
  • managing staff and assigning tasks; and
  • in the termination process, including drafting termination letters.

What are the legal implications? 

As chatbots are developed and trained by humans, biases and inequalities in society are inadvertently coded in. Where a chatbot has applied a policy, process or job requirement that inadvertently disadvantages a group with a protected characteristic because of these learned biases, employers are at risk of indirect discrimination claims. Relying on the justification that the use of ADM was a proportionate means of achieving a legitimate aim has not yet been tested in the Employment Tribunal: whilst employers may be able to prove their aim was legitimate, it may well be difficult to prove the means were proportionate. Financial penalties for a finding of discrimination can be severe, including compensation for loss of earnings as well as "injury to feelings". There may also be reputational repercussions.

Chatbots regularly process personal data when they interact with employees or applicants. Under Article 22 of the UK GDPR, a data subject has the right not to be subject to a decision based solely on automated processing, including profiling, where that decision has legal or similarly significant effects on them. This right is subject to limited exceptions, including where the decision is necessary for entering into or performing the employment contract, or where the data subject has given explicit consent. Even where an exception applies, additional safeguards must be in place to protect employees and applicants from solely automated decision-making.

Guidance for employers

To remain compliant with the UK GDPR and the Equality Act 2010, the use of chatbots should be monitored and their decisions reviewed by humans before they are applied to employees or applicants. In June 2022, following consultation, the Government announced its intention to clarify and limit the scope of the Article 22 protections. If introduced, these changes would potentially make it easier for employers to rely on automated processing and decisions.

Employers should establish clear and transparent policies around the use of chatbots in the workplace and keep these under careful review as the technology develops. Chatbots should be programmed and trained to identify complex issues that may have legal or ethical consequences so that these can be referred for human consideration.

Employers should watch for the anticipated further regulation and guidance on the use of chatbots and on responsibility for their decisions, in order to avoid fines and potentially costly, reputationally damaging litigation.
