The use of Artificial Intelligence ("AI") in the workplace is increasingly common. Despite legal constraints around automated decision-making, it is often used for the benefit of a business's customers or clients, and increasingly in workforce management.
One example is so-called empathic AI. Artificial empathy (sometimes referred to as computational empathy) is the capacity of an AI system to detect emotions, for example from the properties of an individual's voice, and to respond to them in an empathic manner.
With UK businesses losing around 18 million working days each year to mental ill-health, there are clear incentives to invest in solutions that could alleviate the problem.
Does empathic AI have a role to play alongside Employee Assistance Programmes (EAPs), Occupational Health services and other professional advice?
While empathic AI could be an additional and innovative tool to tackle mental ill-health, employers should remember that the legal issues associated with workplace deployment are not straightforward.
Employers are already (or should be!) aware of the risks associated with the use of AI, including breaches of data protection legislation, copyright infringement, breaches of confidentiality obligations and discrimination. Businesses should also bear in mind the danger of AI hallucination, where an AI tool generates incorrect or misleading results, often because of flawed assumptions, insufficient training data or biases in that data. This underlines the importance of due diligence before buying in any new technology, and of regular audit and review once it is in place.
We anticipate significant legal challenges for employers using empathic AI in the workplace, for example in relation to the processing of special category data (such as health data) and the implied duty of trust and confidence. EU-based employers will also need to consider the impact of the EU AI Act, which prohibits AI systems that infer employees' emotions in the workplace, save where they are deployed for medical or safety reasons.
Employers would also need to consider potential liability for AI failures. Under the Health and Safety at Work etc. Act 1974, employers have a duty to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all of their employees. They would therefore have to carry out a risk assessment of whether empathic AI tools might inadvertently worsen an employee's mental health condition (for example, because of biases or faults in the tool's responses to an individual's emotions). In practical terms, employers could be held liable for any personal injury this causes, and they should check that their insurance policies cover AI-related liability in this context.
Agentic AI (of which empathic AI is an example) is an exciting development in a fast-moving area, but employers contemplating deployment should weigh the legal and practical issues carefully and introduce new technologies only after thorough due diligence and risk assessment.