HR teams have rapidly embraced AI for recruitment, administrative tasks, performance tracking and employee training, with around 60% of UK employers using it in workforce management in some form. In that context, it's unsurprising that employees have adopted AI with the same alacrity, and it's now a frequent feature of employee relations disputes and employment litigation.
It's now very common for employees to use ChatGPT or other generative AI tools to draft grievances, responses to disciplinary allegations and challenges to redundancy processes. This poses practical and legal issues for employers. On a practical level, it makes it far simpler for employees to deluge employers with material which employers either need to respond to or justify not responding to. The peculiar feature of generative AI - its ability to mix the superficially plausible with the wildly inaccurate - means that employers can easily spend a lot of time sorting the reasonable points from the inaccurate or irrelevant. While some employees have always adopted this approach to internal processes, generative AI means that it's no longer only the most doggedly determined employees who can generate this volume of material - a mildly disgruntled employee with five minutes and a smartphone can now do the same. Employers need to be far more tactical in identifying what actually merits a response - e.g. what is likely to affect an Employment Tribunal's view of whether the internal process was fair and legally compliant.
Another practical issue is that the AI-drafted document may bear little relation to reality - it may include embellishment, repetition and complete invention. Although AI-detection tools are readily available, in most cases employers should focus on dealing with the substance of the issues raised, including meeting with the employee to understand their actual concerns. Employers should also be alive to the legal ramifications of using AI-detection tools in internal disputes (which may develop into litigation). Such tools are likely to involve automated processing, which would need to be covered by the employer's privacy notice and carried out in accordance with data protection requirements. Employers should likewise be wary of assuming that a grievance was drafted by AI simply because it appears more eloquent than the employee's usual writing - assumptions about writing style and ability are vulnerable to accusations of discrimination and implicit bias. In many cases, trying to assess how the employee drafted their grievance is likely to prove at best a distraction and at worst another potential avenue of complaint.
Confidentiality is also a real concern. Employees who use generative AI to draft (for example) a grievance may well be inputting highly sensitive internal information into the AI tool, and that information may then be used as training data unless the employee has adjusted their privacy settings to prevent this. The reality is that such breaches are very difficult for employers to police if the employee has used a personal device rather than a work computer, and in most cases this will not be a helpful avenue for the employer to pursue.
Although employers should be aware of these issues, they should not exaggerate the risks and disadvantages. For all that employees are instructed to keep grievances confidential, most disgruntled employees will discuss a grievance or disciplinary issue with their family or friends - ChatGPT is, on one view, not all that different. And although its accuracy is (to say the least) imperfect, it can help employees to organise their thoughts and consider potential resolutions, so it may have some role to play in helping employers resolve issues - it can certainly level the playing field for employees who find it difficult to express themselves in writing or don't speak English as a first language. While employers should be alive to the issues, a heavy-handed approach is likely to cause more problems than it solves.
