
Using AI ethically - and effectively - in recruitment

20/06/2024

Ask any business leader what their key priorities are, and the chances are that "hiring and retaining top talent" will be near the top of their list. With high performers contributing disproportionately to the success of a business, it makes sense to place a premium on attracting the best candidates. But recruiting is costly, time-consuming and often unreliable: according to the Harvard Business Review, 40-60% of executive hires fail within the first 18 months. It's no surprise, then, that AI technology promising to streamline the process and make it easier to identify the best candidates is very attractive. However, businesses need to take care when implementing AI recruitment tools, as recent Government guidance highlights.

Here are some of the key ethical and practical considerations:

Reliability: Any AI system is only as good as the data fed into it. Before implementing tools that rely on your existing data (on performance, skills, career progression and so on), it's essential to assess whether that data is reliable and consistent. If your people analytics aren't thorough or reliable, fixing that should be the first priority.

Data protection and transparency: Use of AI in recruitment is likely to require a data protection impact assessment to identify risks and how to mitigate them. If candidates' data will be subject to solely automated decision-making or profiling in the recruitment process (e.g. CV screening), this can only be carried out with their explicit consent or where it's necessary for entering into a contract, and with additional safeguards such as a right to contest the decision. If a human being is involved in the decision-making, the rules on solely automated processing won't apply, but candidates still need to be told how their data will be used via a privacy notice. From an ethical standpoint, transparency is key, as is ensuring that candidates can challenge unfair decisions.

Bias and discrimination: Many organisations use AI to try to remove human bias from decision-making, but AI tools are susceptible to their own forms of bias (such as amplifying bias in the underlying data) and can produce discriminatory effects. The Government guidance gives some useful examples: CV screening tools that reject candidates with career gaps may disadvantage working parents and people with disabilities; facial recognition technology is notoriously less accurate for Black people and those with facial differences; and interview analysis tools may perpetuate bias against neurodivergent candidates and some ethnicities.

Employers looking to buy in these tools should ask searching questions of providers about the steps they have taken to eliminate these sources of bias, and should build equality outcomes into their contract criteria.

Effectiveness: As AI tools become more sophisticated, it's easy to forget that they don't actually think for themselves (yet!). AI is a predictive tool, which means it necessarily works from averages. It's poor at spotting outliers, yet sometimes the best performers are the least obvious candidates. An effective AI-enhanced recruitment process will build in human involvement ("human in the loop") to ensure that promising candidates who don't fit the mould aren't missed. Another point to consider is that candidates increasingly use generative AI themselves to enhance their applications: employers may want to design recruitment processes that minimise the scope for this and avoid identikit applications.

Culture: One aspect that the Government's guidance doesn't address is culture. What does it say about your organisation if you use AI heavily in the recruitment process? For some employers, this will align with their values and culture, but not for all. Employers should consider carefully how AI-driven recruitment fits with their organisational culture and the candidates they want to attract.

AI is a powerful tool for recruitment teams - but one to be used thoughtfully and with an eye to the legal and ethical complexities.
