Using AI in recruitment to help not harm diversity
In 2021, DCA partnered with Hudson RPO and Monash University in a groundbreaking three-year research project, Artificial Intelligence (AI) at Work in Recruitment. The project was initiated in response to unprecedented levels of activity and investment in AI, occurring both globally and in Australia.
The first phase of the three-stage project identified the advantages and disadvantages of using AI in recruitment and offered implementation advice to potential users. Stage two mapped the current state of play, highlighting how widely AI tools and software are used in recruitment in Australia.
Stage three, the final phase of AI at Work in Recruitment, builds on research from the earlier phases, combined with expert panel advice and insights from lived experience within the field. In this report, DCA delves into the different ways AI in recruitment can either help or inadvertently harm diversity and inclusion in the workforce, and shares guidelines to assist leaders and practitioners in selecting and deploying AI tools.
Why is it important to consider diversity when it comes to AI?
Used correctly, artificial intelligence can reduce costs, save time, and create fairer outcomes for minority groups. However, this project has revealed that without diversity front of mind, AI has the potential to mirror society’s inequities and further bake in bias.
Why is it hard to avoid embedding bias when using AI?
- If AI is not designed with diversity in mind, it mirrors society’s inequities: Our society is characterised by many systemic inequities, and these show up in the way AI recruitment tools are built and used.
- The tech workforce has little lived experience of marginalisation: The tech and AI workforce is not very diverse, making it harder for developers to recognise exclusion and bias when they occur, and to pre-empt how AI recruitment tools can include or exclude a diversity of talent.
- Measuring fairness and eliminating bias in AI is tricky: In many cases, it is impossible to satisfy all definitions of fairness at once, and maximising fairness for one diversity group may inadvertently disadvantage another.
- AI governance is still in its infancy: The rapid advancements in AI technology have outpaced the development of comprehensive governance frameworks. However, it is important to remember that existing anti-discrimination legislation covers recruitment, and through that also AI recruitment.
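The tension described in the fairness bullet above can be made concrete. The following is a minimal sketch using hypothetical toy numbers (the applicant pool, group labels, and selection outcomes are all invented for illustration): when two groups have different base rates of being qualified, a selection rule that satisfies demographic parity (equal selection rates across groups) can still fail equal opportunity (equal selection rates among *qualified* applicants).

```python
# Hypothetical applicant pool: (group, qualified, selected).
# Group "A": 4 applicants, 2 qualified; Group "B": 4 applicants, 1 qualified.
# All values are invented purely to illustrate the conflict.
applicants = [
    ("A", True,  True),  ("A", True,  False),
    ("A", False, True),  ("A", False, False),
    ("B", True,  True),  ("B", False, True),
    ("B", False, False), ("B", False, False),
]

def rows_for(group):
    """All applicants belonging to the given group."""
    return [r for r in applicants if r[0] == group]

def selection_rate(rows):
    """Fraction of the given rows that were selected."""
    return sum(sel for _, _, sel in rows) / len(rows)

# Demographic parity: equal selection rates across groups.
sel_a = selection_rate(rows_for("A"))  # 2 of 4 selected -> 0.5
sel_b = selection_rate(rows_for("B"))  # 2 of 4 selected -> 0.5
assert sel_a == sel_b  # demographic parity holds

# Equal opportunity: equal selection rates among qualified applicants only.
def qualified_selection_rate(group):
    qualified = [r for r in rows_for(group) if r[1]]
    return selection_rate(qualified)

print(qualified_selection_rate("A"))  # 0.5 (1 of 2 qualified selected)
print(qualified_selection_rate("B"))  # 1.0 (the 1 qualified applicant selected)
```

Because the groups' base rates differ, equalising one metric forces the other apart; which metric an employer prioritises is a value judgement, not a purely technical choice.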
The report introduces two practical resources:
- T.R.E.A.D. (Team Up, Reflect, Educate, Acquire, Decide) – a five-step process to help employers ‘tread carefully’ when it comes to possible D&I risks in AI recruitment.
- Reflective assessment checklist – for use in the ‘Decide’ stage, a checklist that enables employers to make an informed decision about how best to proceed with deploying an AI recruitment tool, so that it helps rather than harms workforce diversity.
DCA CEO, Lisa Annese said:
“We know that unless AI is deployed with a focus on diversity and inclusion it has the potential to mirror society’s inequalities and bake in systemic biases. Conversely, if it’s used with D&I front of mind, the benefits can be astounding.
“As the third and final stage of DCA’s Inclusive AI at Work in Recruitment project, the Inclusive AI at Work in Recruitment Employer Guidelines is the culmination of three years of trailblazing research, distilled into practical steps for crucial reflection and action.”
Hudson RPO CEO, Kimberley Hubble, said:
“As a global leader in Recruitment Process Outsourcing and Talent Advisory services, being at the forefront of technological advances that impact recruitment and, subsequently, people is a priority.
“DCA’s three-year research program found that AI-powered recruitment can be a double-edged sword regarding workforce diversity.
“If used appropriately, it can pave the way for superior experiences and job opportunities for diverse candidates, but conversely, if used without the necessary understanding, it can reinforce systemic bias and discrimination.”
Want to use our research?
If you wish to use any content in this report, please contact us at email@example.com to seek consent.
Where you refer to our research publicly, you must correctly attribute it to DCA.
We require formal attribution for all written references to our research material. Citing DCA as a source will suffice where the reference is verbal.