In Episode 12 of AI in Recruitment, co-hosts Jasper Spanjaart and Martijn Hemminga explore the evolving role of artificial intelligence in the recruitment industry, with a focus on AI-driven bias and discrimination. With soundbites from Brando Benifei of the European Parliament and Laurens Waling of 8vance.
New music and Apple’s AI adventure
The episode begins with a conversation about recent advancements and controversies in AI, including new music generators like Suno and Udio, which raise ethical concerns about copyright infringement and the use of artists’ work without compensation. Other news includes Apple’s venture into the AI space… finally. Apple is preparing a series of AI announcements for its Worldwide Developers Conference 2024 in June, but we don’t yet know exactly what these will entail.
Apple’s AI researchers this week published a research paper that may shed new light on the company’s plans for Siri: it could let Siri remember your conversation history, understand what’s on your iPhone screen, and be aware of surrounding activity, such as recognizing the music playing in the background. The researchers say it could outperform GPT-4 ‘substantially’.
Bias in recruitment
The main discussion centers on the persistent issue of bias in AI systems used for recruitment. The co-hosts detail how AI, trained on historical data that may reflect past prejudices, can inadvertently perpetuate these biases. They discuss specific examples, such as facial recognition software’s higher error rates for certain demographic groups and AI systems favoring resumes with traditionally masculine language. “Despite advancements in AI technology, bias remains a significant concern,” Spanjaart says. “Artificial Intelligence seems futuristic at times, but it’s not immune from the prejudices of the past.”
“These biases aren’t just theoretical, they affect real people. This isn’t just a glitch; it’s a reflection of the data it’s trained on.”
“And that’s exactly where the problem starts. The AI learns from the vast amounts of data it is fed. If this data reflects historical biases, say, in hiring practices, then the AI will likely replicate these biases. But it goes beyond just recognition software.”
It’s all about the data
The problem lies in the data used to train these AI systems. “Imagine an AI consistently recommending male candidates for engineering roles, simply because that’s the historical pattern in the data. It overlooks qualified female candidates”, Hemminga says. The episode includes a soundbite from Brando Benifei, co-rapporteur on Europe’s AI Act, who discusses the importance of responsible AI development and the implementation of checks and standards for training data to combat bias.
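Hemminga’s example can be sketched in a few lines. This is a deliberately toy illustration, not anything from the episode: the dataset is entirely fabricated, and the “model” is just a group-level hire-rate lookup. It shows how a system that scores candidates by the historical pattern for their group reproduces whatever skew the training data contains, regardless of individual qualifications.

```python
from collections import defaultdict

# Fabricated historical hiring records: (role, gender, hired).
# Note the skew: male engineering applicants were hired more often.
history = [
    ("engineer", "male", True), ("engineer", "male", True),
    ("engineer", "male", True), ("engineer", "male", False),
    ("engineer", "female", True), ("engineer", "female", False),
    ("engineer", "female", False), ("engineer", "female", False),
]

def train_hire_rates(records):
    """'Train' by computing the historical hire rate per (role, gender)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for role, gender, hired in records:
        counts[(role, gender)][0] += int(hired)
        counts[(role, gender)][1] += 1
    return {group: hires / total for group, (hires, total) in counts.items()}

def score(model, role, gender):
    # The model knows nothing about an individual's qualifications:
    # its score is purely the historical pattern for that group.
    return model.get((role, gender), 0.0)

model = train_hire_rates(history)
# Two equally qualified candidates get different scores by group alone:
print(score(model, "engineer", "male"))    # 0.75
print(score(model, "engineer", "female"))  # 0.25
```

No real recruitment AI is this crude, but the failure mode is the same: when historical outcomes become the training signal, the past becomes the prediction.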
Listen to the full episode of AI in Recruitment
For those keen to delve into the complete discussion, the full episode is available on Spotify, Apple Podcasts, or through the link below.