‘Fear is slowing AI adoption’ Warden AI says, as 75% of HR leaders see bias as a top concern
Warden AI’s State of AI Bias in Talent Acquisition 2025 report highlights that bias is a primary concern for HR and TA leaders. Warden AI’s CEO and Co-founder explains more exclusively to UNLEASH.
UNLEASH takes a deep dive into the latest research from Warden AI’s State of AI Bias in Talent Acquisition 2025 report.
The data shows that 85% of audited AI models met fairness thresholds.
Speaking exclusively, Warden AI CEO and Co-founder, Jeffrey Pole shares what this means, and why AI needs “clarity, transparency, and responsible use”.
The implementation of AI comes with a number of uncertainties, with many individuals and businesses alike expressing concern about its ethical implications, privacy risks, and potential to displace jobs, to name a few.
Integrating it into HR systems also brings the topic of bias into play, with new research from Warden AI’s State of AI Bias in Talent Acquisition 2025 stating that 75% of HR and TA leaders believe that bias is a primary concern for those adopting AI.
However, 85% of AI systems were found to meet fairness thresholds, with data also suggesting that AI systems can deliver outcomes that are 39% fairer for female candidates and 45% fairer for racial minorities.
In an exclusive conversation with Jeffrey Pole, CEO and Co-founder of Warden AI, UNLEASH explores what these findings mean for HR.
Is AI truly unbiased?
The report, which analyzed “high-risk” AI systems used in TA and intelligence platforms, found that, on average, audited AI models delivered fairer and more consistent outcomes than human judgement.
From this, Pole warns that HR and business leaders are at a crossroads.
He says: “Fear is slowing AI adoption, but as our research shows, AI can deliver better outcomes and a better future for workers when it is used in the right way.
The data is heartening: 85% of audited AI models met fairness thresholds, and some improved fairness for female and minority candidates by up to 45%.
“That’s not just progress, it’s a powerful counter to fears that AI is inherently unfair, particularly in the wake of high-profile cases such as Mobley v. Workday.”
Yet not all AI systems performed equally: 15% of AI tools failed to meet fairness thresholds across all demographic groups, and performance was found to vary between vendors by as much as 40%, highlighting the care HR leaders need to take when selecting responsible vendors to partner with. Even so, AI was still found to outperform humans on fairness metrics.
Despite this, only 11% of HR buyers admitted to overlooking AI risk when assessing vendors, suggesting most remain cautious in their buying decisions.
What’s more, 46% shared that a vendor demonstrating a clear commitment to responsible AI is a critical success factor in the procurement process.
In response, Pole argues that although AI systems can be biased, leaders should focus on the many positives AI brings to HR and talent processes rather than dismissing them out of fear.
To keep those positives in view, he highlights the importance of a “strong culture of responsible use”, which includes formal processes for uncovering, reporting, and investigating AI bias.
“HR and IT partnering to understand where AI can be used for the greatest business impact or where unnecessary risk is being introduced can provide a useful framework for investing in and using new AI technologies,” he adds.
The benefits of AI can only be realized with clarity, transparency, and responsible use across HR teams and the vendors they partner with.
“By putting responsible AI measures in place, we can introduce AI solutions that build better, fairer outcomes for hiring and elsewhere.”