The AI Accountability Crisis: Legal Lessons from Hiring Bias Lawsuits
- Helen

The Promise of AI in Hiring
The allure of artificial intelligence in the recruitment landscape is undeniable. AI-powered Applicant Tracking Systems (ATS) and other automated tools promise to streamline the hiring process, enhance efficiency, and objectively identify the best candidates from a vast pool of applicants. By automating tasks like CV screening and initial assessments, these technologies allow HR and recruitment professionals to focus on strategic, high-value interactions. Vendors and advocates often position AI as a powerful solution for reducing unconscious bias, arguing that an algorithm, unlike a human, is unswayed by subjective factors such as personality or physical appearance.
However, this widespread adoption of AI has exposed a critical vulnerability: the technology is not inherently neutral. While AI can improve efficiency and, if used correctly, mitigate some forms of bias, it can also amplify existing inequalities if not managed with extreme care and diligence.
This has created a new layer of legal and regulatory risk for organisations. As companies race to integrate AI, a growing number of legal challenges and regulatory frameworks are emerging to hold them accountable. These developments signal a turning point in the use of HR technology. This article serves as a guide to navigating this complex new landscape, focusing on the sources of algorithmic bias and the landmark legal cases that are reshaping the future of hiring.

The Invisible Threat: A Deep Dive into Algorithmic Bias
Artificial intelligence bias occurs when an AI system exhibits a preference or prejudice toward certain candidates, resulting in a "statistical skew" that disadvantages specific groups of people. These tools, which lack human emotions and experiences, can perpetuate biases from their training data or design. The problem is not that the AI is malicious; rather, it is that the technology is a reflection of the data and assumptions on which it was built. Understanding the origins of this bias is the first step toward mitigating it.
One of the most common and visible culprits is biased training data, also known as historical bias. An AI model learns by analysing vast amounts of historical data. If an organisation's past hiring records are dominated by a specific demographic (for example, white males in technical roles), the algorithm will learn to identify and favour characteristics common to that group.
As a result, it may penalise candidates from historically underrepresented or discriminated groups because they are too dissimilar from the existing workforce. This creates a dangerous feedback loop where the technology recreates the very inequalities it was intended to eliminate. For instance, one case involved an algorithm that learned to downgrade any CV containing the word "women's," thereby sidelining qualified applicants from all-women's colleges or female-led organisations.
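To make that feedback loop concrete, here is a minimal, hypothetical sketch of how a text-based screening model trained on biased historical outcomes can end up penalising an innocuous token such as "women's". The CV snippets, labels, and model choice (a simple bag-of-words logistic regression built with scikit-learn) are illustrative assumptions, not a reconstruction of any vendor's actual system.

```python
# Hypothetical illustration: a screening model trained on biased
# historical hire/reject labels learns a negative weight for "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical records: past hires skew away from CVs that
# mention women's organisations or colleges.
cvs = [
    "captain of chess society, BSc computer science",        # hired
    "software engineer, open source contributor",            # hired
    "president of women's coding society, BSc mathematics",  # rejected
    "volunteer at women's shelter, MSc data science",        # rejected
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token "women" comes out negative:
# the model has encoded the historical skew as a screening rule.
idx = vectoriser.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

No rule about gender was ever programmed here; the bias is inherited entirely from the historical labels the model was asked to imitate.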
Beyond the training data itself, algorithmic and design flaws can also introduce or amplify bias. An algorithm may appear neutral on the surface but can still produce unfair outcomes. For example, a system might be designed to screen for seemingly benign attributes like the length of employment gaps or the prestige of a university. While these data points may seem objective (and arguably a candidate's exact university or address should not be relevant to hiring at all), they can disproportionately affect certain groups, such as caregivers, who are often women, or individuals from disadvantaged socioeconomic backgrounds who may not have had access to elite universities.
This phenomenon illustrates a crucial point: an algorithm can find and exploit indirect, "shadow" data points that correlate with protected characteristics, even if the system is programmed to ignore explicit identifiers like names or gender.
The lawsuit against SiriusXM alleges that its AI hiring tool used data points like "education" and "address," which can be proxies for a candidate's race and lead to intentional discrimination. This shows that employers must understand the entire data model of a vendor's tool, not just the surface-level inputs.
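One practical defence is to probe each input field before it ever reaches the screening model. The sketch below is a hypothetical proxy check (the field names and data are invented): if a supposedly neutral field predicts a protected attribute well above a naive baseline, it is leaking group membership and should be excluded or carefully justified.

```python
# Hypothetical proxy check: does a "neutral" field predict a protected
# attribute? All field names and values are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "postcode_prefix": ["A", "A", "A", "B", "B", "B", "A", "B"],
    "protected_group": [1, 1, 0, 0, 0, 0, 1, 0],  # 1 = member of the group
})

X = pd.get_dummies(applicants[["postcode_prefix"]])  # one-hot encode the field
probe = LogisticRegression().fit(X, applicants["protected_group"])
accuracy = probe.score(X, applicants["protected_group"])

# Baseline: accuracy of always predicting the majority group.
prevalence = applicants["protected_group"].mean()
baseline = max(prevalence, 1 - prevalence)

# Accuracy well above the baseline suggests the field acts as a proxy
# for the protected attribute and should be dropped or audited closely.
print(f"proxy accuracy: {accuracy:.2f} (majority baseline {baseline:.2f})")
```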
Finally, biases can manifest in more subtle ways through predictive and measurement bias. Predictive bias occurs when an AI system consistently overestimates or underestimates a particular group's future performance, perhaps by ranking candidates from one school lower than those from others despite similar qualifications.
Measurement bias, on the other hand, arises when the training data does not accurately capture what the model is supposed to measure, for example when past performance ratings or tenure stand in for actual job performance, which can lead to inaccurate or unfair conclusions when the system is applied to real-world candidates.
The complexity of these issues is compounded by the fact that AI bias is not a singular, uniform problem; it is often intersectional. For example, a recent study found that CV screening technology discriminated against Black women with STEM degrees, highlighting how bias can be multiplicative, not just additive, in its effect. This multi-layered nature of AI bias requires a more sophisticated approach to auditing and oversight, as evidenced by a recent complaint filed against Intuit/HireVue.
Three Landmark Cases Reshaping HR Tech

The theoretical risks of AI bias are no longer abstract; they are the subject of legal battles that are actively redefining the legal responsibilities of both employers and AI vendors. The following three cases offer a clear picture of this evolving landscape and the critical lessons for any organisation using automated hiring tools.
Case 1: The Vendor as Agent (Mobley v. Workday)
The lawsuit Mobley v. Workday has emerged as a landmark case in the U.S. challenging the traditional division of liability between a company and its technology vendor. Filed in February 2023, the plaintiff, Derek Mobley, who is African American, over 40, and disabled, alleges that Workday's automated CV-screening tool discriminated against him, leading to his rejection from over 80 jobs.
The initial lawsuit was dismissed because Workday was not legally considered an "employment agency". However, a federal judge allowed the case to proceed after an amended lawsuit argued that the software acted as an employer's "agent" in the hiring process. The judge's ruling was pivotal, stating that Workday's software was "not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others".
This ruling establishes a powerful legal precedent: an AI vendor's software can be considered a direct participant in the hiring decision-making process, opening the vendor itself to liability. The case was later certified as a collective action under the Age Discrimination in Employment Act (ADEA), and the judge ordered Workday to provide a list of customers who had enabled certain AI features from HiredScore, a company Workday had acquired. This legal development underscores that a vendor cannot escape liability simply by acquiring a technology or claiming its AI is different.
Case 2: The Employer is Accountable (Harper v. SiriusXM)
While the Mobley case focused on vendor liability, the lawsuit against SiriusXM provides a critical reminder of the employer's ultimate responsibility. In this case, an applicant, Arshon Harper, filed a federal lawsuit against SiriusXM Radio, alleging that the company's use of an AI tool from the iCIMS Applicant Tracking System resulted in racial discrimination. Harper, an African American man, was rejected for nearly 150 jobs despite his qualifications.
The lawsuit contends that the iCIMS AI evaluated candidates using data points such as education and address, which, while seemingly neutral, can be used to infer a candidate's race and lead to intentional discrimination. The key legal takeaway here is that Harper is suing the employer for the discriminatory outcome, not the AI developer. Employment lawyers and the Equal Employment Opportunity Commission (EEOC) have made it clear that employers are always liable for discrimination in their hiring processes, regardless of whether a third-party technology is the cause.
This case serves as a stark warning: an employer cannot outsource its legal liability to a software vendor. The responsibility for ensuring fairness remains squarely with the organisation that deploys the tool.
Case 3: The Challenge of Accessibility (D.K. v. Intuit & HireVue)
The complaint filed against Intuit and HireVue by the American Civil Liberties Union (ACLU) introduces a new dimension to the legal discourse: the intersection of AI bias and disability rights. The complaint was filed on behalf of D.K., an Indigenous and Deaf woman who was denied a promotion. The alleged discrimination stemmed from her required use of HireVue's video interview platform, which employs automated speech recognition to create transcripts of an applicant's spoken responses.
Another interesting point is that the platform was being used here as an internal mobility tool; we wonder how many businesses use HireVue in this way.
According to the complaint, these systems are known to perform poorly for non-white and deaf or hard-of-hearing individuals, who may have different speech patterns, accents, and word choices. Despite D.K.'s positive work performance and her request for an accommodation, she was rejected for the position and received generic feedback on her "effective communication". The legal complaint alleges violations of the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act, emphasising that the faulty technology deepens existing racial and disability inequities. This case highlights that employers must ensure their AI-based systems are accessible and that they provide reasonable accommodations to avoid discrimination. It also brings to light the risks inherent in video analysis tools, which go beyond simple CV screening.
These three cases, while distinct, are not contradictory. Together they point to the same emerging legal doctrine: both the creator of the tool and the end-user can be held responsible for its discriminatory outcomes. This creates a powerful dual-liability model that raises the stakes for the entire HR tech ecosystem. It means vendors must build fairer tools, and employers must perform rigorous due diligence and continuous audits.
Beyond Litigation: The Broader Business Case for Fairness
While the legal risks are significant, the case for mitigating AI bias extends far beyond the courtroom. A biased hiring process isn't just a legal liability; it is a strategic misstep that erodes a company’s competitive advantage and stifles innovation, to say nothing of the damage it does to your employer brand.
Negative headlines about biased hiring can severely damage a company's brand reputation and make it difficult to attract top talent. Today's job seekers are increasingly aware of these issues and may actively avoid organisations with a reputation for unfair practices. A company's commitment to diversity, equity, and inclusion is now a critical factor in talent attraction, and the use of biased technology can quickly undermine that commitment.
This also leads to a more fundamental problem: an algorithm doesn't necessarily hire the most qualified candidates; it often hires the candidates who are best at "beating the ATS". In a culture of self-censorship, job seekers may feel compelled to remove information from their CVs or even anglicise their names to bypass an automated hurdle. The result is a company that misses out on genuinely talented, but "unconventional," applicants who do not conform to the algorithm's predetermined profile.
A Guide to Mitigating AI Bias

For organisations committed to building a fair and inclusive workforce, a proactive approach is essential. This involves not only choosing the right technology but also implementing a robust framework for its oversight.
Audits, Metrics, and the Human-in-the-Loop
It’s recommended that employers conduct continuous self-audits of their AI hiring tools to identify and correct discriminatory patterns. This includes monitoring for "disparate impact", which occurs when a facially neutral practice has a disproportionate, adverse effect on members of a protected group.
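A common starting point for such an audit is the "four-fifths rule": compare each group's selection rate to the most-selected group and flag any ratio below 0.8. The sketch below shows the arithmetic with invented counts; a real audit would pair this with proper statistical testing and legal advice.

```python
# Minimal four-fifths-rule check using hypothetical screening counts.
# A group whose selection rate is below 80% of the highest group's rate
# is flagged for review as potential disparate impact.
outcomes = {
    "group_a": {"applied": 400, "advanced": 120},
    "group_b": {"applied": 300, "advanced": 54},
}

rates = {
    group: counts["advanced"] / counts["applied"]
    for group, counts in outcomes.items()
}
benchmark = max(rates.values())  # selection rate of the most-selected group

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    status = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{status}]")
```

With these invented numbers, group_b advances 18% of applicants against group_a's 30%, an impact ratio of 0.6, well below the 0.8 threshold, so this screening stage would be flagged for closer review.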
Ultimately, the most effective strategy for mitigating bias is to maintain a human-in-the-loop system. While AI can streamline initial screening, human judgment should remain central to final hiring decisions. AI systems should assist, not replace, human decision-making, ensuring a comprehensive evaluation that goes beyond what an algorithm can provide.
Vendor Due Diligence
Given the dual-liability model emerging from recent lawsuits, selecting a conscientious and transparent AI vendor is more critical than ever. Employers must conduct rigorous due diligence and ask vendors a series of tough questions:
How was the model trained, and what specific steps were taken to ensure data diversity?
What measures have you implemented to assess and mitigate bias, such as Explainable AI features or fairness metrics?
Can you provide documentation or validation results of your algorithmic audits?
Does the tool offer transparency and insight into how it makes decisions?
Can we customise the tool to remove potentially biased data points?
Working with a vendor that is transparent and committed to continuous monitoring and improvement is crucial for mitigating risk and building a truly equitable hiring process.
Cultivating a Culture of Accountability
Finally, a strong governance framework and a commitment to accountability are paramount. HR teams should partner with legal experts to ensure compliance with existing anti-discrimination laws, such as the Equality Act 2010 in the UK, as well as with emerging regulation such as the European Union's AI Act, which classifies CV-scanning tools as "high-risk" and subjects them to strict obligations, including risk assessment and the use of high-quality datasets to minimise discriminatory outcomes. This regulatory movement provides a clear signal of the direction of global law.
Employers should also invest in AI literacy training for their teams. This training should equip recruiters with a deep understanding of how AI tools function, including their strengths and limitations, and empower them to interpret AI-generated insights critically and override biased decisions when necessary. Finally, a commitment to transparency with candidates can build trust and reduce legal exposure. This includes clearly communicating when and how AI is used in the hiring process and, whenever possible, offering an appeal process for AI-driven decisions to ensure fairness and prevent automation errors.
Conclusion: The Future of Fair and Ethical Hiring
The lawsuits against Workday, SiriusXM, and Intuit/HireVue are not just isolated legal battles; they are milestones that mark a fundamental shift in the relationship between technology, employers, and the law. They underscore that AI in recruitment is not a neutral force and that its outcomes carry significant legal, reputational, and strategic weight. The analysis of these cases demonstrates that liability for biased hiring can extend to both the employer and the technology vendor, creating a powerful dual-accountability model.
The path forward for organisations is not to retreat from innovation but to mature in their use of technology. This requires a commitment to proactive governance, including continuous audits, rigorous vendor due diligence, and a steadfast commitment to human oversight. By doing so, companies have an opportunity to lead by example, leveraging AI not just for efficiency, but as a tool to build more equitable, diverse, and innovative workforces. The true measure of an AI system's success lies not in its speed, but in its fairness.
Works cited
AI Recruiting: The Best Practices to Hire Top Talent in 2025 - Loopcv blog, https://blog.loopcv.pro/ai-recruiting-the-best-practices-to-hire-top-talent/
Learn How AI Hiring Bias Can Impact Your Recruitment Process - VidCruiter, https://vidcruiter.com/interview/intelligence/ai-bias/
Biases in AI Recruitment Systems and Their Impact - Apollo Technical LLC, https://www.apollotechnical.com/biases-in-ai-recruitment-systems-and-their-impact/
The Ethics of AI in Recruiting: Bias, Privacy, and the Future of Hiring - Mitratech, https://mitratech.com/resource-hub/blog/the-ethics-of-ai-in-recruiting-bias-privacy-and-the-future-of-hiring/
A Guide To The Different Types of AI Bias - Zendata, https://www.zendata.dev/post/a-guide-to-the-different-types-of-ai-bias
3 Ways to Neutralise AI Bias in Recruiting - Visier, https://www.visier.com/blog/neutralise-ai-bias-in-recruiting/
SiriusXM Sued For Alleged AI Hiring Bias - FairNow, https://fairnow.ai/siriusxm-sued-for-alleged-ai-hiring-bias/
Complaint Filed Against Intuit and HireVue Over Biased AI Hiring - ACLU, https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants
The Intersection of Artificial Intelligence and Employment Law - Ogletree, https://ogletree.com/insights-resources/blog-posts/the-intersection-of-artificial-intelligence-and-employment-law/
AI Recruitment Mistakes: Top Pitfalls and How to Avoid Them - GoCo, https://www.goco.io/blog/common-ai-recruitment-pitfalls-to-avoid
AI Bias Audit Strategies for Fair Hiring Practices - BarRaiser, https://www.barraiser.com/blogs/ai-bias-audit-strategies-for-fair-hiring-practices
Judge orders Workday to supply an exhaustive list of employers that enabled HiredScore AI - HR Dive, https://www.hrdive.com/news/workday-must-supply-list-of-employers-who-enabled-hiredscore-ai/756506/
AI Act | Shaping Europe's digital future - European Union, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai