AI Usage In Recruitment
Part 3: Overview of the Ethical and Legal Challenges of AI in Recruitment

The relentless pace of technological change has brought Artificial Intelligence (AI) from the periphery to the core of modern business operations, nowhere more profoundly than in talent acquisition.

Confronted with the strategic imperatives of processing vast volumes of applications and securing a competitive edge in the perpetual “war for talent,” organizations are rapidly adopting AI-powered tools to automate and enhance the hiring process. This part analyzes the ethical and legal challenges facing both companies and applicants when AI is adopted in the recruitment process.

The scale of this adoption is staggering. An estimated 99% of Fortune 500 companies already used some form of automation in their hiring process even before the current wave of AI adoption. Looking ahead, a recent survey revealed that 70% of employers planned to use AI in hiring by the end of 2025. These tools are being deployed across every stage of the recruitment cycle, from initial sourcing to final selection, performing tasks that were once the exclusive domain of human recruiters. The table below highlights the associated risks for each stage of the recruitment cycle.

 

 

Recruitment Stage: Sourcing
AI Application Examples:
  • Job Description Review Software: Analyzes and optimizes job ads for inclusive language.
  • Targeted Advertising: Profiles and targets potential candidates on online platforms.
  • Recruiting Chatbots: Engage with candidates and answer FAQs.
Associated Risks (Examples):
  • May use vague or dissuasive language that discourages applicants.
  • Perpetuating historical bias by targeting based on geography or protected characteristics.
  • May not be trained on relevant data, risking illegal or incorrect answers.

Recruitment Stage: Screening
AI Application Examples:
  • Resume Screening & Parsing: Automates the sifting and ranking of CVs.
  • CV Matching: Extracts data from CVs to find similarities with ideal candidate profiles.
Associated Risks (Examples):
  • Inheriting bias from past recruitment practices.
  • Disproportionately affecting parents or people with disabilities due to gaps in employment.
  • Perpetuating underrepresentation of certain groups.

Recruitment Stage: Assessment
AI Application Examples:
  • Automated Video Interview Analysis: Uses facial and voice recognition to infer emotion, engagement, and competencies.
  • Psychometric Tests & Gamified Assessments: Measure cognitive skills, personality traits, and problem-solving abilities.
Associated Risks (Examples):
  • Divergent error rates across demographic groups.
  • Little to no scientific consensus on the validity of inferring emotion.
  • Lack of scientific validity; may be inaccessible for neurodivergent candidates.

Recruitment Stage: Facilitation
AI Application Examples:
  • Automated Scheduling: Coordinates interview times between candidates and hiring managers.
  • Candidate Communication: Chatbots provide applicants with progress updates.
Associated Risks (Examples):
  • Can lead to digital exclusion for applicants with limited tech access.
  • May provide incorrect information if not trained on sufficient, relevant data.


2.  The Double-Edged Sword: Critically Analyzing the Risks of AI-Powered Hiring

Despite the promises of a more streamlined and equitable recruitment process, the deployment of AI in hiring is plagued with significant ethical and legal risks. When implemented without rigorous oversight and a deep understanding of their limitations, these tools can perpetuate historical biases, erode candidate trust, and expose organizations to costly litigation. The cautionary tales are already numerous and stark.

In 2018, Amazon abandoned an experimental AI recruiting tool after discovering it was systematically biased against women. The system, trained on a decade’s worth of resumes submitted predominantly by men, had learned to penalize CVs that included the word “women’s” and downgraded graduates of two all-women’s colleges.

This case serves as a powerful reminder that AI is not inherently objective. Instead, it is a mirror that reflects the data, and the historical biases, upon which it is trained.

 

2.1.  Algorithmic Bias: The Hidden Perpetuation of Inequality

The core challenge of AI in hiring is algorithmic bias. If an AI system is trained on historical hiring data that contains existing societal inequities and human prejudices, it will not only replicate but often amplify those biases at scale. This “garbage in, garbage out” phenomenon can lead to automated systems that systematically and unlawfully disadvantage qualified candidates from underrepresented groups. Companies employing AI in these processes often forgo human oversight, since reducing human involvement is part of the raison d’être of implementing AI in the first place.
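The “garbage in, garbage out” dynamic can be seen in a toy simulation (the data and numbers below are entirely synthetic and illustrative, not drawn from any real hiring system): a model fitted to biased historical decisions reproduces the same gap when scoring new, identically qualified applicants.

```python
# Toy illustration: a scoring "model" learned from biased historical hiring
# decisions reproduces that bias at scale. All data here is synthetic.
import random

random.seed(0)

# Synthetic history: every candidate is equally qualified, but past human
# reviewers hired group B candidates far less often (the biased labels).
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    hired = random.random() < (0.60 if group == "A" else 0.20)
    history.append((group, hired))

# A naive model that simply learns the empirical hire rate per group --
# i.e., garbage in, garbage out.
rates = {}
for g in ("A", "B"):
    outcomes = [hired for grp, hired in history if grp == g]
    rates[g] = sum(outcomes) / len(outcomes)

def model_score(group: str) -> float:
    """Scores a new candidate using the patterns learned from biased history."""
    return rates[group]

# New, identically qualified applicants are now ranked unequally.
print(f"score for a group A applicant: {model_score('A'):.2f}")
print(f"score for a group B applicant: {model_score('B'):.2f}")
```

Nothing in the model “intends” to discriminate; the disparity is inherited entirely from the training labels, which is why auditing outcomes (not intentions) is the meaningful safeguard.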

  • Gender Bias: A recent study titled “Who Gets the Callback?” audited several large language models (LLMs) and found that most tend to favor men, particularly for higher-wage roles. The models’ recommendations also reinforced traditional occupational segregation, favoring women for roles in personal care and service while recommending men for construction and extraction jobs.
  • Racial Bias: Research unveiled at the 2024 AAAI/ACM Conference on AI, Ethics, and Society found that resumes with White-associated names were selected for the next hiring step 85% of the time, compared to only 9% for resumes with Black-associated names. The study noted that Black men were the most disadvantaged group, as the systems never preferred a resume with a Black male-associated name over an identical one with a White male-associated name. These findings are echoed in litigation, such as the Harper v. Sirius XM Radio lawsuit, which alleges that the company’s AI screening tool disproportionately disadvantaged African-American candidates.

  • Age Bias: The EEOC reported a case where iTutorGroup programmed its software to automatically reject female applicants over 55 and male applicants over 60. This issue is at the heart of the Mobley v. Workday, Inc. class-action lawsuit, which alleges that Workday’s screening algorithms have a disparate impact on applicants over the age of 40.
  • Disability Bias: The American Civil Liberties Union (ACLU) has filed administrative actions against vendors like Aon and Intuit, challenging hiring tools that allegedly screen out and discriminate against people with disabilities. These tools, which may analyze facial expressions or require specific interaction formats, can create insurmountable barriers for qualified candidates.

 

2.2.  The ‘Black Box’ Dilemma: Transparency, Explainability, and Accountability

Many advanced AI models operate as a “black box,” meaning their internal decision-making processes are so complex that even their developers cannot fully explain why a specific recommendation was made. This lack of transparency poses a profound challenge to fairness and accountability.

When a candidate is rejected, the inability to provide a clear, justifiable reason makes it nearly impossible for them to contest an unfair outcome. This opacity can effectively mask discrimination, allowing biased systems to operate without scrutiny. It also creates a “responsibility gap”: if a biased decision occurs, it is often unclear who is accountable—the employer who deployed the tool, the vendor who supplied it, or the data scientist who trained the model. This accountability vacuum is precisely why Embed Meaningful Human Oversight and Commit to Radical Transparency are foundational to any ethical AI strategy.

2.3.  Data Privacy and Candidate Rights

AI recruiting tools function by collecting and processing vast amounts of personal and often sensitive data, from resumes and social media profiles to biometric information from video interviews. This practice raises significant data privacy risks and legal obligations under frameworks like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The GDPR, for instance, includes a potential “right to explanation” for decisions made by automated systems and may require a “human in the loop” for high-stakes decisions. Furthermore, the ethical ambiguity of scraping data from social media is a major concern. Candidates may not have consented to their online activity being used to evaluate their job suitability, creating a clear tension between an organization’s desire for data and an individual’s right to privacy.

 

2.4.  Questionable Validity and the Threat of ‘Digital Pseudoscience’

A significant number of AI hiring tools on the market lack rigorous, peer-reviewed scientific validation. In particular, systems that claim to infer personality traits, engagement levels, or emotions from facial recognition or tone of voice operate on questionable scientific ground at best. There is little to no scientific consensus on the validity of such inferences, leading to a high risk of arbitrary and inaccurate assessments. Personality assessments in recruiting were considered ethically questionable even before AI adoption.

A 2021 investigation by journalist Hilke Schellmann provided a stark example of this lack of effectiveness. She tested an AI-powered interviewing tool and received a high score for English proficiency even when she conducted the entire interview speaking only German. Such tools risk making recommendations based on unexplained correlations rather than causal links to actual job performance, devolving into a form of digital pseudoscience. This proliferation of ‘digital pseudoscience’ makes rigorous, evidence-based vendor due diligence a non-negotiable step in any responsible AI procurement process.

 

2.5.  Candidate Experience and Emerging Threats

Beyond bias and validity, the overuse of automation can severely degrade the candidate experience. An entirely automated process can feel dehumanizing, alienating qualified applicants who value human interaction and personalized communication. This can tarnish an employer’s brand and drive top talent to competitors who offer a more engaging process.

Furthermore, a new and growing threat is undermining the integrity of the hiring process itself: deepfake job applicants. A recent Resume Genius survey found that 17% of hiring managers had already encountered candidates using deepfake technology to alter their appearance or voice in video interviews. The advisory firm Gartner predicts that by 2028, one in four job candidates worldwide will be fake. This trend is not only a matter of fraud but also a national security concern, as evidenced by a 2024 Justice Department action against impostors tied to North Korea who were hired for remote IT roles at U.S. companies.

The proliferation of these risks has not gone unnoticed, and a wave of legal and regulatory actions is now forcing organizations to confront the consequences of deploying AI without adequate ethical consideration.

 

3.  The Emerging Legal and Regulatory Gauntlet

The theoretical risks associated with biased and poorly implemented AI are now materializing into significant legal challenges and a growing patchwork of regulations. Employers can no longer claim ignorance of these issues and must proactively navigate an evolving legal landscape where accountability is increasingly being enforced. The era of treating AI as an experimental tool is over; it is now a domain of legal compliance and liability.

 

3.1.  Landmark Litigation: Setting Precedents for AI-Driven Bias

A series of high-profile lawsuits and administrative actions are establishing important precedents for AI-related discrimination, clarifying the responsibilities of both employers and technology vendors.

 

Case/Action: Mobley v. Workday, Inc.
Defendant(s): Workday, Inc.
Core Allegation: AI screening algorithms have a disparate impact on job applicants based on race, age (40+), and disability.
Key Implication: Establishes that vendors may be held liable as “agents” of employers, creating a shared liability model with broad exposure across their client base.

Case/Action: Harper v. Sirius XM Radio, LLC
Defendant(s): Sirius XM Radio, LLC
Core Allegation: AI-powered hiring system (iCIMS) perpetuates historical biases, resulting in discrimination against Black applicants.
Key Implication: Reinforces that both disparate treatment and disparate impact are viable legal theories against AI tools.

Case/Action: ACLU Administrative Actions
Defendant(s): Aon Consulting
Core Allegation: Hiring tools (ADEPT-15, vidAssess-AI) are discriminatory against people with disabilities and certain racial groups.
Key Implication: Employers are liable for vendor bias, even if they did not design the tool and were unaware of its flaws.

Case/Action: ACLU Charges
Defendant(s): Intuit
Core Allegation: Automated video interview tool lacked proper accommodations for a Deaf Indigenous applicant, violating the ADA.
Key Implication: Confirms that accessibility and reasonable accommodation requirements apply to AI-powered assessment tools.

3.2.  A Patchwork of Regulation: Navigating Global and Local AI Laws

In response to the growing risks, governments are beginning to enact specific legislation to govern the use of AI in employment.

  • The EU AI Act: This landmark regulation takes a risk-based approach, classifying AI systems used in employment as “high-risk.” The Act imposes stringent requirements for risk assessment, data governance, and human oversight. Non-compliance can lead to severe penalties, with fines of up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher.
  • New York City’s Local Law 144: Taking effect in 2023, this pioneering law requires employers using “automated employment decision tools” (AEDTs) in New York City to conduct independent, annual bias audits. The results of these audits must be made publicly available on the employer’s website.
  • Other State-Level Actions: Other jurisdictions are following suit. Illinois’ Artificial Intelligence Video Interview Act requires employer transparency and candidate consent for AI-analyzed video interviews. Colorado has passed a sweeping law requiring transparency notices and appeal rights, and California has issued regulations clarifying that AI bias falls under existing discrimination statutes.

 

3.3.  Clarifying Liability: The Shared Responsibility of Employers and Vendors

A critical legal question is where liability falls when a third-party AI tool produces a discriminatory outcome. The answer emerging from EEOC guidance and recent court rulings is clear: liability is shared.

Under federal anti-discrimination laws, employers are ultimately liable for the outcomes of the hiring tools they use, regardless of whether those tools are developed in-house or procured from a vendor. An employer cannot deflect responsibility by blaming the technology provider.

Furthermore, the court’s decision in Mobley v. Workday to allow the case to proceed suggests that vendors may also be held directly liable as “agents” of the employers they serve. This establishes a shared risk model where both the company using the AI and the company that created it can be held accountable, underscoring the need for deep partnership and diligence in the procurement process.

Faced with this complex web of legal and ethical challenges, organizations must move beyond reactive compliance and adopt a proactive, principles-based framework for responsible AI implementation.

 

4.  A Framework for Responsible AI Implementation: A Strategic Guide for Employers

Moving from an awareness of the risks to meaningful action requires a deliberate, principles-based framework for procuring, deploying, and governing AI in recruitment. This strategic guide provides actionable principles for business leaders and HR professionals to ensure that AI is leveraged not just for efficiency, but as a force for fairness and ethical excellence.

 

4.1.  Principle 1: Mandate Proactive and Continuous Auditing

The first principle must be to mandate proactive and continuous auditing. Relying on vendor claims of “bias-free” technology is insufficient. As recommended by the UK government’s “Responsible AI in Recruitment” guidance, bias audits and performance testing must be conducted at regular intervals, not just as a one-time check before procurement.

Organizations must ensure these audits assess whether the AI tool has a disparate impact on any group based on protected characteristics such as race, gender, age, or disability. This involves regularly analyzing the demographic results of the AI screening tools to identify and address any emerging biases before they become systemic problems.
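As a minimal sketch of what such a recurring check can look like, the snippet below applies the EEOC’s “four-fifths” rule of thumb to hypothetical screening counts (the group names and figures are placeholders, not real audit data): a group whose selection rate falls below 80% of the highest group’s rate is flagged for investigation.

```python
# Sketch of a periodic disparate-impact check using the "four-fifths" rule.
# The counts below are illustrative placeholders for one review period.
from typing import Dict, Tuple

def adverse_impact(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps group -> (selected, total applicants).
    Returns each group's impact ratio relative to the highest selection rate."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening results from an AI resume-screening tool.
screened = {
    "group_a": (120, 400),  # 30% selected
    "group_b": (45, 300),   # 15% selected
}

ratios = adverse_impact(screened)
flagged = [g for g, r in ratios.items() if r < 0.80]
print(ratios)  # impact ratios relative to the highest-rate group
print("potential disparate impact:", flagged)
```

A statistic like this is only a screening heuristic, not legal proof of discrimination, which is why the guidance calls for audits at regular intervals rather than a single pre-procurement check.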

 

4.2.  Principle 2: Embed Meaningful Human Oversight

AI should be used as a tool to augment and support human decision-makers, not to replace them entirely. The final decision to hire a candidate must always be made by a human being. The most responsible use of AI involves it serving as a powerful assistant, handling high-volume tasks like initial screening to produce a qualified shortlist that is then subject to human review and judgment. This “human-in-the-loop” approach ensures that context, nuance, and ethical considerations remain central to the hiring process.
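One way to make this concrete in a screening pipeline is to constrain the system so the model can only propose a shortlist, while final decisions are recorded only with a named human reviewer. The sketch below uses hypothetical names, scores, and a threshold; it illustrates the structural guardrail, not any particular product.

```python
# Sketch of a "human-in-the-loop" gate: the AI may shortlist, never decide.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    name: str
    ai_score: float                        # produced by the screening model
    human_decision: Optional[str] = None   # only a recruiter may set this

def shortlist(candidates: List[Candidate], threshold: float = 0.7) -> List[Candidate]:
    """AI narrows the pool; every shortlisted candidate still awaits review."""
    return [c for c in candidates if c.ai_score >= threshold]

def final_decision(candidate: Candidate, decision: str, reviewer: str) -> None:
    """The hire/advance/reject call is recorded only with a named human reviewer."""
    if not reviewer:
        raise ValueError("final decisions require a human reviewer")
    candidate.human_decision = f"{decision} (by {reviewer})"

pool = [Candidate("A. Lee", 0.91), Candidate("B. Cruz", 0.55), Candidate("C. Diaz", 0.78)]
picks = shortlist(pool)
print([c.name for c in picks])  # the model proposes a shortlist...
final_decision(picks[0], "advance", reviewer="hiring_manager_01")  # ...a human decides
```

The design choice worth noting is that rejection is never automated here: candidates below the threshold remain in the pool for human triage rather than being silently discarded.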

 

4.3.  Principle 3: Commit to Radical Transparency and Contestability

An organization must commit to transparency to build trust with candidates. This requires the organization to clearly communicate to applicants when and how AI is being used in the hiring process. This transparency should be accompanied by clear channels for contestability and redress, allowing an applicant to question an automated decision and request a human review.

This principle is also a matter of legal compliance. Under laws like the UK’s Equality Act 2010 and the Americans with Disabilities Act (ADA) in the US, a company has a legal obligation to provide reasonable adjustments and alternative assessment paths for applicants with disabilities who may be disadvantaged by a specific technology.

 

4.4.  Principle 4: Implement Rigorous Vendor Due Diligence

The legal principle of shared liability makes rigorous vendor due diligence non-negotiable. Before procuring any AI tool, an organization must conduct a thorough investigation that goes far beyond a sales pitch.

  • Demand Evidence: Do not accept marketing claims at face value. An organization must require vendors to provide tangible proof of their system’s validity, accuracy, and fairness. This includes documentation like third-party bias audit results and “model cards,” which detail the system’s intended use, limitations, and performance metrics across different demographic groups.
  • Scrutinize Training Data: Inquire deeply about the data used to train the model. Is it diverse, representative of the relevant talent pool, and directly relevant to the skills required for the job? A model trained on a homogenous dataset is a red flag for inherent bias.
  • Allocate Liability: Work with legal counsel to closely assess vendor contracts. Ensure they include appropriate representations, warranties, and indemnification clauses that properly allocate the legal and financial risk associated with discriminatory outcomes.
  • Ensure Technical Understanding: An organization must develop a foundational understanding of how the chosen tools work. Overreliance on a vendor’s explanation without sufficient independent analysis is a common and dangerous pitfall.

 

4.5.  Principle 5: Establish Robust Data Governance and Privacy Protocols

Effective AI use requires a strong internal governance framework. Before deploying any AI system, the organization must conduct a Data Protection Impact Assessment (DPIA), as required by regulations like the UK GDPR for high-risk processing. This process helps to proactively identify, assess, and minimize data protection risks to candidates.

A core tenet of this framework must be data minimization—collecting, processing, and storing only the data that is absolutely necessary and relevant for the hiring decision. This reduces the organization’s risk profile and demonstrates respect for candidate privacy.
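A simple way to operationalize data minimization, sketched here with entirely hypothetical field names, is an explicit allowlist applied at intake so that unnecessary or sensitive fields never reach storage or the model.

```python
# Sketch of data minimization at intake: only an explicit allowlist of
# job-relevant fields is retained. Field names are illustrative only.
ALLOWED_FIELDS = {"name", "email", "skills", "years_experience", "work_history"}

def minimize(record: dict) -> dict:
    """Keep only the fields necessary for the hiring decision."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "J. Doe",
    "email": "jdoe@example.com",
    "skills": ["python", "sql"],
    "years_experience": 6,
    "date_of_birth": "1990-04-01",   # unnecessary and risky: dropped
    "social_media_handle": "@jdoe",  # not consented for evaluation: dropped
}

stored = minimize(raw)
print(sorted(stored))  # ['email', 'name', 'skills', 'years_experience']
```

An allowlist (rather than a blocklist) is the safer default: any new field a vendor or form adds is excluded until someone deliberately justifies collecting it.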

These five principles provide a strategic foundation for harnessing the power of AI while upholding the highest ethical and legal standards in talent acquisition.

 

5.  Conclusion: Building a Future of Fair and Effective Hiring

Artificial Intelligence offers a powerful and transformative suite of tools with the potential to make recruitment more efficient and data-driven than ever before. However, as demonstrated herein, these benefits are not automatic; they are contingent on a steadfast commitment to responsible governance. The risks of algorithmic bias, opacity, questionable validity, and legal liability are not minor flaws but fundamental threats that can undermine the integrity of the entire talent strategy.

Implementing the principles outlined above is an immediate and necessary step for survival and leadership in this new landscape.

The path forward is not a retreat from technology, but a more thoughtful and deliberate embrace of it.

 

The future of talent acquisition is not about choosing between humans and machines. It is about creating a symbiotic relationship where technology, guided by human values and robust oversight, amplifies our ability to identify and nurture talent. By embedding the principles of auditing, human oversight, transparency, due diligence, and data governance into strategies, companies can build a future where AI fosters a more efficient, insightful, and fundamentally fairer workplace for all.

 

Bibliography

  • Arntz. (2025, July 10). McDonald’s AI bot spills data on job applicants. Malwarebytes.
  • Carnegie Mellon University. (2024, June 21). AI tools reshape job application process: A Q&A with CMU’s director of employer relations Sean McGowan on the use of generative AI in the job application process.
  • Chaturvedi, & Chaturvedi, R. (2025, May 1). Who gets the callback? Generative AI and gender bias. arXiv. https://arxiv.org/abs/2504.21400
  • Deady. (2025, June 26). How are candidates using AI in the hiring process? SocialTalent.
  • Department for Science, Innovation & Technology. (2024, March 25). Responsible AI in recruitment. GOV.UK.
  • Diem. (2025, February 4). The ethics of AI in recruiting: Bias, privacy, and the future of hiring. JDSupra.
  • Fitzpatrick, Basu, K., & Klar, R. (2024, October 4). AI hiring tools risk discrimination, watchdog tells Congress. Bloomberg Government.
  • com. (2024, November 25). AI hiring: The hidden discrimination and how to counteract it.
  • Hunkenschroer, L., & Kriebitz, A. (2022, July 25). Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI Ethics, 3(1), 199–213. https://doi.org/10.1007/s43681-022-00166-4
  • Hunkenschroer, L., & Luetge, C. (2022, February 8). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178, 977–1007. https://doi.org/10.1007/s10551-022-05049-6
  • com. (2025, March 26). What employers think of job seekers leveraging Gen AI tools.
  • Johnston. (2025, February 28). Nearly two-thirds of job candidates are using AI in their applications, report says—experts share their best practices. CNBC Make It.
  • Martucci, C., Erbaz, B., & Jutt, M. (2024, October 2). The role of AI in employment processes. The Journal of The Missouri Bar, 80(5).
  • Milne, & Caliskan, A. (2024, October 31). AI tools show biases in ranking job applicants’ names according to perceived race and gender. UW News, University of Washington.
  • Richmond. (2024, June 21). How to avoid the pitfalls when using AI to recruit new employees. The Times.
  • Robinson. (2024, October 20). Why 80% of hiring managers discard AI-generated job applications from career seekers. Forbes.
  • SuperAGI. (2025, July 1). The future of talent acquisition: How AI skill assessment is revolutionizing the hiring landscape in 2025 and beyond. https://superagi.com/the-future-of-talent-acquisition-how-ai-skill-assessment-is-revolutionizing-the-hiring-landscape-in-2025-and-beyond/
  • Taylor. (2025, May 13). People interviewed by AI for jobs face discrimination risks, Australian study warns. The Guardian.
  • Thapa, & Son, H. (2025, July 11). How deepfake AI job applicants are stealing remote work. CNBC.
  • (2025, July 11). Google AI overview: Jobs AI. [Unpublished text document].
  • University of (n.d.). AI in recruitment: Guidance for job applicants.
  • Walsh. (2025, May 22). Using AI in your job search: Helpful or hurtful? Bentley University Newsroom.
  • Wikipedia. (n.d.). Artificial intelligence in hiring. Retrieved November 26, 2024, from https://en.wikipedia.org/wiki/Artificial_intelligence_in_hiring
  • Zappulla. (2024, November 19). Business leaders risk sleepwalking towards AI misuse. Ethical Corporation Magazine.
  • Zilber. (2025, June 24). AI-powered hiring tools favor black and female job candidates over white and male applicants: study. New York Post.