Algorithmic Hiring:
Addressing Algorithmic Bias in Automated Hiring Systems
Tarunya Dharmarajan
April 2026
5 Minute Read
I. Introduction
In 2014, Amazon began testing a hiring tool powered by artificial intelligence (AI). The tool was intended to make hiring more efficient by automatically selecting the strongest applications from the applicant pool.[1] For hiring teams across many Fortune 500 companies, this technology appeared to offer a way to transform and modernize the hiring process.
However, by 2015, problems arose: a review of the machine learning (ML) model’s training data revealed that male applicants were heavily overrepresented.[2] Because most of the training data came from men in technical roles, the model learned to penalize the applications of equally qualified women. The system was inherently flawed, built on biases that perpetuated inequitable hiring practices.[3]
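To make the mechanism concrete, the sketch below uses synthetic data (it is purely illustrative, not Amazon’s actual system or data) to show how a screener trained on historically skewed decisions can learn to penalize a feature that merely proxies for gender, even when qualifications are held equal.

```python
# Purely illustrative sketch with synthetic data -- not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

qualification = rng.normal(0, 1, n)   # hypothetical merit score
is_female = rng.random(n) < 0.2       # women underrepresented in the data
# A resume keyword (e.g., "women's ...") correlated with gender.
womens_keyword = (is_female & (rng.random(n) < 0.7)).astype(float)

# Biased historical labels: past hiring favored men at equal qualification.
hired_prob = 1 / (1 + np.exp(-(qualification - 1.5 * is_female)))
hired = (rng.random(n) < hired_prob).astype(int)

X = np.column_stack([qualification, womens_keyword])
model = LogisticRegression().fit(X, hired)

# Two equally qualified probe applicants differing only in the proxy keyword:
probe = np.array([[1.0, 0.0],    # no gendered keyword
                  [1.0, 1.0]])   # keyword present
print(model.predict_proba(probe)[:, 1])  # the second score comes out lower
```

Notably, nothing in this pipeline is ever told an applicant’s gender explicitly; the bias enters through the historical labels and a correlated feature, which is essentially the failure mode reported in Amazon’s case.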
While this system was ultimately scrapped by Amazon, similar technologies have resurfaced over the past decade as AI continues to evolve. Today, 88% of companies use AI to screen applicants.[4] AI also handles functions such as interview scheduling, candidate ranking, and qualification checks. These systems promise greater objectivity and efficiency in hiring, yet they can reproduce and amplify the very discrimination that employment law was designed to prevent. Algorithmic application screening disproportionately harms women, racial minorities, and individuals with disabilities, and current policy frameworks provide inadequate protection against such discrimination.
II. Inadequacy of Current Legal Protections
Title VII of the Civil Rights Act has served as the primary federal defense against employment discrimination for more than 60 years.[5] Under the disparate impact provision in Section 703(k), workers can challenge employment practices that negatively affect protected groups, regardless of whether discrimination was intentional.[6] Theoretically, this framework should extend to algorithmic hiring systems.
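In disparate impact analysis, a common first quantitative screen is the EEOC’s “four-fifths rule”: a selection rate for a protected group that falls below 80% of the highest group’s rate is treated as evidence of adverse impact. The hedged sketch below, using invented applicant counts, shows the calculation.

```python
# Hedged sketch of the EEOC four-fifths rule; all counts are invented.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "men":   selection_rate(120, 400),   # 30.0%
    "women": selection_rate(45, 300),    # 15.0%
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "evidence of adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")
```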
However, AI hiring software weakens the protections Title VII affords applicants. Most fundamentally, workers cannot challenge discrimination they are unaware of. When an algorithm rejects an application, the applicant typically receives a generic rejection notice with no explanation of the decision and no disclosure that AI was used. This lack of transparency prevents applicants from identifying discrimination and initiating legal action. It also normalizes opaque hiring practices across industries, setting a precedent that limits workers’ rights.
Further, opacity in algorithmic decision-making limits workers’ ability to challenge hiring outcomes, reducing the accountability of corporations and hiring managers for discriminatory practices. Even if an applicant suspects hiring discrimination, proving it would require detailed knowledge of the algorithm’s internal processes. Yet these AI systems often operate as black boxes, where even their developers cannot identify the causal patterns that drive the model’s decisions.
III. Future Legal Changes
The limitations of Title VII call for a more comprehensive federal framework to address algorithmic discrimination in hiring. The Algorithmic Accountability Act of 2022, proposed in both the House and Senate, would require employers to conduct impact assessments of automated decision systems used in employment decisions, evaluating potential bias and ensuring algorithms do not discriminate based on race, gender, age, and other protected characteristics.[7] This approach directly addresses the issue of hiring opacity by obligating organizations to understand their systems before deployment. It also allows workers to hold companies accountable when they believe hiring decisions were biased. Such legislation would establish meaningful transparency and accountability requirements for companies that use automated decision systems.
State and local regulations also serve as a potential model for future federal reform. New York City's Automated Employment Decision Tools law, which took effect in July 2023, prohibits employers from using an automated employment decision tool unless it has undergone a bias audit within the preceding year,[8] with fines of up to $1,500 per violation for failing to conduct audits or provide required notices.[9]
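Under the city’s implementing rules, the core of a bias audit is the impact ratio: each demographic category’s selection rate divided by the rate of the most-selected category, computed across sex, race/ethnicity, and intersectional categories. The sketch below uses invented counts to illustrate the shape of that calculation; it is illustrative, not a compliance template.

```python
# Hedged sketch of an NYC-style impact-ratio calculation; counts are invented.
# (sex, race) -> (selected, applicants)
counts = {
    ("male", "white"):   (90, 250),
    ("female", "white"): (60, 220),
    ("male", "black"):   (25, 120),
    ("female", "black"): (15, 110),
}

rates = {cat: sel / apps for cat, (sel, apps) in counts.items()}
baseline = max(rates.values())  # rate of the most-selected category

for cat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: selection rate {rate:.1%}, impact ratio {rate / baseline:.2f}")
```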
Enforcing clearer standards for AI hiring algorithms through audits, transparency requirements, and reporting measures increases accountability not only for employers but also for the developers of these systems, addressing a major flaw in the existing legal framework. Additionally, by imposing duties on developers and deployers at the implementation stage, rather than only after hiring decisions are made, these approaches prevent algorithmic bias before it produces discriminatory outcomes.
IV. Conclusion
Algorithmic hiring systems were introduced with the promise of efficiency and objectivity. Yet, as Amazon’s failed hiring tool demonstrates, these technologies can replicate and reinforce the same structural biases they were intended to eliminate. As automated screening systems become increasingly common in hiring processes, the gap between technological innovation and legal protection continues to widen.
While Title VII’s disparate impact framework remains a key safeguard against employment discrimination, its effectiveness depends on workers’ ability to identify and challenge discriminatory practices. Given the black-box nature of algorithmic decision-making, applicants often cannot recognize patterns of discrimination. The problem is exacerbated by the lack of regulations addressing the risks posed by rapidly evolving technologies. Adopting updated rules, such as the New York City model, would better protect against these new risks.
Recent legislative efforts suggest a more proactive regulatory model, ideally combining current proposals at the state and federal levels. The Algorithmic Accountability Act and local measures such as New York City’s bias audit requirement impose pre-deployment obligations that increase accountability.
As artificial intelligence becomes increasingly integrated into employment decision-making, employment discrimination law must evolve to address the unique risks posed by algorithmic systems. Without clear standards for transparency and accountability, automated hiring tools will continue to reproduce and intensify existing inequalities within the labor market.
[1] BBC News, “Amazon Scrapped ‘Sexist AI’ Tool,” BBC News (Oct. 10, 2018), https://www.bbc.com/news/technology-45809919.
[2] Ibid.
[3] Ibid.
[4] Utkarsh Amitabh & Ali Ansari, “Hiring with AI Doesn’t Have to Be So Inhumane. Here’s How,” World Economic Forum (Mar. 28, 2025), https://www.weforum.org/stories/2025/03/ai-hiring-human-touch-recruitment/.
[5] U.S. Equal Employment Opportunity Commission, “Title VII of the Civil Rights Act of 1964,” https://www.eeoc.gov/statutes/title-vii-civil-rights-act-1964.
[6] April J. Anderson, “What Is Disparate-Impact Discrimination?” Congressional Research Service (July 9, 2025), https://www.congress.gov/crs-product/IF13057.
[7] Lena Kempe, “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies,” Business Law Today (Apr. 2024), https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/.
[8] Columbia Science and Technology Law Review, “STLR Link Roundup – December 6, 2019,” Columbia Sci. & Tech. L. Rev. Blog (Dec. 6, 2019), https://journals.library.columbia.edu/index.php/stlr/blog/view/654.
[9] New York City Department of Consumer and Worker Protection, “Automated Employment Decision Tools (Updated),” NYC Rules (effective July 5, 2023), https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/.