
If Left Unchecked, Algorithmic Decisionmaking Could Perpetuate Workplace Bias and Harms

Jessica Shakesprere and Batia Katz, December 15, 2021

Employers are increasingly using algorithms—computer programs that mimic human decisionmaking—to manage workplace practices, from hiring to day-to-day employee management.

In hiring, employers seeking to fill entry-level, hourly positions may use algorithms or other predictive tools to screen and filter the sheer volume of applications. Applicants for these entry-level jobs—including people of color, young people, and people with disabilities, all of whom disproportionately experience discrimination in hiring—must navigate online application processes in which they may never encounter a human manager.

On the job, employers are using technology to collect data on workers, measure worker productivity, and automate management decisions based on these data in unprecedented ways. The use of monitoring tools and algorithms can exacerbate bias and place harmful pressures on workers. 

In this post, we highlight these challenges as well as the promise algorithms and other predictive tools hold for the workplace.

Algorithmic hiring

Hiring is a first step toward creating economic opportunity for workers. For employers, it is a costly process that involves sourcing, screening, interviewing, and selecting candidates to hire. As a recent study by Upturn on hiring algorithms and bias notes, big box retailers that employ predominantly low-wage, hourly workers are now routinely using predictive hiring tools to achieve greater efficiency and cost savings.

Hiring algorithms rely on machine learning, where computers detect patterns in existing data to make predictions about potential employees and influence jobseeker and employer decisionmaking. Beyond efficiency, many employers are adopting predictive tools to remove the subjective bias human hiring managers may bring to the process.

Yet, research indicates algorithmic tools may replicate or exacerbate human biases at different stages of the hiring process because they are programmed with data—known as training data—that contain embedded institutional or structural biases. These training data could include past employee profiles or employer preferences, reflecting flawed assumptions of who makes a “successful” hire and historical patterns of privilege or disadvantage. Literature indicates that despite antidiscrimination and civil rights laws, racial bias and discrimination in the labor market persist. Predictive tools with bias baked into their data could be no more objective than human managers rejecting candidates based on personal prejudice.

For example, if an employer has never hired a candidate without postsecondary education credentials or a graduate of a historically Black college or university, an algorithmically driven candidate search could exclude jobseekers with these profiles. Similar algorithms used to control who sees an online job posting may mean these candidates would not even be part of the applicant pool. 
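To make this concrete, below is a minimal, hypothetical sketch of how a screening model inherits patterns from past hiring decisions. The feature names, toy data, and threshold behavior are illustrative assumptions, not any real employer's system: because no past hire lacked a postsecondary credential, the model learns to score such applicants low regardless of their other qualifications.

```python
# Minimal, hypothetical sketch: a screening model trained on past hiring
# decisions reproduces whatever patterns those decisions contain.
# All feature names and data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: [has_postsecondary_credential, years_experience]
# Labels reflect past outcomes; no applicant without a credential was ever
# hired, so the credential becomes a near-perfect predictor of "success."
X_train = [
    [1, 2], [1, 5], [1, 1], [1, 4],   # past applicants with credentials
    [0, 6], [0, 3], [0, 8], [0, 2],   # past applicants without credentials
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]    # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# A new applicant with strong experience but no credential is scored low,
# not because they cannot do the job, but because the training data never
# included a hire with that profile.
new_applicant = [[0, 7]]
print(model.predict_proba(new_applicant)[0][1])  # low predicted "hire" probability
```

The point of the sketch is that nothing in the code is overtly discriminatory; the exclusion comes entirely from the historical data the model is fit on.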

Algorithmic management

Once hired, employers may use technology to collect data on employee productivity, performance, and behavior in the workplace, such as software that monitors employees’ keyboard activity, video surveillance, attention-tracking devices, and technology that tracks worker location.

These data are then used to automate decisions about pay, scheduling, performance evaluation, promotion, and termination, a process known as algorithmic management. As Data & Society researchers note in a 2019 report, algorithmic management poses challenges for workers in four key areas: increased surveillance and control, a lack of transparency in how decisions are being made, decisions based on data that are biased or discriminatory, and a lack of employer accountability.

Gig economy companies have pioneered the use of algorithms powered by worker surveillance and customer survey data to manage a globally dispersed, decentralized workforce. Although employee surveillance predates digital technologies, the sheer volume of data collected on employees has expanded significantly and accelerated during the COVID-19 pandemic, when remote and online work became more prevalent.

Though some employers may feel these tools help them better track performance and productivity, they can compromise worker data privacy and security and increase stress and on-the-job illness or injury. Research has found workplace surveillance and algorithmic management disproportionately harm people of color, women, and other members of marginalized communities. Increased surveillance can also deter workers from organizing collectively, though in some cases it has fueled unionization drives.

Several studies have shown that racial and gender biases pervade consumer ratings in online marketplaces and platforms, which, in turn, feed into algorithmic decisions about pay, scheduling, and termination.

A 2016 study found that Uber’s customer rating systems—which determine whether a driver will be deactivated—allow consumers to express biases and preferences that federal law forbids companies themselves from acting on. This and other studies show that consumer-sourced ratings are highly susceptible to racial and ethnic bias. Northeastern University researchers found evidence of racial and gender bias on TaskRabbit, a personal services platform, and Fiverr, a marketplace for creative services. When consumer ratings determine the quality or frequency of assignments, or whether a worker is terminated, on online platforms, consumer biases directly shape employment decisions.

Can algorithms be programmed to advance equity?

Despite evidence that algorithms can reproduce discrimination throughout hiring and management processes, some argue that with greater transparency and oversight, algorithms have the potential to advance equity. If algorithms are programmed with biased data, then they will perpetuate these biases. But if they are programmed to deliberately expand applicant pools to include historically underrepresented groups and recruit for job-related skills and traits rather than proxies such as college degrees, then they could be tools of more inclusive hiring. 

MIT researchers recently tested this theory by developing three different hiring algorithms and testing them using job application data from a Fortune 500 firm. The first relied on a typical “static supervised learning model” that used previous applicant data, the second used a more dynamic approach by incorporating new data on applicants selected for interviews, and the third incorporated “exploration bonuses” to increase the algorithm’s selection of nontraditional candidates. Researchers found the third type of algorithm doubled the share of Black and Latine applicants.
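The sketch below illustrates the general idea of an exploration bonus in an upper-confidence-bound style: candidates from groups the algorithm has interviewed less often get a boost to their selection score, so the system keeps learning about nontraditional applicants instead of only recycling past patterns. The function name, bonus formula, and numbers are assumptions for illustration, not the MIT researchers' actual implementation.

```python
# Simplified sketch of an "exploration bonus" (UCB-style), not the MIT
# study's actual code. A group that has been interviewed rarely gets a
# larger bonus; the bonus shrinks as that group is sampled more often.
import math

def selection_score(predicted_quality: float,
                    group_interview_count: int,
                    total_interviews: int,
                    bonus_weight: float = 1.0) -> float:
    """Predicted quality plus a bonus that shrinks as a group is sampled more."""
    exploration_bonus = bonus_weight * math.sqrt(
        math.log(total_interviews + 1) / (group_interview_count + 1)
    )
    return predicted_quality + exploration_bonus

# A candidate from a rarely interviewed group can outrank one with a slightly
# higher predicted quality from a heavily interviewed group.
print(selection_score(0.60, group_interview_count=3, total_interviews=500))
print(selection_score(0.70, group_interview_count=400, total_interviews=500))
```

The design choice here is the trade-off the study describes: a small, temporary cost in predicted quality in exchange for data about candidates the model would otherwise never learn from.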

Many algorithms are complex tools that operate as a black box, in that little is known about how they are constructed. Vendors claim their algorithms and the data that power them are proprietary and rarely disclose information on their design. Employers and vendors of algorithmic hiring tools are increasingly being called upon—and in some cases, are required by local laws—to audit their algorithmic hiring assessments for discrimination and improve accountability.

How workers are challenging algorithmic management and surveillance

Workers are disrupting the harmful and inequitable effects of algorithmic management through legal challenges and collective action. Uber drivers who were deactivated from the app sued the company for racial discrimination, alleging the customer ratings system disproportionately leads nonwhite drivers and those with non-English accents to be negatively reviewed and fired.

Some gig workers in the United Kingdom have sued employers to have their data and algorithms made transparent so workers can understand why and how managerial decisions are being made. LA Rideshare Drivers United has organized collective actions against Uber and Lyft, calling for greater transparency on metrics that “gamify”—or motivate through a system of incentives, rewards, and penalties—performance.

If unchecked, algorithmic tools can reproduce the same discriminatory hiring and management practices by default and create security and safety risks for workers on the job, as well as prompt new legal questions regarding the protection of workers’ privacy rights. With careful implementation, regulatory action, and inclusive design with an emphasis on worker voice, algorithms could reduce bias and support worker well-being. More evidence building on the use of technology and artificial intelligence in the workplace can help ensure technology and algorithmic tools are improving equity and job quality.
