Does AI Reduce Bias in Hiring?
Artificial intelligence (AI) has the potential to reduce bias in the hiring process, but if the technology isn’t trained and implemented correctly, it can just as easily perpetuate bias. Developers and organizations must scrutinize the algorithms and data they use to ensure they are not furthering bias or discrimination.
What is a problem with using artificial intelligence to do recruiting tasks?
AI technology can save recruiters and hiring managers a great deal of time and help them work more efficiently, and, when implemented correctly, it can reduce human bias throughout the recruiting process. However, because AI is programmed and implemented by humans, its use sometimes leads to discrimination.

AI software that screens resumes for specific keywords, skills, or qualifications may exclude competent candidates who don’t fit the exact parameters, which can result in certain demographics being scored lower or screened out altogether. For AI software that sources candidates from their online profiles, qualified individuals without a large digital footprint could be overlooked.

Personality and behavioral assessments, gamification tasks, and AI-scored video interviews are all ways AI is used in the interview process. These tools don’t necessarily account for cultural differences or include accommodations for people with disabilities, and they can disadvantage older adults, who may be less technologically savvy than younger applicants. It is essential that organizations evaluate the AI they use and the decisions it makes to ensure they are free from bias and discrimination and that qualified candidates aren’t being passed over.
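To make the resume-screening problem concrete, here is a minimal sketch of a rigid keyword filter. All resume text, names, and keywords are invented for illustration; real screening software is far more complex, but the failure mode is the same: an exact-match filter rejects a qualified candidate who phrases a skill differently.

```python
# Hypothetical keyword screen illustrating how exact-match filters can
# exclude qualified candidates. Keywords and resumes are invented examples.

REQUIRED_KEYWORDS = {"python", "machine learning", "sql"}

def passes_screen(resume_text: str) -> bool:
    """Return True only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

# Two candidates with the same skills, phrased differently:
resume_a = "5 years of Python and SQL; built machine learning pipelines."
resume_b = "5 years of Python and SQL; built production ML pipelines."

print(passes_screen(resume_a))  # True
print(passes_screen(resume_b))  # False — the synonym "ML" is not matched
```

The second candidate is screened out purely because of wording, which is exactly the kind of over-literal filtering the article describes.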
What is the best way of eliminating bias in machine learning?
Although bias will probably never be completely eliminated, there are steps that can be taken to reduce it. One of the best is to make certain that the data sets used to train the algorithms are comprehensive and representative of the entire population. There have been several examples of AI bias in healthcare, including a healthcare algorithm used by many hospitals to determine whether patients needed extra care. Originally, only data related to previous healthcare costs was used to make these predictions, which resulted in Black patients receiving extra care less frequently than white patients. After reevaluating the algorithm and changing the type of data used, AI bias was reduced by over 80%.
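A representativeness check of the kind described above can be sketched simply: compare each group’s share of the training data to its share of the target population and flag large shortfalls. The group labels, numbers, and 10% tolerance below are all invented assumptions for illustration, not a standard method.

```python
# Hypothetical training-data representativeness check.
# Groups, counts, and the tolerance threshold are illustrative assumptions.
from collections import Counter

def group_shares(records):
    """Fraction of records belonging to each demographic group."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def flag_underrepresented(records, population_shares, tolerance=0.10):
    """Flag groups whose share of the data falls short of their share of
    the population by more than `tolerance` (an assumed threshold)."""
    shares = group_shares(records)
    return [g for g, pop in population_shares.items()
            if shares.get(g, 0.0) < pop - tolerance]

training = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
population = {"A": 0.5, "B": 0.5}
print(flag_underrepresented(training, population))  # ['B']
```

A check like this only surfaces sampling gaps; fixing them still requires collecting better data or reweighting, as the healthcare example shows.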
How can AI overcome gender bias in recruitment?
Gender bias in the workplace has long been a hot topic, so it’s no surprise that many question whether AI will reduce gender bias in hiring. An article about gender bias in recruitment and how AI hiring tools are hindering women’s careers explained how identifying one’s gender as “female,” as opposed to “male,” in a job search returned fewer ads for higher-paying jobs and fewer results overall. On the hiring side, algorithmic bias resulted in recruiters being shown fewer female candidates. These algorithms are typically trained on previous hiring data, which is historically male-dominated, especially in certain fields. This bias can also result in women being offered lower-skilled, lower-paying jobs, even when they are just as qualified as their male counterparts. However, when properly coded and trained, AI can be a tool for mitigating gender bias in recruitment. It can identify and alter biased language or gender coding in job descriptions and advertisements. AI can also be trained to focus on specific skills and abilities rather than on gender identifiers such as names, titles, or education.
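The idea of identifying gender-coded language in job ads can be sketched as a simple word-list match. The word lists below are tiny illustrative samples loosely inspired by published research on gendered wording in job advertisements, not a vetted lexicon, and production tools would use much richer models.

```python
# Hypothetical detector for gender-coded words in a job ad.
# Word lists are small illustrative samples, not a complete lexicon.
import re

MASCULINE_CODED = {"competitive", "dominant", "aggressive", "rockstar", "ninja"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def gender_coded_words(ad_text: str) -> dict:
    """Return the masculine- and feminine-coded words found in the ad."""
    words = set(re.findall(r"[a-z]+", ad_text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We want a competitive, aggressive rockstar to join our collaborative team."
print(gender_coded_words(ad))
# {'masculine': ['aggressive', 'competitive', 'rockstar'],
#  'feminine': ['collaborative']}
```

Flagged words could then be shown to the ad’s author with neutral alternatives, which is the kind of rewriting assistance the article describes.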