How Will Anti-Bias Regulation Laws In AI Affect Recruitment?


By Emmanuel Marboeuf, CTO of Visage

Effective January 1, 2023, New York City will restrict employers' use of artificial intelligence and machine-learning products in hiring and promotion decisions. This is only one of many steps toward regulating artificial intelligence to mitigate bias in AI systems. Diversity and inclusion have been core values at Visage since its founding in 2016, and it is with great satisfaction that we see more and more initiatives going in the direction we took six years ago when we founded Visage.

Visage combines crowdsourcing, artificial intelligence, and a collaborative application to organize passive candidate sourcing among all parties involved in the recruitment process. The evolution of the regulation is the perfect opportunity to explain in more detail how Visage leverages AI to optimize sourcing in recruitment processes without introducing bias that could negatively impact natural persons (prospective candidates).

The problem with AI in recruitment

As Amazon’s experience with an AI tool for screening job applicants showed, the use of AI in recruitment can negatively impact certain groups, such as women or minorities. In 2018, Amazon had to shut down the entire program because there were clear indications that women’s profiles were being downgraded in favor of their male counterparts.

The culprit is clear: the training data set used to create the model was heavily biased. It reflected 10 years of candidate selections made by a tech workforce in which an overwhelming majority shared the same characteristics and, consciously or unconsciously, favored candidates from similar backgrounds or cultures.

The model was later amended to try to mitigate this bias, but without success, because at the end of the day a model can only be as good as the data set it was trained on. If the problem you are trying to solve is to select or rank candidates, the best you can hope for is that the model accurately reproduces the human decisions made in the past. If those decisions were morally wrong by today’s standards, then perhaps this is not a problem that should be solved directly with AI’s most common techniques.
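
To make the point concrete, here is a minimal Python sketch on synthetic data (purely hypothetical, unrelated to Amazon’s or Visage’s actual systems): a classifier trained on historically biased decisions learns to penalize a resume feature that merely correlates with group membership, even though that feature says nothing about skill.

# Hypothetical sketch with synthetic data: a model trained on biased hiring
# decisions reproduces that bias through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

skill = rng.normal(size=n)                      # the only thing that should matter
group = rng.integers(0, 2, size=n)              # 1 = historically disadvantaged group
proxy = group + rng.normal(scale=0.1, size=n)   # e.g. a keyword correlated with group

# Historical decisions: skill mattered, but the disadvantaged group was penalized.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train only on "neutral-looking" resume features; group itself is never given.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:", round(model.coef_[0][0], 2))
print("coefficient on proxy:", round(model.coef_[0][1], 2))  # negative: bias reproduced

The model never sees the group label, yet it still penalizes the proxy feature, because that is exactly what the historical decisions encoded.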

Can AI do more good than harm in talent acquisition?

I believe there are still ways AI can be helpful in the recruitment process. At Visage, we went to the root of the problem. Instead of using a black box that selects or ranks profiles based on job requirements, we rely on a few mechanisms, a combination of algorithms and models with humans in the loop, to help all parties involved in the recruitment process. No automated decisions are ever made based on pictures, race, ethnicity, or gender.

  • We rely on a community of trained professionals to find and select talents for our clients.
  • Our sourcers are trained in the Visage Academy to source diverse candidates, and their work is randomly QA’d by talent acquisition and diversity experts to make sure it meets our standards.
  • Visage AI essentially helps sourcers find the jobs where they are most likely to have connections with skilled professionals, and source the right number of profiles to reach our clients’ goals. It only gives indications of the most important professional skills required for the job; it doesn’t rank or promote specific individuals.
  • We rely on extensive data analysis and statistical models to find patterns that help us understand who is or isn’t a good match for our clients. These patterns are transparent and known, and if they don’t fit our values regarding diversity and inclusion, they are not implemented.
  • We never collect or show data such as gender, race, ethnicity, or pictures that could lead our clients to make a biased decision (see the sketch after this list).
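
As a purely illustrative sketch of that last point, this hypothetical Python helper strips protected attributes from a candidate profile before it is ever displayed. The field names are invented for the example and are not Visage’s actual schema.

# Hypothetical illustration: remove fields that could invite biased decisions
# before a profile is shared. Field names are illustrative only.
from typing import Any

PROTECTED_FIELDS = {"gender", "race", "ethnicity", "photo_url", "date_of_birth"}

def sanitize_profile(profile: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the profile with protected attributes removed."""
    return {k: v for k, v in profile.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "A. Candidate",
    "skills": ["Python", "SQL"],
    "years_of_experience": 6,
    "photo_url": "https://example.com/photo.jpg",
    "gender": "F",
}
print(sanitize_profile(candidate))
# {'name': 'A. Candidate', 'skills': ['Python', 'SQL'], 'years_of_experience': 6}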

How can you make sure your algorithms and models are compliant?

Society is still at a very early stage in its understanding of unconscious bias in technology. However, a few companies already offer external audits of your algorithms and models to ensure compliance with the laws. I am not going to recommend any of them yet, because there is too little data to know whether the auditors really follow the law and whether their audits are thorough. This domain should expand quite a bit in the next few years as the laws are deployed, and hopefully we will see certifications emerge soon. At Visage, we recently committed to having our algorithms and models audited by an external third party before January 2023.
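
For a rough idea of what such an audit looks at, here is a simplified, hypothetical check that compares selection rates across groups, in the spirit of the EEOC’s four-fifths rule. It is an illustration only, not the exact methodology the New York City law prescribes, and the numbers are made up.

# Simplified, hypothetical illustration of a bias-audit style check:
# compare selection rates across groups and flag large disparities
# (the classic "four-fifths rule" from EEOC guidance).

def selection_rate(selected: int, total: int) -> float:
    return selected / total if total else 0.0

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, number of applicants)."""
    rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Hypothetical numbers for illustration only.
outcomes = {"group_a": (60, 200), "group_b": (30, 200)}
for group, ratio in impact_ratios(outcomes).items():
    flag = "OK" if ratio >= 0.8 else "REVIEW"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")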

It’s time to ease the burden on yourself and your recruiters. Take the first step by expanding your sourcing and engagement capabilities with our intelligent global platform. To learn more about how Visage can help you reduce the likelihood of recruiter burnout impacting your team, reach out to us for a demo.