Oct 12th 2021
AI-powered analytics is used in some form by 89% of US HR departments to pre-screen candidates. But does its use of emotion and vocal analytics, with the inherent risk of racial and gender bias, make it fit for purpose?
Facial analytics models the 42 muscles of the human face, using up to 90 nodal points to digitally map its unique contours into a "faceprint" that is stored in a database.
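Vendor implementations are proprietary, but the core idea of a faceprint, reducing a face image to a numeric signature that can be stored and compared, can be sketched with the open-source face_recognition library. Note that this library's model produces a 128-dimensional encoding rather than the nodal-point map described above, and the file name candidate.jpg is just a placeholder.

```python
import face_recognition

# Load a photo (file name is a placeholder) and detect the faces in it
image = face_recognition.load_image_file("candidate.jpg")

# Encode each detected face as a 128-dimensional vector: the "faceprint"
encodings = face_recognition.face_encodings(image)

if encodings:
    faceprint = encodings[0]
    print(f"Faceprint has {len(faceprint)} dimensions")
    # Matching against a stored faceprint is a simple distance comparison:
    # face_recognition.compare_faces([stored_faceprint], faceprint)
```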
Once the sole preserve of security systems for police services and controlled areas such as airports and data centres, AI facial and vocal recognition has now firmly embedded itself in HR departments as firms seek to streamline the job pre-screening process. But as sexy as the technology sounds, it comes with controversy: privacy risks, accusations of racial and gender bias, and even charges of junk science.
One such HR tech company, HireVue, became a media spectacle when it added an emotion analytics algorithm to its video-based candidate pre-screening product in 2019. HireVue touted the algorithm's ability to collate non-verbal cues, including eye movements, body language, clothing detail and voice intonation, to profile a candidate. The algorithm was proprietary and secret, and so could not be independently validated, but that didn't stop big employers like Unilever, Dunkin' Donuts and IBM from rushing to use it.
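HireVue's actual feature set was never published, but as a rough illustration of what a "voice intonation" feature can look like, here is a minimal Python sketch using the open-source librosa library to estimate a pitch contour from a recorded answer. The file name answer.wav is a placeholder, and the mean/variability summary is an assumption for illustration, not anything HireVue disclosed.

```python
import librosa
import numpy as np

# Load a (hypothetical) recorded interview answer
y, sr = librosa.load("answer.wav")

# Estimate the fundamental frequency (pitch) frame by frame
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

# Crudely summarise "intonation" as average pitch and its variability
print("mean pitch (Hz):", np.nanmean(f0))
print("pitch variability (Hz):", np.nanstd(f0))
```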
The rush continued until EPIC (the Electronic Privacy Information Center) filed a complaint with the FTC challenging the fairness and transparency of emotion-based analytics. That complaint, along with a report by the AI Now research institute condemning the "systemic racism, misogyny, & lack of diversity" in the AI industry as a whole, led HireVue to abandon the product earlier this year. On top of all of the aforementioned problems, AI also poses non-trivial privacy risks through its use of large datasets of personal information for training and storage.
But perhaps the greatest injustice is AI's propensity for unconscious bias in candidate screening, best exemplified by Amazon in 2018, when its AI recruiting system was found to be marking down the hiring scores of female candidates. The algorithm looked for gender signals on a CV, such as 'female chess champion' or 'female sports award recipient', and made automated adverse decisions. Amazon, like HireVue, had to roll back on using the system.
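The mechanism behind this kind of failure is easy to reproduce. Below is a deliberately tiny, made-up sketch in Python with scikit-learn, in no way Amazon's actual system, showing how a classifier trained on biased historical outcomes learns the gendered word itself as a negative signal.

```python
# Toy illustration (not Amazon's system) of how a text classifier
# learns gendered proxy signals from biased historical hiring data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up training set: CV snippets and past hiring outcomes (1 = hired).
# If past decisions favoured men, gendered words become predictive features.
cvs = [
    "chess champion, software engineer",
    "women's chess champion, software engineer",
    "sports award recipient, data analyst",
    "women's sports award recipient, data analyst",
]
hired = [1, 0, 1, 0]  # biased historical labels

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" acquires a negative coefficient,
# i.e. the model has encoded the historical bias as a feature.
for word, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10}: {coef:+.2f}")
```

On this toy data, "women" comes out with a clearly negative weight while the neutral words sit near zero: the proxy-learning problem in miniature.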
Artificial Intelligence Act
The EU has responded to AI concerns with a draft Artificial Intelligence Act proposal this year, setting out seven key requirements that AI systems should meet to be deemed trustworthy:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental well-being
- accountability
Critics of the act say its definition of AI is too broad, yet at the same time too technical, and therefore likely to become quickly outdated. Experts expect challenges to the act to be a drawn-out process as industries push back.
The message to HR departments and other AI consumers is buyer beware: AI carries real potential for bias. The GDPR already warns of the risks of automated profiling, but there is also a significant risk of reputational harm to a company's image as a good corporate citizen.
Conduct pre-purchase assessments of HR tech tools with the AI Act in mind, and steer clear of proprietary black-box algorithms if you can. AI is still in its infancy and hence a potential lawsuit magnet.
#dposolutions #dataprivacy #hiringtech
Looking for Data Protection Training Material?
Check out our Online Shop page
Subscribe to our Newsletter
-Get Notified of New Posts Like These-