New York adopts law for the use of AI in hiring – 07/04/2023 – Market


This Wednesday (5), a regulation took effect in New York City requiring “bias audits” of companies that use artificial intelligence to screen candidates’ resumes and to decide on employee promotions.

The law, which is enforced by the city’s Department of Consumer and Worker Protection, directs all companies and employment agencies that use “automated tools” in hiring to conduct annual bias audits of their algorithms and to publish the results. The audits, which must be carried out by independent auditors, measure how the use of AI affects the selection of candidates of a given race, ethnicity or gender and whether it results in discrimination.
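To make the idea concrete, here is a minimal sketch, in Python, of the selection-rate and impact-ratio calculation that typically sits at the core of such a bias audit; the function, group labels and sample data are illustrative assumptions, not taken from the text of the New York rule.

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Selection rate and impact ratio per demographic group.

    `candidates` is an iterable of (group, selected) pairs, where `group`
    is a label such as a race/ethnicity or sex category and `selected`
    is True if the automated tool advanced the candidate.
    """
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, selected in candidates:
        totals[group] += 1
        if selected:
            advanced[group] += 1

    # Selection rate: share of each group's candidates the tool advanced.
    rates = {g: advanced[g] / totals[g] for g in totals}
    # Impact ratio: each group's rate divided by the highest group's rate.
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best if best else 0.0) for g in rates}

# Illustrative data only: 100 candidates per group, different outcomes.
sample = [("group_a", i < 40) for i in range(100)] + \
         [("group_b", i < 25) for i in range(100)]
for group, (rate, ratio) in impact_ratios(sample).items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this invented example, group_b is advanced at 0.62 times the rate of group_a; an audit report would flag that gap for the employer to explain or correct.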

Many companies already use artificial intelligence in their hiring processes, relying on software that filters resumes by keyword and chatbots that score candidate interviews.

There have been reports of several episodes in which algorithms ended up penalizing certain candidates. In 2018, for example, Amazon suspended the use of a tool that screened job applicants by giving submitted resumes scores from zero to five stars. The e-commerce giant realized that the algorithm was hurting women applying for software development and other technical jobs. The reason was that the algorithm had been trained on patterns in resumes submitted to the company over the previous ten years. Most came from men, and the algorithm “learned” that male candidates were preferable. Resumes containing the words “female” or “women” were penalized.
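As a hedged illustration of the mechanism described above, and not a reconstruction of Amazon’s actual system (whose details were never published), the sketch below trains a toy word-weight scorer on invented “historical” hiring outcomes that skew against one group, and shows it reproducing that skew; every name and data point is made up.

```python
from collections import Counter

# Toy "historical" resumes: past hiring happened to skew against resumes
# containing the word "women's", simply because most past hires were men.
history = [
    ("java python backend", 1),
    ("c++ systems linux", 1),
    ("women's chess club python", 0),
    ("java distributed systems", 1),
    ("women's coding society c++", 0),
]

def train_word_weights(examples):
    """Score each word by how often it co-occurred with a past hire."""
    hired, seen = Counter(), Counter()
    for text, label in examples:
        for word in set(text.split()):
            seen[word] += 1
            hired[word] += label
    return {w: hired[w] / seen[w] for w in seen}

def score(text, weights):
    """Average the learned word weights; unknown words get a neutral 0.5."""
    words = text.split()
    return sum(weights.get(w, 0.5) for w in words) / len(words)

weights = train_word_weights(history)
# Two otherwise similar resumes: the one mentioning "women's" scores lower,
# because the model has simply memorized the historical imbalance.
print(score("python backend java", weights))      # ~0.83
print(score("python backend women's", weights))   # ~0.50
```

The only point of the toy example is that a model fit to skewed historical outcomes reproduces the skew unless it is audited and corrected.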

“There are many cases of algorithmic discrimination: models that were used at scale and whose intrinsic discrimination only became evident eight or ten years later, when they were audited by civil society organizations or academic researchers,” says Dora Kauffman, professor in the Intelligence Technologies and Digital Design program at PUC-SP.

The New York regulation is the first in the US to address algorithmic bias in hiring. The few AI or social media regulations that exist in the US have been passed at the city or state level; federal legislation has faced strong resistance from the Silicon Valley-aligned congressional caucus. California, New Jersey, Vermont and the District of Columbia are among the other jurisdictions discussing rules for AI in hiring.

“The New York law comes at a time when there is a consensus that the impacts of AI need to be regulated. In Brazil we have Bill 2338; there is the AI Act, the European proposal now in the final stages of negotiation; and several other proposals are under discussion around the world,” says Laura Schertel, president of the Digital Law Commission of the Federal OAB (Brazilian Bar Association) and professor at UnB (University of Brasília).

In Brazil, Bill 2338, introduced by Senator Rodrigo Pacheco (PSD-MG), also provides for transparency, a right to appeal and audits when AI is used in hiring and employment decisions. It classifies as “high risk” artificial intelligence systems used for “recruiting, screening, filtering, evaluating candidates, making decisions about promotions”. The bill, which is still moving through Congress, establishes that high-risk systems must undergo an evaluation of the data they use, “with appropriate measures to control human cognitive biases”, to avoid “the generation of biases due to problems in the classification, failures or lack of information in relation to affected groups, lack of coverage or distortions in representativeness”.

The text was based on a draft by a commission of jurists chaired by Ricardo Cueva, a justice of the STJ (Superior Court of Justice), with lawyer Laura Schertel as rapporteur. Rather than being sectoral like the US rules, it is horizontal, applying across multiple sectors according to the risks of each use of AI. “Since Brazil is an extremely unequal country, it is very important to examine AI biases carefully in this hiring context, because the historical data that feed the algorithms often reflect an unequal reality.”

While welcoming the city’s adoption of regulation, critics argue that New York’s rules will not be enough to curb algorithmic discrimination. The law does not cover, for example, biases that harm elderly or disabled people. And the audit rules, the critics add, can be easily circumvented, because they define “automation” very narrowly, applying only when AI is the main driver of the decision. In practice, the algorithm is typically used to filter the best candidates out of thousands of resumes, and the final decision is made by a human.

The New York law, passed in 2021, also requires transparency: all companies that use artificial intelligence in hiring and promotion decisions must inform candidates.

Among the AI tools used in this area are software that filters resumes and recommends the best candidates for an open position; algorithms that find job postings and send the most suitable ones to candidates; programs that collect information from social networks to build personality profiles of candidates; and chatbots that ask questions to determine whether applicants should be selected for an interview.

“The challenge is who will do the auditing. Regulators generally have little knowledge of the technology, and given the complexity of the algorithms and the trade-secret protections around them, the pool of professionals able to perform this role is small,” says Kauffman. “Discrimination is not always obvious, and a poor audit can do harm.”
