Office jobs are at greater risk due to AI – 08/27/2023 – Tech

The American workers whose careers have been upended by automation in recent decades have mostly been low-skilled, largely men who worked in the manufacturing sector.

But that’s changing with a new kind of automation: artificial intelligence systems known as large language models, like ChatGPT or Google’s Bard. These tools can rapidly process and synthesize information and generate new content.

The jobs most exposed to automation today are office jobs, which require more cognitive skills, creativity and higher education. The professionals involved are more likely to be well paid and slightly more likely to be women, as various surveys have found.

“This took most people by surprise, myself included,” commented Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI, who had predicted that creativity and technological skills would protect people from the effects of automation. “To be brutally blunt, we had a hierarchy of things technology could do, and we felt comfortable saying that things like creative work, professional work, emotional intelligence, are unlikely to ever be done by machines. Now that’s been subverted.”

Several new studies have analyzed the tasks of American workers, using the US Department of Labor’s O*Net database, and estimated which of those tasks large language models could perform. The research concluded that these models can significantly help with tasks in a fifth to a quarter of professions. Analyses by groups including the Pew Research Center and Goldman Sachs concluded that the models could perform some of the tasks in most jobs.

For now, the models still occasionally produce incorrect information, and are more likely to help practitioners than replace them, said Pamela Mishkin and Tyna Eloundou, researchers at OpenAI, the company and research lab responsible for ChatGPT. They did a similar study, looking at 19,265 tasks performed in 923 professions, and concluded that large language models can perform some of the tasks that 80% of American workers perform.

But they also found reason for some workers to fear that the big language models could take their place, as OpenAI CEO Sam Altman told The Atlantic magazine last month: “Jobs are going to disappear, without a doubt.”

The researchers asked an advanced ChatGPT model to analyze O*Net data and determine which tasks large language models could perform. The model said 86 professions are fully exposed (that is, every task they involve could be assisted by the tool); the human researchers put that number at 15. Humans and AI agreed on one point: the most exposed profession is that of mathematicians.
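For readers curious about the mechanics, the kind of exposure scoring described above can be approximated in a few dozen lines. The sketch below is not the researchers’ actual code; it simply asks a chat model to rate each O*Net-style task and then flags occupations in which every task was rated assistable. The file name, prompt wording and model name are illustrative assumptions.

```python
# A minimal sketch (not the study's actual code) of task-level exposure scoring.
# Assumes an OpenAI-style API key in the environment and a hypothetical tasks.csv
# with columns: occupation, task.
import csv
from collections import defaultdict

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rate_task(task_description: str) -> bool:
    """Ask the model whether an LLM could meaningfully assist with this task."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name, for illustration only
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: could a large language model "
                        "meaningfully assist with the following work task?"},
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")


exposed = defaultdict(int)
total = defaultdict(int)
with open("tasks.csv", newline="") as f:
    for row in csv.DictReader(f):
        total[row["occupation"]] += 1
        exposed[row["occupation"]] += rate_task(row["task"])

# An occupation counts as "fully exposed" if every one of its tasks was rated assistable.
fully_exposed = [occ for occ in total if exposed[occ] == total[occ]]
print(f"{len(fully_exposed)} occupations rated fully exposed")
```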

The analysis concluded that only 4% of professions include no tasks that could be assisted by the technology. Among them: athletes, dishwashers, and helpers to carpenters, painters and roofers. But even workers like plumbers can use AI for parts of their jobs, such as scheduling appointments, serving customers and optimizing routes, said Mike Bidwell, CEO of Neighborly, a residential services company.

OpenAI has a commercial interest in promoting its technology as being very useful to professionals, but other researchers say there are still uniquely human skills that cannot be automated — such as social skills, teamwork, caring for the sick and elderly, and manual or mechanical work. “There will be no shortage of things for humans to do, not in the foreseeable future,” Brynjolfsson said. “But things are different: learning to ask the right questions, really interacting with people, physical work that requires dexterity.”

For now, the big language models are likely to help many professionals be more productive in their existing jobs, researchers said — something like giving office professionals, even entry-level ones, a chief of staff or research assistant (though this can be a sign of trouble for human assistants).

Take the case of those who write computer code: a study of GitHub’s Copilot, an AI program that helps programmers by suggesting code and functions, found that professionals who used it completed a task 56% faster than those who performed the same task without it.

“There is a misconception that exposure is necessarily a negative thing,” Mishkin commented. After reading descriptions of all the occupations, she and her colleagues learned an important lesson: “No model is ever going to do it all.”

Big language models might help write laws, for example, but they couldn’t pass laws. They could act as therapists — people could share their thoughts, and models could respond by offering ideas based on cases of proven value — but they lack human empathy and the ability to interpret situations with nuance.

The version of ChatGPT open to the public carries risks for workers: it makes frequent errors, may reflect human biases, and is not secure enough for companies to trust it with confidential information. Companies that use it get around these obstacles with tools that employ the technology in a so-called closed domain: they train the model using only certain content, and keep the information they feed the model confidential.
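To make the “closed domain” idea concrete, here is a minimal sketch, assuming an OpenAI-style API, of how a company might let a model answer only from its own material: passages are retrieved from an internal document store and the model is instructed to rely on them alone. The document snippets, model names and prompt wording are hypothetical, not any specific firm’s implementation.

```python
# A minimal sketch of a closed-domain question answerer over internal documents.
# Everything here (documents, models, prompt) is illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

internal_docs = [  # hypothetical internal content
    "Q2 research note: Acme Corp revenue grew 12%, margins stable.",
    "Compliance memo: advisors must disclose fund fees before recommending.",
]


def embed(texts: list[str]) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in out.data])


doc_vectors = embed(internal_docs)


def answer(question: str, top_k: int = 1) -> str:
    # Retrieve the most similar internal passages by cosine similarity...
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(internal_docs[i] for i in np.argsort(sims)[::-1][:top_k])
    # ...and instruct the model to answer only from that retrieved context.
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer only from the provided internal documents. "
                        "If the answer is not there, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content


print(answer("How did Acme Corp perform last quarter?"))
```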

Morgan Stanley uses a version of the OpenAI model built for its business, which has been fed 100,000 internal documents totaling more than 1 million pages. Financial advisors use it to find information quickly so they can answer client questions, for example whether to invest in a particular company (previously, answering that question required locating and reading numerous reports).

That gives advisors more time to talk to clients, said Jeff McMillan, the firm’s head of data analytics for wealth management. The tool knows nothing about individual clients, nor about any human touch that might be needed when, for example, a client is ill or going through a divorce.

Professional recruitment firm Aquent Talent is using a corporate version of Bard. Typically, humans scan résumés and portfolios to find someone whose qualifications match what a job opening calls for. The tool can do this much more efficiently. But the work still requires human review, especially in hiring, because human biases are built into the tools, said Rohshann Pilla, president of Aquent Talent.

Harvey, a startup backed by OpenAI, sells such a tool to law firms. Senior lawyers use it to strategize, for example devising ten questions to ask in a deposition or summarizing how the firm has negotiated similar agreements.

“It’s not a question of ‘this is the advice I would give a client,’” said one of Harvey’s co-founders, Winston Weinberg. “It’s ‘how can I sift through this information quickly so I can get to the point of giving advice?’ It still takes a person to make the decision.”

He said the tool is especially useful for attorneys’ assistants. They use it to learn—asking questions like “what is this type of contract for” or “why was it written like that?”—or to write first drafts of texts, for example summaries of financial statements.

“Suddenly they have an assistant,” he said. “People will be able to do higher-level jobs in less time, moving them to a more advanced position in their careers in less time.”

Others who study how companies use large language models have found a similar pattern: the models help junior employees the most. A study of customer service agents by Brynjolfsson and colleagues found that using AI increased productivity by 14% overall, and by 35% for lower-skilled workers, who moved up the learning curve faster with the help of the models.

“AI bridges the gap between entry-level and highly competent professionals,” commented Robert Seamans of the Stern School of Business at New York University, who co-wrote a paper finding that the professions most exposed to large language models are telemarketers and certain schoolteachers.

The most recent wave of automation, affecting jobs in the manufacturing sector, has deepened income inequality, research shows, by depriving workers without college degrees of well-paying jobs. But some scholars say the big language models can do the opposite, narrowing the disparity between the highest-paid professionals and everyone else.

“My hope is that AI will help people with less formal education get more done,” said David Autor, a labor economist at MIT, “by lowering the barriers to entry into high-paying elite jobs.”

Translated by Clara Allain
