Oppenheimer: Nolan fails to warn Silicon Valley – 07/21/2023

Christopher Nolan’s new film “Oppenheimer” follows the life of the physicist who led the effort to create the first atomic bomb, from university to public recognition and then political persecution. It also addresses the remorse caused by his creation, the bombs that killed at least 220,000 people in Hiroshima and Nagasaki.

In conversation with scientists, writers and journalists after a screening of the film, Nolan said he hoped the story would serve as a moral lesson for Silicon Valley. The comment refers to the debate over the risks of artificial intelligence.

The filmmaker reached for the hyperbolic example of AIs with control over nuclear weapons. “If we allow people to separate the technology from the person behind the mechanism, whether it’s programming or application, we’re screwed. There has to be accountability.”

The film, however, pays little attention to this aspect. An adaptation of the 2005 book “American Prometheus,” it tries to address the pain Robert Oppenheimer, the man, felt over the ethical conflicts in the creation of the bomb. The comparison is with Prometheus, the titan who gave fire to humanity and was punished by Zeus for it.

In the end, the film pays more attention to the political persecution the scientist suffered for his links with leftist movements. Oppenheimer was one of the voices critical of the development of the hydrogen bomb and lost that debate.

The hottest name in artificial intelligence, Sam Altman, chief executive of OpenAI, drew a parallel with Oppenheimer’s story back in 2019.

Days after receiving the first $1 billion investment from Microsoft, Altman said that OpenAI was on the same scale as the Manhattan Project – the program that developed the first atomic bombs – “at least in terms of ambition.” The statement was made in an interview with The New York Times.

The company’s mission, he said, is “to develop artificial general intelligence that benefits all of society.” At this theoretical stage, algorithms could generalize responses to different situations, just as humans do.

Altman has recently written that artificial general intelligence could lower the price of education and healthcare by reducing demand for expensive professionals, which would make the world more equal.

Eighty years ago, during his work at Los Alamos, Oppenheimer seemed convinced that the atomic bomb would put an end to all wars, easing his moral conflict by focusing on the Nazi target, as shown in Nolan’s film.

Just as the scientist recognized the risks of the project he led, Altman also speaks frequently of the high risk that AI will cause economic and social disruption and empower bad actors. He has said as much in posts published on OpenAI’s own blog.

In this way, the AI startup executive places the business he leads at the center of the global debate, whether for its chance of generating economic development or for the fear that it will make humanity obsolete and subservient to machines.

It is this exaggeration of benefits and risks that the neuroscientist Miguel Nicolelis called marketing, in a recent interview with Folha.

Artificial intelligence has the potential to transform many things, but it is not yet known what, or to what extent. Data from the consultancy McKinsey show that only 11% of ideas involving AI become viable businesses or resources in the production chain – and the figure is even lower for language models such as ChatGPT.

Leaving a moral lesson for Silicon Valley billionaires misses the point, since these actors are already in control. Microsoft, Facebook, Google, the United States and China chose first the attention economy and then artificial intelligence as strategic priorities.

With these companies in charge, there have already been tragedies, such as the dehumanization of the Rohingya minority in Myanmar driven by Facebook’s curation algorithm, which uses AI to hold users’ attention for longer.

Given that “technology happens because it is possible,” as Oppenheimer said when asked about the bomb in 1945, alerting users has more effect than warning bosses who are already involved. If the effects are public, everyone should have a voice.

The trauma of the atomic bomb was not debated only by physicists based in the American West or by governments involved in the Cold War. Books like “Hiroshima,” by John Hersey, touched the world in 1946.

It was indignant people who took to the streets from Tokyo to São Paulo in 1968, the year in which the five major nuclear powers agreed to sign the Treaty on the Non-Proliferation of Nuclear Weapons.

A scene from “Oppenheimer” shows how the public’s silence in the face of the American bombings of Tokyo, which followed the attack on Pearl Harbor, created the political environment for the detonation of the Fat Man and Little Boy bombs. Those earlier offensives left around 100,000 dead, historians estimate.

The problem remains unresolved, as it does with social networks. The Black Lives Matter anti-racist movement shows, on the other hand, that the public can turn platforms in its favor, projecting the voices of previously invisible groups.

Stories like Oppenheimer’s have long circulated in the public sphere. He was a white Western man with an academic life in California, the place where Sam Altman went to study and where Mark Zuckerberg, Steve Jobs, Google’s Larry Page and others went to found their businesses.

Revisiting the trajectory of a persecuted and repentant hero will not awaken the conscience of these executives and founders, who hold so much power in their hands.
