Sam Altman consolidates power at OpenAI – 09/29/2024 – Tec
“OpenAI is nothing without its people.” That was the phrase echoed by dozens of employees on social media in November to pressure the board, which had fired CEO Sam Altman, to reinstate him.
These words were repeated again on Wednesday (25), when Mira Murati, the company’s prominent chief technology officer (CTO), announced her departure along with two other senior leaders: Bob McGrew, director of research, and Barret Zoph, vice president of research.
Murati’s decision shocked the team and pointed to a new direction for the nine-year-old company that has grown from a makeshift artificial intelligence research organization to a commercial giant. Altman was only notified in the morning, just hours before Murati sent a company-wide message.
In the months since the board battle, Altman has surrounded himself with allies as the fast-growing startup moves forward with plans to restructure itself as a for-profit company.
It also emerged this week that Altman discussed with the board the possibility of acquiring an equity stake, as the San Francisco-based company seeks to raise more than $6 billion at a valuation of $150 billion.
These conversations come after Altman, already a billionaire from his earlier technology ventures and investments, had previously said he chose not to take a stake in OpenAI in order to remain neutral within the company.
This account of how Altman consolidated power and loyalty at the creator of ChatGPT is based on conversations with seven current and former employees, as well as advisers and executives close to the company’s leadership.
They said OpenAI plans to rely on existing technical talent and new hires to take over Murati’s responsibilities, and to use her departure to “flatten” the organization.
Altman will have greater technical involvement as the company seeks to maintain its lead over Google and other competitors.
Despite the drama, OpenAI remains a leader in AI. The startup launched its o1 model earlier this month, which it says is capable of reasoning — a feat rivals Meta and Anthropic are also pursuing.
A person familiar with the matter said Murati is focused on successfully transitioning her teams before turning her attention to what comes next.
With Murati’s departure, Altman promoted Mark Chen to lead research. Jakub Pachocki, who took over as chief scientist in May, had already replaced Ilya Sutskever.
In an interview with the Financial Times earlier this month, in which Murati introduced Chen as the main leader of the o1 project, Chen said that AI systems’ ability to reason “would improve our offerings and help drive improvements across all of our programs.”
There will likely be more changes in the coming days, as Altman cuts short a trip to Europe this week to return to the company’s headquarters in San Francisco.
Executives remaining at OpenAI include Brad Lightcap, the chief operating officer who leads enterprise deals, and Jason Kwon, chief strategy officer, both longtime allies who worked under Altman at the startup incubator Y Combinator.
In June, Altman hired Kevin Weil, who previously worked at Twitter, Instagram and Facebook, as chief product officer, and Sarah Friar, the former CEO of Nextdoor, a neighborhood-focused social network, as chief financial officer.
Both come from consumer technology companies, focusing on products and user growth rather than science or engineering.
Their roles are new to OpenAI but familiar to most Silicon Valley startups, marking the company’s shift to becoming a more traditional technology group focused on building products that attract consumers and generate revenue.
OpenAI said these efforts are not incompatible with ensuring that AI benefits everyone.
“As we evolve from a research lab to a global company delivering advanced AI research to hundreds of millions of people, we remain true to our mission and are proud to launch the industry’s most capable and secure models to help people solve difficult problems,” said an OpenAI spokesperson.
Friar sought to boost morale this week, telling staff that the $6 billion funding round, which is expected to close next week, was oversubscribed, arguing that its high valuation was a testament to their hard work.
Another prominent newcomer is Chris Lehane, a former adviser to then-US President Bill Clinton and a former vice president at Airbnb, who advised Altman during the boardroom crisis and joined the company earlier this year.
He recently took over the role of vice president of global affairs from Anna Makanju, OpenAI’s first policy hire, who has moved to a newly created role as vice president of global impact.
With the latest departures, Altman has said goodbye to two of the senior executives who raised concerns about him to the board last October: Sutskever and Murati, who has said she was approached by the board and was perplexed by the decision to fire him.
Those concerns included a leadership style in which Altman undermined people and pitted them against one another, creating a toxic environment, according to several people with knowledge of the decision to fire him.
Within a day, as investors and employees rallied behind Altman, Murati and Sutskever joined the calls for his return and stayed with the company, wanting to steady the ship and keep it sailing toward the mission: building artificial general intelligence (AGI), systems that could rival or surpass human intelligence, for the benefit of humanity.
That was the mantra under which OpenAI was founded in 2015 by Elon Musk, Altman and nine others. It began as a non-profit organization, then transformed into a capped-profit entity in 2019.
Now, as it seeks to close its latest multibillion-dollar financing round, the company is rethinking its corporate structure to attract investors and generate greater returns. Only two co-founders, Altman and Wojciech Zaremba, remain with the company. Board Chairman Greg Brockman is on sabbatical until the end of the year.
For many OpenAI employees, there is a desire to build AGI and reach that goal before competitors such as Meta or Musk’s xAI. They buy into the so-called “cult of Sam” and believe he will lead them to that breakthrough.
However, several employees have expressed concerns about how that goal is being pursued, suggesting that product development is being prioritized over safety.
Daniel Kokotajlo, a former AI governance researcher, said that when he left the company in March, the closest OpenAI had come to a plan for ensuring the safety of AGI was the final appendix of a December paper written by Jan Leike, a safety researcher, together with Sutskever.
“You would expect a company with over 1,000 people building this to have a comprehensive written plan to ensure AGI is safe, which would be published so it could be critiqued and evaluated,” he said.
“OpenAI knows that any such plan would not stand up to scrutiny, but this is the minimum acceptable for an institution building the most powerful and dangerous technology of all time.”
OpenAI pointed to its preparedness framework as an example of its transparency and planning, adding that the technology could also bring many benefits.
“OpenAI continues to invest significantly in safety research, security measures, and third-party collaborations, and we will continue to oversee and evaluate these efforts,” said Zico Kolter and Paul Nakasone, members of the board’s independent Safety and Security oversight committee.