Inside the Chaos at OpenAI

Sam Altman’s weekend of shock and drama began a year ago, with the release of ChatGPT.

Illustration by Joanne Imperio / The Atlantic


Updated at 8:15 a.m. ET on November 20, 2023

To truly understand the events of this past weekend—the shocking, sudden ousting of OpenAI’s CEO, Sam Altman, arguably the avatar of the generative-AI revolution, followed by reports that the company was in talks to bring him back, and then yet another shocking revelation that he would start a new AI team at Microsoft instead—one must understand that OpenAI is not a technology company. At least, not like other epochal companies of the internet age, such as Meta and Google.

OpenAI was deliberately structured to resist the values that drive much of the tech industry—a relentless pursuit of scale, a build-first-ask-questions-later approach to launching consumer products. It was founded in 2015 as a nonprofit dedicated to the creation of artificial general intelligence, or AGI, that should benefit “humanity as a whole.” (AGI, in the company’s telling, would be advanced enough to outperform any person at “most economically valuable work”—just the kind of cataclysmically powerful tech that demands a responsible steward.) In this conception, OpenAI would operate more like a research facility or a think tank. The company’s charter bluntly states that OpenAI’s “primary fiduciary duty is to humanity,” not to investors or even employees.

That model didn’t exactly last. In 2019, OpenAI launched a subsidiary with a “capped profit” model that could raise money, attract top talent, and inevitably build commercial products. But the nonprofit board maintained total control. These corporate minutiae are central to the story of OpenAI’s meteoric rise and Altman’s shocking fall. Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution. For years, the two sides managed to coexist, with some bumps along the way.

This tenuous equilibrium broke one year ago almost to the day, according to current and former employees, thanks to the release of the very thing that brought OpenAI to global prominence: ChatGPT. From the outside, ChatGPT looked like one of the most successful product launches of all time. It grew faster than any other consumer app in history, and it seemed to single-handedly redefine how millions of people understood the threat—and promise—of automation. But it sent OpenAI in polar-opposite directions, widening and worsening the already present ideological rifts. ChatGPT supercharged the race to create products for profit as it simultaneously heaped unprecedented pressure on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks. This strained the already tense relationship between OpenAI’s factions—which Altman referred to, in a 2019 staff email, as “tribes.”

In conversations between The Atlantic and 10 current and former employees at OpenAI, a picture emerged of a transformation at the company that created an unsustainable division among leadership. (We agreed not to name any of the employees—all told us they fear repercussions for speaking candidly to the press about OpenAI’s inner workings.) Together, their accounts illustrate how the pressure on the for-profit arm to commercialize grew by the day, and clashed with the company’s stated mission, until everything came to a head with ChatGPT and other product launches that rapidly followed. “After ChatGPT, there was a clear path to revenue and profit,” one source told us. “You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now.”

We still do not know exactly why Altman was fired. He has not responded to our requests for comment. The board announced on Friday that “a deliberative review process” had found “he was not consistently candid in his communications with the board,” leading it to lose confidence in his ability to be OpenAI’s CEO. An internal memo from the COO to employees, confirmed by an OpenAI spokesperson, subsequently said that the firing had resulted from a “breakdown in communications” between Altman and the board rather than “malfeasance or anything related to our financial, business, safety, or security/privacy practices.” But no concrete, specific details have been given. What we do know is that the past year at OpenAI was chaotic and defined largely by a stark divide in the company’s direction.


In the fall of 2022, before the launch of ChatGPT, all hands were on deck at OpenAI to prepare for the release of its most powerful large language model to date, GPT-4. Teams scrambled to refine the technology, which could write fluid prose and code, and describe the content of images. They worked to prepare the necessary infrastructure to support the product and refine policies that would determine which user behaviors OpenAI would and would not tolerate.

In the midst of it all, rumors began to spread within OpenAI that its competitors at Anthropic were developing a chatbot of their own. The rivalry was personal: Anthropic had formed after a faction of employees left OpenAI in 2020, reportedly because of concerns over how fast the company was releasing its products. In November, OpenAI leadership told employees that they would need to launch a chatbot in a matter of weeks, according to three people who were at the company. To accomplish this task, they instructed employees to publish an existing model, GPT-3.5, with a chat-based interface. Leadership was careful to frame the effort not as a product launch but as a “low-key research preview.” By putting GPT-3.5 into people’s hands, Altman and other executives said, OpenAI could gather more data on how people would use and interact with AI, which would help inform GPT-4’s development. The approach also aligned with the company’s broader deployment strategy of gradually releasing technologies into the world so that people could get used to them. Some executives, including Altman, started to parrot the same line: OpenAI needed to get the “data flywheel” going.

A few employees expressed discomfort about rushing out this new conversational model. The company was already stretched thin by preparation for GPT-4 and ill-equipped to handle a chatbot that could change the risk landscape. Just months before, OpenAI had brought online a new traffic-monitoring tool to track basic user behaviors. It was still in the middle of fleshing out the tool’s capabilities to understand how people were using the company’s products, which would then inform how it approached mitigating the technology’s possible dangers and abuses. Other employees felt that turning GPT-3.5 into a chatbot would likely pose minimal challenges, because the model itself had already been sufficiently tested and refined.

The company pressed forward and launched ChatGPT on November 30. It was such a low-key event that many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering on how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool reached 1 million users within its first five days. The phrase low-key research preview became an instant meme within OpenAI; employees turned it into laptop stickers.

ChatGPT’s runaway success placed extraordinary strain on the company. Computing power from research teams was redirected to handle the flow of traffic. As traffic continued to surge, OpenAI’s servers crashed repeatedly; the traffic-monitoring tool also repeatedly failed. Even when the tool was online, its limited functionality left employees struggling to gain a detailed understanding of user behaviors.

Safety teams within the company pushed to slow things down. These teams worked to refine ChatGPT to refuse certain types of abusive requests and to respond to other queries with more appropriate answers. But they struggled to build features such as an automated function that would ban users who repeatedly abused ChatGPT. In contrast, the company’s product side wanted to build on the momentum and double down on commercialization. Hundreds more employees were hired to aggressively grow the company’s offerings. In February, OpenAI released a paid version of ChatGPT; in March, it quickly followed with an API tool, or application programming interface, that would help businesses integrate ChatGPT into their products. Two weeks later, it finally launched GPT-4.

The slew of new products made things worse, according to three employees who were at the company at that time. Functionality on the traffic-monitoring tool continued to lag severely, providing limited visibility into which products ChatGPT and GPT-4 were being integrated into via the new API tool, and what traffic was coming from each, which made understanding and stopping abuse even more difficult. At the same time, fraud began surging on the API platform as users created accounts at scale, allowing them to cash in on a $20 credit for the pay-as-you-go service that came with each new account. Stopping the fraud became a top priority to stem the loss of revenue and prevent users from evading abuse enforcement by spinning up new accounts: Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.


The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI. Over the past few years, the rapid progress of OpenAI’s large language models had made Sutskever more confident that AGI would arrive soon and thus more focused on preventing its possible dangers, according to Geoffrey Hinton, an AI pioneer who served as Sutskever’s doctoral adviser at the University of Toronto and has remained close with him over the years. (Sutskever did not respond to a request for comment.)

Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.

Meanwhile, the rest of the company kept pushing out new products. Shortly after the formation of the superalignment team, OpenAI released the powerful image generator DALL-E 3. Then, earlier this month, the company held its first “developer conference,” where Altman launched GPTs, custom versions of ChatGPT that can be built without coding. These releases once again came with major problems: OpenAI experienced a series of outages, including a massive one across ChatGPT and its APIs, according to company updates. Three days after the developer conference, Microsoft briefly restricted employee access to ChatGPT over security concerns, according to CNBC.

Through it all, Altman pressed onward. In the days before his firing, he was drumming up hype about OpenAI’s continued advances. The company had begun to work on GPT-5, he told the Financial Times, before alluding days later to something incredible in store at the APEC summit. “Just in the last couple of weeks, I have gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” he said. “Getting to do that is a professional honor of a lifetime.” According to reports, Altman was also looking to raise billions of dollars from SoftBank and Middle Eastern investors to build a chip company to compete with Nvidia and other semiconductor manufacturers, as well as to lower costs for OpenAI. In a year, Altman had helped transform OpenAI from a hybrid research company into a Silicon Valley tech company in full-growth mode.


In this context, it is easy to understand how tensions boiled over. OpenAI’s charter placed principle ahead of profit, shareholders, and any individual. The company was founded in part by the very contingent that Sutskever now represents—those fearful of AI’s potential, with beliefs at times seemingly rooted in the realm of science fiction—and that also makes up a portion of OpenAI’s current board. But Altman, too, positioned OpenAI’s commercial products and fundraising efforts as a means to the company’s ultimate goal. He told employees that the company’s models were still early enough in development that OpenAI ought to commercialize and generate enough revenue to ensure that it could spend without limits on alignment and safety concerns; ChatGPT is reportedly on pace to generate more than $1 billion a year.

Altman’s firing can be seen as a stunning experiment in OpenAI’s unusual structure. It’s possible this experiment is now unraveling the company as we’ve known it, and shaking up the direction of AI along with it. If Altman had returned to the company via pressure from investors and an outcry from current employees, the move would have been a massive consolidation of power. It would have suggested that, despite its charters and lofty credos, OpenAI was just a traditional tech company after all.

Even with Altman out, this tumultuous weekend showed just how few people have a say in the progression of what might be the most consequential technology of our age. AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers, and multibillion-dollar companies. The fate of OpenAI might hang in the balance, but the company’s conceit—the openness it is named after—showed its limits. The future, it seems, will be decided behind closed doors.


This article previously stated that GPT-4 can create images. It cannot.

Karen Hao is a contributing writer at The Atlantic.
Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.