Sam Altman, president of Y Combinator and co-chair of OpenAI, seen here in July 2016. Credit: Drew Angerer / Getty Images News
When Sam Altman was suddenly removed as OpenAI's CEO, before being reinstated days later, the company's board publicly justified the move by saying Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." Since then, there have been some reports about possible reasons for the board's attempted ouster, but little additional information on what, specifically, Altman was allegedly less than candid about.
Now, in a lengthy piece for The New Yorker, writer Charles Duhigg, who was embedded with OpenAI for months for a separate story, suggests that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to manipulate the board into firing fellow board member Helen Toner.
Board manipulation or clumsy maneuvering?
Toner, who serves as director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, reportedly drew Altman's ire by co-writing a paper on how AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."
She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."
Although Toner reportedly apologized to the board for the paper, Duhigg writes that Altman nonetheless began approaching individual board members to push for her removal. In those conversations, Duhigg says, Altman "misrepresented" how other board members felt about the proposed removal, "play[ing] them off against each other by lying about what other people thought," according to a source "familiar with the board's discussions." A separate "person familiar with Altman's perspective" instead suggested that Altman's moves were just a "clumsy" attempt to have Toner removed, not manipulation.
That account would be consistent with OpenAI COO Brad Lightcap's statement shortly after the firing that the decision "was not made in response to any misconduct or anything related to our financial, business, safety, or security/privacy practices... This was a breakdown in communication between Sam and the board." It may also explain why the board was unwilling to go into public detail about arcane discussions of board politics for which there was little hard evidence.
At the same time, Duhigg's article also lends some credence to the idea that the OpenAI board felt it needed to be able to hold Altman "accountable" in order to fulfill its mission to "make sure AI benefits all of humanity," as one unnamed source put it. If that was the goal, it appears to have completely backfired, leaving Altman now about as close to untouchable as a Silicon Valley CEO can get.
“It’s hard to say whether board members were more afraid of sentient computers or of Altman going rogue,” Duhigg writes.
The full New Yorker article is worth reading for more on the history of Microsoft's involvement in OpenAI and the development of ChatGPT, as well as Microsoft's own Copilot systems. The piece also offers a behind-the-scenes look at Microsoft's three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board's moves "mind-bogglingly stupid."