- How to look at what happened with OpenAI, and what questions to answer next if you lead a company.
- What’s AI Doing for Companies Like Mine?
- Learn what Koch Engineered Solutions, Ednovate, and the Israel Defense Forces (IDF) are doing with AI.
- A Chart to Look Smart
- OpenAI GPT-4-Turbo vs Anthropic Claude 2.1. Which one is more reliable when analyzing long documents?
Today, by popular demand, we talk about the OpenAI drama that unfolded over the last two weeks. There’s no value in offering you the 700th summary of everything that happened. In fact, if you don’t have all the details of what happened, I suggest you stop reading this newsletter, go check one of those summaries, and then come back here for what follows.
Where there’s more value, in my opinion, is in offering you a different perspective to use to look at what happened.
The most important thing to say is this:
Nothing, absolutely nothing, of what has been said matters. Except for one thing. Just one thing. One on which public attention has not focused at all. The thing on which everything else depends.
And the thing that almost no one has focused on, to my great surprise, is this:
What did OpenAI’s board see to justify the abrupt dismissal of its CEO, Sam Altman?
What is it that the board members saw to justify the decision to inform the CEO of Microsoft only 60 seconds before releasing a press release?
The public immediately labeled OpenAI’s board as a bunch of incompetents, and so this crucial question was first sidelined, and then forgotten. And this is a very serious mistake in analyzing a company’s strategy.
I repeat the question:
What did the OpenAI board see that justified abruptly dismissing the CEO and then, despite everything, categorically refusing to explain the reasons for the decision to the company’s employees, to Microsoft, to partners, and to customers?
The board members had a balance scale in front of them. Here’s what was on one plate of that scale.
First element on the plate of the balance scale in front of the board
The long-planned, imminent secondary sale of shares held by OpenAI employees, at a valuation estimated at $86 billion, is almost certainly canceled.
It’s not so much the absolute value that the board risked destroying, but the fact that many OpenAI employees have stayed for years partly in the hope of getting rich and building a better life. With that possibility eliminated, they would have had far less motivation to stay, and they would have harbored boundless resentment towards the board and any executive who facilitated the decision. In this case, that executive is OpenAI’s Chief Scientist.
So, don’t focus too much on a situation where $86 billion went up in smoke instead of into the pockets of the employees. Rather, focus on a situation where all the employees of an R&D company are furious with the head of research for denying them the possibility of becoming rich, or at least well-off. What chance does an R&D company have of surviving in such a climate?
Second element on the plate of the balance scale in front of the board
Microsoft, which has committed to investing $10 billion in OpenAI, suddenly finds itself with a company stripped of its charismatic leader, in revolt against its head of research, against the board and, most certainly, against any new CEO the board decides to hire. And in all this, Microsoft, which didn’t even get the chance to say a word, still has to pay most of the investment into OpenAI’s coffers.
So, don’t focus too much on a situation where the primary investor of an R&D company made a fool of themselves and is left with the ghost of the company they invested in. Rather, focus on a situation where the primary investor has a thousand ways to delay the release of the promised funds, or perhaps even to cancel the agreement legally, reducing to zero the company’s chance of obtaining the enormous computational resources needed to develop new AI models, and effectively putting the company’s future in hibernation until the investment made up to that point is appropriately returned.
Third element on the plate of the balance scale in front of the board
Altman, just fired, could create a new company in a single weekend and, within 9-12 months, release AI models competitive with OpenAI’s, while OpenAI slows down or completely stops the development of new technology.
So, don’t focus too much on a situation where the powerful and influential Altman sets up a formidable competitor in less than a year. Rather, focus on a situation where the vast majority of OpenAI employees, who hate the head of research, the board, and the future CEO, leave the company en masse to join their former boss, having no remaining economic incentive to stay and no legal constraints under California law.
Fourth element on the plate of the balance scale in front of the board
The hundreds of enterprise clients that have signed multi-year binding agreements with OpenAI for the provision of increasingly competitive AI models find themselves trapped in a partnership with a company that will inevitably be drained of the talent needed for research, and of the financial and computational resources to conduct it.
So, don’t focus too much on a situation where a top-tier enterprise organization, like Citadel in the Financial Services industry, or Allen & Overy in the Legal industry, suddenly finds itself paying for technology that potentially will never evolve beyond what GPT-4-Turbo does today. Rather, focus on a situation where that same organization, trapped in the partnership with OpenAI, sees the new startup created by Altman producing new, more competitive and high-performing AI models adopted en masse by its competitors.
Fifth element on the plate of the balance scale in front of the board
The thousands of startups around the world that have built, or are building, their products on OpenAI technology suddenly find themselves with a technological foundation that is all but doomed to become obsolete.
So, don’t focus too much on a situation where this ocean of startups abandons OpenAI in droves because it is forced to scale down its expectations in terms of functionalities, accuracy, and release speed. Rather, focus on a situation where this ocean of startups, by abandoning OpenAI, effectively disintegrates the ecosystem that makes OpenAI’s artificial intelligence omnipresent, embedded in a myriad of different software available to clients around the world.
I could go on.
There are other elements on that balance scale, but these five are the most important. And each one is very, very heavy.
On the other plate of the balance scale is what the board saw before firing Sam Altman.
What is on that plate?
This is the only question that matters.
The OpenAI board had the elements on the first plate of the scale very clearly in front of them. And yet, despite the magnitude of the destruction those elements would cause, they still chose to go ahead with the dismissal of the CEO.
What is on the other plate that is riskier than the five elements (and many others) we have discussed so far?
To this question, public opinion responded in a second, without thinking too much:
“There is nothing on the other plate. Simply, the board is composed of a bunch of incompetent people with little experience who acted impulsively.”
This is a rather arrogant and presumptuous stance that does not take into account at least four critical considerations.
First, OpenAI’s board was chosen by Altman. The public, therefore, must decide whether Altman is a genius or not. It’s too easy to selectively decide that Altman is a genius when it comes to choosing and convincing the best people in the world to develop the greatest invention in human history, but inept when it comes to choosing the board members who oversee that invention. It’s possible, but improbable compared to the alternatives.
Second, the public treated the board’s actions as a series of impulsive acts. But if you have worked as an executive in a large company, as I have for over a decade, you know that there is nothing impulsive about the release of a press announcement. On the contrary, the process is laborious, involves multiple people, and requires careful planning. The announcement of Altman’s dismissal, released 60 seconds after informing the CEO of Microsoft, does not sound at all like an impulsive act, but rather like a premeditated action to leave no room for maneuver to a powerful and influential investor like Microsoft.
Third, if the board had simply been incompetent, it would not have categorically refused to disclose the reasons for its decision even after all the employees demanded explanations in the all-hands meeting that followed Altman’s departure. The board members would not have remained absolutely silent, leaving nothing in writing, even after the entire IT industry loudly called for them to step down. Incompetents can make clumsy choices, but they don’t protect the reasons for a drastic choice at all costs, knowing full well that it will cost them their careers forever.
Fourth, board members are usually seasoned experts from industry or academia, but they are often called to decide collectively on something that goes well beyond their domain of expertise. To do so, legally bound by their mandate, they consult subject matter experts and perform due diligence to the best of their abilities. Whatever was on the other plate of the balance scale was presented to the board by subject matter experts, who expressed their opinions and answered, more or less honestly, the board’s questions.
When the board fired Altman, those opinions and answers were most certainly taken into consideration.
With these four considerations in mind (but even without them), and knowing full well that the second plate of the balance scale holds an unknown element, it’s rather superficial to accuse the board of incompetence.
Quite the opposite: this asymmetry of information between the board and the public is precisely the clue that should have prompted a more cautious and in-depth scrutiny of the events. And yet it was not so.
At the very least, this asymmetry of information should have led us to contemplate alternative realities in which the board was not incompetent, but ready for an extreme sacrifice (whether justified or not is an entirely different matter).
If public attention had contemplated alternative realities, it would have paid more attention to the cryptic comments made by Altman during a series of public events days before being fired.
> “Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.”
If public attention had contemplated alternative realities, it would have put in a very different context Altman’s response to the last question he received in a video published just two days before he was fired.
What is the thing that the board saw, hidden behind the veil of ignorance, that led its members to sacrifice the entire company and their professional careers?
Reuters is one of the very few that contemplated alternative realities, leading them to report about Q*.
But the report of a new technical breakthrough is not enough to justify the board’s decision.
To justify the board’s decision, such a report must come with a demonstration, or an impossible-to-ignore opinion issued by subject matter experts, that the technical breakthrough has very tangible, planetary-scale implications in the real world.
So, what exactly is on the other plate of the balance scale that is enabled by Q* and justified the board’s decision?
What does Q* do to scare the chief scientist of OpenAI to the point that he backed the board’s decision to fire Altman?
As you seek the answer to the only question that matters, as a side quest, also ask yourself why OpenAI has registered the trademarks for GPT-5, GPT-6, and GPT-7, but not beyond.
Assuming that each new model is released every two years, what does OpenAI estimate will happen by 2030? Why not also register GPT-8 and beyond?
Is there a 5-year time bomb ticking over the head of every private and public company in the world, before the economy we know today is completely disrupted?
And if so, no matter how slim the probability, what are you doing to prepare your company for that eventuality?
Let’s start asking ourselves more difficult questions.