OpenAI’s leadership crisis: A catalyst for a smarter AI strategy
So… what happened this weekend?
On Friday, OpenAI announced that Sam Altman, the darling of Silicon Valley AI, had been unceremoniously fired as CEO, with the board claiming he was “not consistently candid”. Right on cue, the Twittersphere exploded with conspiracy theories ranging from sexual misconduct to arguments over AGI. All of this happened with no warning to senior leadership, investors (including 49% owner Microsoft), or customers. Immediately after the firing, customers reported receiving emails informing them that their payment terms for OpenAI would change. At the time of writing (Sunday morning), it is not clear what happened or whether he will be asked back.
This isn’t an essay about what happened this weekend (I personally don’t really care); rather, it’s an opportunity to think about what we can learn from it. All we need to take from this weekend is that it was an unimaginable mess.
Why does what happened this weekend matter for enterprises?
This weekend was an abrupt wake-up call for everyone building with AI. It was a reminder of two things:
- The field is still incredibly early
- The current leaders of these companies don’t necessarily have goals aligned with those of the enterprises building on their technology
Most of the AI foundation model companies we know (e.g. OpenAI, Anthropic, Cohere, Stability) were, until a year ago (if they even existed at all), nothing more than well-funded research labs. This is not meant to diminish the phenomenal work they have done in that time (and it really is phenomenal), but rather to give context to the maturity of the field.
These groups were not created to help enterprises build applications with AI; most of them have the stated aim of creating AGI (Artificial General Intelligence), or of saving humanity from that very same AGI. These groups have pivoted to servicing enterprises over the last year (some would say pushed by investor pressure), and we should not forget that this is new for them. Helping enterprises create value and build amazing products with AI is fundamentally different from saving the world from AGI; it takes a large internal cultural shift for this to happen. Altman was working hard to create this shift (just look at the perceived success of Dev Day), but it was the tension around that very shift that led to his ousting from OpenAI.
Those in leadership positions at these organisations are the same people who were there at the founding, and their ultimate motivation is AGI, or protecting us from AGI. Hence, they make decisions in line with that mission, not in line with helping builders create value. Altman’s exit from OpenAI is a prime example.
These are not the same people that sold you cloud or databases
The companies selling you AI are NOT the same people that sold you database services, and they are not the same people that sold you cloud. OpenAI’s affiliation with Microsoft did not make OpenAI behave sensibly this weekend. These AI companies are largely led by academics and “philosophers”, not by business people, and they act like it. These companies will ‘grow up’; they have to.
But with growing up come growing pains. OpenAI has already shown multiple instances of these growing pains, including quietly changing model quality, a ChatGPT data breach, and numerous copyright disputes. The growing pains are nowhere close to being over, and Altman’s ousting is the biggest one yet.
So what does that mean for how I should build my AI strategy?
No business should let itself become collateral damage in these growing pains; what it is building is too important for that. So how should businesses build their AI strategy?
Diversify your model sources
The AI game is too early and too important to wed ourselves to a single vendor. As we saw this weekend (and with previous OpenAI outages), these companies are young and get things wrong. Just as we would never build software with a single point of failure (especially one we don’t control), we shouldn’t build our vital AI systems that way either. AutogenAI, for instance, has built its product this way: when OpenAI suffered its 90-minute outage, AutogenAI was able to seamlessly switch to another model API provider.
The AIOps/MLOps leaders in this space have already built their AI platforms with interoperability in mind, so changing models is as simple as changing the API calls. There are plenty of examples of this being done well: AWS with Bedrock, IBM with WatsonX, Dataiku with the LLM Mesh, and we at TitanML with the Titan Takeoff Inference Server for self-hosted models.
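To make this concrete, here is a minimal sketch of what provider-level failover can look like. Everything here is illustrative: the provider URLs, model names, and response schema are hypothetical placeholders, not any vendor’s real API.

```python
import requests

# Hypothetical, interchangeable provider configs. The URLs, model names,
# and response schema below are illustrative placeholders only.
PROVIDERS = [
    {"name": "primary", "url": "https://api.provider-a.example/v1/chat", "model": "model-a"},
    {"name": "fallback", "url": "https://api.provider-b.example/v1/chat", "model": "model-b"},
]

def complete(prompt: str, timeout: float = 10.0) -> str:
    """Try each provider in order, failing over on errors or timeouts."""
    errors = []
    for provider in PROVIDERS:
        try:
            resp = requests.post(
                provider["url"],
                json={"model": provider["model"], "prompt": prompt},
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()["text"]  # assumed response field
        except requests.RequestException as exc:
            errors.append(f"{provider['name']}: {exc}")
    raise RuntimeError("All model providers failed: " + "; ".join(errors))

if __name__ == "__main__":
    print(complete("Summarise our Q3 sales report in one paragraph."))
```

The point is structural: because every provider sits behind the same `complete` function, swapping or adding a vendor is a one-line config change rather than an application rewrite.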
Build with trusted partners while avoiding vendor lock-in
If you are going to build with API-based models instead of open-source / self-hosted models, then be sure to use them through a trusted third party rather than through the vendor’s API directly. When OpenAI went down, the equivalent Azure OpenAI endpoints stayed up, because Azure owns that infrastructure. Additionally, when it comes to privacy and data security, using these models through third parties provides guarantees that you don’t get when dealing with the AI creator directly.
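For example, with the openai Python SDK (v1.x) the same chat call can be pointed at an Azure OpenAI resource instead of OpenAI directly; the endpoint, key, API version, and deployment name below are placeholders for your own Azure resource.

```python
from openai import AzureOpenAI  # openai SDK v1.x

# Placeholders: point these at your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2023-07-01-preview",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the Azure *deployment* name, not a raw model id
    messages=[{"role": "user", "content": "Summarise this contract clause."}],
)
print(response.choices[0].message.content)
```

Because the request shape is identical to the standard OpenAI client, moving between the two is a construction-time change, not a rewrite of your application.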
Own your applications
AI is too important to the future of our businesses for us to hand control to third parties. Until recently, self-hosting LLMs was too operationally expensive and difficult to do reasonably at scale. This has changed. Open-source models are better than ever (and improving rapidly), making it possible to build high-quality applications on top of them. The infrastructure to host these models in a VPC or on-prem is a solved problem, so deploying them to production at scale is easier than ever. It used to be that self-hosting took months per project and required incredibly powerful GPUs; that is no longer the case with inference infrastructure solutions like the TitanML Takeoff Inference Server. Self-hosted open-source models are a good horse to bet on, especially when built in a way that is interoperable with the latest model advancements, and self-hosting is the best way to insulate yourself from the madness going on in Silicon Valley.
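As a sketch of how simple the calling side can be, here is a request against a self-hosted inference server running inside your own network. We assume a Takeoff-style HTTP generation endpoint on localhost; the port, path, and payload schema are assumptions to check against your server’s documentation.

```python
import requests

# Assumed: a self-hosted LLM inference server (e.g. the Titan Takeoff
# Inference Server) running in your VPC or on-prem. The port, path, and
# payload schema are illustrative; check your server's documentation.
TAKEOFF_URL = "http://localhost:3000/generate"

def generate(prompt: str) -> str:
    """Send a prompt to the self-hosted model and return the generated text."""
    resp = requests.post(TAKEOFF_URL, json={"text": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field

print(generate("Draft a short product announcement for our new feature."))
```

Nothing in this call leaves your network, and the endpoint can be pointed at a newer open-source model without touching application code.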
So…
This weekend’s events are a stark reminder of the fragility and unpredictability inherent in the rapidly evolving AI industry. Enterprises looking to build a robust AI strategy must be aware of this and build accordingly. If enterprises want to insulate themselves from the industry’s growing pains while reaping the benefits of AI, they must prioritise interoperability and build up the capability to fully own their AI applications.
Deploying Enterprise-Grade AI in Your Environment?
Unlock unparalleled performance, security, and customization with the TitanML Enterprise Stack