Why are AI’s top people leaving?

Steven Vaughan-Nichols

The people making high-profile exits from leading AI firms aren't leaving for bigger salaries or more equity. They're leaving because they believe these companies are putting profits ahead of ethics and user well-being.


Typically, when prominent people leave major Silicon Valley corporations, their public statements are bland, citing "new chapters" or "gratitude for the experience," perhaps with a vague allusion to what comes next. The recent resignations in the AI sector, however, read more like a series of warnings.

In recent weeks, numerous senior researchers and heads of safety from key AI players like OpenAI, Anthropic, and xAI have publicly stepped down, making their departures anything but discreet or conventional.

Consider OpenAI researcher Zoë Hitzig, for instance. Rather than quietly updating her LinkedIn, she announced her resignation in a New York Times guest essay titled "OpenAI Is Making the Mistakes Facebook Made. I Quit."

Who opts for such a high-profile resignation — especially in the Times?

Her frustration stemmed from OpenAI's decision to test ads inside ChatGPT. That is ironic, considering that in 2024 OpenAI CEO Sam Altman said, "I hate ads," arguing that "ads plus AI" is "uniquely unsettling" because users are left to figure out who is influencing the answers they receive. Yet with OpenAI's own financial projections anticipating a $14 billion loss in 2026 alone, Altman apparently got over his reservations.

Hitzig, however, disagreed. She asserted, “Users confide in chatbots about their health anxieties, relationship struggles, and spiritual convictions. Advertisements leveraging this data trove introduce a risk of user manipulation that we lack the means to comprehend, much less counteract.” (Her observation is, undeniably, accurate.)

Frankly, such a perspective strikes me as naive. Facebook didn’t err; it amassed billions by leveraging user data shared among family and friends online. A fundamental principle of internet business since the late 2000s has been: “If you’re not paying for the service, you are the product.”

Indeed, combining personal revelations with AI-driven advertising is a disturbing prospect. Equally concerning is how platforms like Facebook and X monetize engagement, fueling outrage and building detailed behavioral profiles. Yet there is no effective restraint on these practices. In 2016, for instance, Cambridge Analytica notoriously harvested Facebook user data, which let the Trump campaign tailor advertisements with near-individual precision and contributed to Donald Trump's victory.

That episode ultimately cost Facebook roughly $6 billion in penalties and legal costs. The figure sounds substantial, but it pales next to parent company Meta's 2025 GAAP revenue of more than $200 billion, nearly all of it from advertising.

My intuition suggests Altman will readily overcome any moral discomfort regarding such lucrative revenue streams.

Meanwhile, Mrinank Sharma, the outgoing head of Anthropic's Safeguards research team, was notably more outspoken. In a resignation letter posted on X, he declared that "the world is in peril." In language that was polite but pointed, he described how hard it is for a company to live up to its stated values when financial incentives, market demands, and internal recognition all push for shipping more advanced models faster. It's another case of ethics losing out to the pursuit of profit.

It's worth noting that Anthropic built its brand specifically around "constitutional AI" and a commitment to cautious deployment. If even its senior safety people feel they cannot put ethical principles ahead of revenue, that's a serious warning sign.

If this were just a couple of principled researchers walking out, it might be dismissed as personal grievances or internal power plays. It's not. OpenAI recently dissolved its "mission alignment" team, whose mandate was to ensure AI safety. (Recall that OpenAI originated as a nonprofit, sustained by donations and pledges rather than equity, with a stated goal of ensuring artificial general intelligence (AGI) would benefit "all of humanity.")

Currently, both OpenAI and Anthropic are gearing up for initial public offerings (IPOs), aiming to secure billions for their founders and prospective shareholders. Since ChatGPT's debut on GPT-3.5 in late 2022, the stock market, as keen observers know, has been carried by the AI-powered "Magnificent Seven," which collectively boast a market capitalization of $20.2 trillion. Is it any surprise, then, that concerns about an impending AI bubble burst are escalating?

Other AI leaders are heading for the exits as well. At Elon Musk's xAI, recently folded into SpaceX in an all-stock transaction, co-founders Tony Wu and Jimmy Ba departed, even as Musk attributed the moves to "reorgs" and to there being "some people who are better suited for the early stages of a company and less suited for the later stages." Right, Elon, of course.

Concurrently, VERSES AI has seen its founders and CEO step down, with an interim leader appointed by the board to pursue a more aggressive commercial strategy. Even Apple is experiencing a “brain drain” in AI, with Senior Vice President John Giannandrea and Siri head Robby Walker having moved to Meta.

While each situation is unique, a common theme emerges: the AI professionals focused on "what to build and how to ensure its safety" are exiting. Their roles are being filled by people whose primary, if not sole, concern is "how fast can we turn this into a lucrative business?" And mere profitability isn't enough; even a "unicorn" valuation of $1 billion falls short. Unless a business achieves "decacorn" status (a privately held startup valued at more than $10 billion), it holds little interest.

It is quite revealing that Peter Steinberger, the developer of the wildly — and alarmingly — popular OpenClaw AI bot, has already been recruited by OpenAI. Altman praises him as a “genius” and anticipates his concepts will “quickly become core to our product offerings.”

In reality, OpenClaw is a security disaster waiting to happen. Before long, unsuspecting individuals and companies are likely to pay a steep price for entrusting sensitive data to it. And this is the inventor Altman wants at the core of OpenAI's operations?!

Gartner ought to revise its hype cycle. For AI, we’ve surpassed the “Peak of Inflated Expectations” and are now accelerating towards the “Pinnacle of Hysterical Financial Fantasies.”

Those departing before the impending chaos? They are the prudent ones.
