Litigation Risk at xAI, OpenAI, Grammarly and Character.ai Is Becoming a Private Markets Problem
Private markets helped accelerate the artificial intelligence (AI) boom by pouring capital into generative AI companies such as OpenAI, xAI, Grammarly and Character.ai under the assumption that scale, speed and product adoption would be the dominant risks.
But that view is starting to shift.
A growing wave of lawsuits tied to copyright claims, training data practices, misinformation and user safety is creating a new layer of uncertainty for investors backing the AI sector. What was once viewed as a manageable legal overhang is increasingly becoming a material financial risk that could ripple through private markets.
The growing legal risks come as investors are already struggling to assess how quickly AI is reshaping the broader technology landscape.
Axios recently reported that private equity firms are struggling to assess the long-term impact of AI on investments, particularly in enterprise software, as rapid advances from tools like ChatGPT, Claude and Gemini make it harder to predict company performance and future valuations. One investor described forecasting exits in the current environment as "throwing at a dartboard blindfolded."
While firms still have significant amounts of dry powder to deploy, the pace of AI development is colliding with the long holding periods typical in private markets, creating a mismatch between investment timelines and technological change.
Fortune recently wrote that the rush into AI investing has, in some cases, outpaced investors' ability to fully diligence both technical claims and legal risks.
"In such conditions, the pressure to deploy capital and maintain relevance with limited partners can create incentives to accept ambitious technological narratives with less rigorous diligence than would normally be applied. Without careful scrutiny, investors risk paying premium valuations for technological capabilities that are still experimental, limited in scope, or economically immaterial," the article continued.
At the same time, legal challenges surrounding AI companies are accelerating.
Here are just some of the AI-related lawsuits that have been reported on in recent months.
- Pennsylvania Gov. Josh Shapiro's administration filed a lawsuit against Character.ai after its AI chatbot allegedly presented itself as a licensed psychiatrist in Pennsylvania.
- Grammarly's artificial intelligence tool Expert Review is under fire after writing experts claimed they did not give the company permission to use their names or provide expert feedback on their behalf. In a class action lawsuit filed in the U.S. District Court for the Southern District of New York, Julia Angwin, a contributing opinion editor at The New York Times, alleges that the Expert Review tool used her name and others' without prior consent.
- Anthropic, the AI company behind the Claude chatbot, is facing a lawsuit from music rights management company BMG.
- Families of victims involved in a mass shooting on Feb. 10 in Tumbler Ridge, British Columbia, sued Sam Altman and OpenAI in San Francisco federal court.
- Three Tennessee teenagers filed a federal class-action lawsuit against Elon Musk’s xAI, claiming its AI chatbot Grok created and spread sexualized images of them without consent.
For private market investors, the concern is that AI litigation may eventually do more than generate headlines. Large legal settlements, regulatory action or tighter rules around training data and platform safety could materially impact company valuations, fundraising prospects and eventual exits.
