Grammarly Faces Lawsuit Over Its Alleged AI Use Of Expert Commentary

Grammarly's artificial intelligence tool Expert Review is under fire after writing experts claimed they never gave the company permission to use their names or to provide expert feedback on their behalf.

In a class action lawsuit filed in the U.S. District Court for the Southern District of New York, Julia Angwin — a contributing opinion editor at The New York Times — alleges that the Expert Review tool used her name and others' without prior consent.

"Grammarly structured the Expert Review tool so that its customers would believe that the "experts" like Ms. Angwin were providing their perspective, insight, feedback, and comments on the users' writing or, at a minimum, that the experts were associated with feedback and comments being provided," the lawsuit stated.

The lawsuit also states that Angwin and other journalists, authors and editors have "suffered economic injury" because they were not compensated for the use of their names and identities.

Angwin was "shocked and horrified" to learn that Grammarly had appropriated her name and identity to give feedback to users without her input. The lawsuit states Angwin had no control over the quality of the work being provided by the AI bot.

The plaintiff and class members are seeking a declaratory judgment that the defendant violated their legal rights and an injunction to stop further violations of California and New York law. They also request class action certification, damages (including statutory and nominal), and attorneys' fees. Additionally, they seek interest and any other relief the court considers appropriate. The plaintiff is also demanding a jury trial.

Following the lawsuit, Grammarly suspended the use of its Expert Review tool.

Grammarly did not respond to a request for comment.

The Growing Trend Of AI-Led Legal Battles

This is not the first case of an artificial intelligence company being accused of overstepping legal boundaries.

Anthropic, the AI company behind the Claude chatbot, is facing a lawsuit from music rights management company BMG. According to a Rolling Stone report, BMG alleges that Anthropic used lyrics from major artists to train its chatbot without obtaining proper authorization.

In a more serious case, three Tennessee teenagers filed a federal class-action lawsuit against Elon Musk's xAI, claiming its AI chatbot Grok created and spread sexualized images of them without consent, Reuters reported. 

The images allegedly circulated widely on platforms like Discord and Telegram, causing emotional distress and reputational damage. The lawsuit accuses xAI of ignoring basic safety measures, allowing harmful content to be generated and monetized.

Photo: Shutterstock
