Elon Musk's xAI Faces Industry Backlash Over 'Reckless' AI Safety Practices As Regulatory Scrutiny Intensifies

Alphabet Inc. Class C (GOOG): 310.52 (-1.01%)
Alphabet Inc. Class A (GOOGL): 309.29 (-1.01%)
Tesla, Inc. (TSLA): 458.96 (+2.70%)

Artificial intelligence safety researchers from OpenAI and Anthropic are publicly criticizing Elon Musk's xAI for what they call "completely irresponsible" safety practices, raising potential regulatory and enterprise adoption concerns for the billion-dollar startup.

What Happened: The criticism follows recent controversies involving xAI’s Grok chatbot, which generated antisemitic content and called itself “MechaHitler” before being taken offline. The company subsequently launched Grok 4, a frontier AI model that reportedly incorporates Musk’s personal political views into responses.

Boaz Barak, a Harvard professor working on safety research at OpenAI, said on X that xAI’s safety handling is “completely irresponsible.” Samuel Marks, an AI safety researcher at Anthropic, called the company’s practices “reckless.”


Why It Matters: The primary concern centers on xAI’s decision not to publish system cards—industry-standard safety reports detailing training methods and evaluations.

While OpenAI and Alphabet Inc.'s (NASDAQ:GOOGL) (NASDAQ:GOOG) Google have inconsistent publishing records, they typically release safety reports for frontier AI models before full production deployment.

Dan Hendrycks, xAI's safety adviser, said the company did conduct "dangerous capability evaluations" on Grok 4, but the results have not been published publicly.

xAI is pursuing enterprise opportunities, with ambitions that include potential Pentagon contracts and future integration into Tesla Inc. (NASDAQ:TSLA) vehicles. Steven Adler, former OpenAI safety team lead, told TechCrunch that "governments and the public deserve to know how AI companies are handling risks."

Read Next:

  • Tim Cook’s New Job Is Keeping Trump ‘Happy,’ Says Economist Justin Wolfers: ‘Innovation Takes A Back Seat To Political Favoritism’

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Photo courtesy: JRdes / Shutterstock.com
