Data leak

Hundreds of thousands of private Grok chats can be found on Google

Image: Grok (Source: Mamun_Sheikh/Shutterstock.com)

Over 370,000 conversations with Elon Musk’s AI chatbot have inadvertently become accessible via search engines. Users unknowingly shared sensitive data – from passwords to illegal instructions.

Anyone who thought their conversations with Elon Musk’s AI chatbot Grok were private may have been mistaken. As a Forbes investigation reveals, more than 370,000 user conversations with the xAI chatbot can be found publicly via Google and other search engines. The leak traces back to the way Grok’s “Share” function works.


A share button with devastating consequences

The problem lies in Grok’s “Share” feature: when a user clicks the button to share a conversation, the application creates a unique URL. These links are not only accessible to the intended recipients but are also automatically indexed by search engines such as Google, Bing and DuckDuckGo. Users were not informed that their conversations could become public as a result.
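Whether such a shared page ends up in search results depends largely on whether it tells crawlers not to index it, for example via an X-Robots-Tag header or a robots meta tag. The following is a minimal sketch of such a check, assuming a hypothetical share URL and the standard noindex conventions; it is not xAI’s actual code.

```python
# Minimal sketch (hypothetical URL, not a real Grok share link):
# check whether a shared page asks search engines to skip it.
import urllib.request

SHARE_URL = "https://example.com/share/abc123"  # placeholder for illustration

def is_indexable(url: str) -> bool:
    """Return True if neither the X-Robots-Tag header nor a robots meta tag
    contains a 'noindex' directive."""
    req = urllib.request.Request(url, headers={"User-Agent": "index-check/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        header = resp.headers.get("X-Robots-Tag", "").lower()
        body = resp.read(65536).decode("utf-8", errors="ignore").lower()
    if "noindex" in header:
        return False
    if '<meta name="robots"' in body and "noindex" in body:
        return False
    return True

if __name__ == "__main__":
    print("indexable" if is_indexable(SHARE_URL) else "blocked from indexing")
```

A page that returns neither signal is fair game for crawlers, which is apparently what happened with Grok’s shared conversations.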

The affected conversations cover a broad spectrum, from harmless everyday tasks such as drafting tweets to deeply problematic content. According to the Forbes report, they include conversations containing personal details, names and at least one password. Uploaded image files, spreadsheets and text documents also became publicly accessible.

Violations of xAI’s own guidelines

Some of the indexed conversations violated xAI’s own rules of use. The chatbot provided instructions on how to make illegal drugs such as fentanyl and methamphetamine, gave tips on how to make bombs and even listed suicide methods. In one case, Grok drew up a detailed plan for an assassination attempt on its own creator, Elon Musk.


A repeat of a familiar problem

As recently as July, similar problems with OpenAI’s ChatGPT made headlines when private chats became discoverable via Google. OpenAI responded quickly, calling the feature a “short-lived experiment” that created too many opportunities for “accidental oversharing”.

The problem is not just of a technical nature. Many users treat AI chatbots like personal confidants and share intimate details about health, finances or private problems. Once online, this information is difficult to remove completely.

xAI has not yet issued an official statement on the incident. It also remains unclear when exactly Grok introduced the problematic sharing function. Some users affected by the leak only found out about the public availability of their chats through the Forbes report.

Lars Becker, Editor, IT Verlag GmbH

