External entries were visible

Meta AI leak granted access to AI prompts and answers from other users

Meta AI (Image source: QubixStudio / Shutterstock.com)

A serious vulnerability in Meta’s AI chatbot temporarily allowed users to view other people’s entries and responses – a problem that has now been fixed.

The vulnerability was discovered by Sandeep Hodkasia, founder of the security company AppSecure, which specializes in vulnerability testing.


Hodkasia discovered the vulnerability at the end of December 2024 while analyzing how Meta AI lets logged-in users edit earlier prompts and regenerate texts or images. He noticed that, behind the scenes, Meta assigns a unique number to each prompt and its corresponding AI response.

By examining the network traffic in his browser, Hodkasia found that manually changing this number caused the server to return prompts and responses belonging to other users – without any access check. Because the numbers were assigned sequentially, valid IDs were easy to guess.
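This class of flaw is known as an insecure direct object reference (IDOR): the server trusts a client-supplied ID without verifying that the requester owns the record. A minimal sketch of the missing ownership check, using a hypothetical in-memory store and made-up data (not Meta's actual implementation):

```python
# Hypothetical prompt store keyed by sequential IDs, as in the flaw described:
# guessable IDs are harmless only if access is checked server-side.
prompts = {
    1001: {"owner": "alice", "prompt": "Draft an email", "response": "..."},
    1002: {"owner": "bob", "prompt": "Plan a trip", "response": "..."},
}

def get_prompt(prompt_id: int, requesting_user: str):
    """Return a prompt record only if the requesting user owns it."""
    record = prompts.get(prompt_id)
    if record is None:
        return None  # unknown ID
    if record["owner"] != requesting_user:
        # Without this check, any logged-in user could enumerate
        # sequential IDs and read other users' prompts and responses.
        raise PermissionError("not authorized to view this prompt")
    return record
```

Using non-sequential, random identifiers (e.g. UUIDs) makes enumeration harder, but only the server-side authorization check actually closes the hole.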

Meta responded with a bug fix and a reward

On December 26, 2024, Hodkasia reported the discovery to Meta. Around a month later – on January 24, 2025 – the problem was officially fixed. For his report, the security researcher received a reward of 10,000 US dollars as part of Meta’s bug bounty program.


According to a spokesperson for the company, there is no evidence to date that the vulnerability has been actively exploited. Nevertheless, the incident underlines the importance of consistently securing AI systems – especially for services that process sensitive data.

Data protection with AI remains a challenge

The discovered flaw shows that even large technology companies such as Meta can miss basic security checks, particularly in complex new systems such as AI chatbots. Yet trust in the protection of privacy is a basic prerequisite for the adoption of such technologies.

The incident is one in a series of data protection problems at Meta. Only recently it became known that the user interface of the Meta AI app could inadvertently cause private prompts to be shared publicly; the app now displays a warning to point this out.

The revelation by Hodkasia is a warning signal for the entire tech world: the security of AI applications must be treated with the same care as that of traditional IT systems. The privacy of users must not fall victim to the pressure to innovate.

