Snapchat’s AI chatbot might face closure in the UK due to concerns raised by the privacy regulator regarding its potential risks to children.
The UK’s Information Commissioner’s Office (ICO) pointed out deficiencies in the data protection evaluation Snap conducted before introducing the My AI chatbot.
If these initial findings are upheld, Snap might have to discontinue the chatbot service in the UK, where it boasts 22 million users.
This represents the UK’s most notable regulatory action to date involving large language models, the technology behind AI platforms such as ChatGPT and Google Bard.
John Edwards, the Information Commissioner, cautioned AI firms against haste in product launches, emphasizing the need to ensure safety standards. He likened the rush to market to the dangers of a “Wild West” and stressed the importance of compliance with existing regulations.
Edwards stated, “The rapid push to release products in the competitive tech market can lead to oversights. We are closely monitoring these advancements and will intervene if these technologies are launched prematurely. The message is clear: the tech industry isn’t an unregulated frontier. Existing data protection laws are very much applicable to these innovations.”
Snapchat’s “My AI” bot, which incorporates ChatGPT technology with added safety measures for minors, has come under scrutiny.
Critics argue that the bot misinforms users about its location data collection practices and has suggested potentially harmful diets. Additionally, unlike ChatGPT, it embeds advertisements within chats.
The ICO’s preliminary investigation indicated that the risk assessment Snap conducted before launching ‘My AI’ fell short in evaluating the data protection risks posed by the AI technology, particularly to children. The regulator emphasized the importance of such assessments whenever innovative technology processes the personal data of teenagers aged 13 to 17.
In response, Snap expressed its commitment to collaborating with the ICO. “We’re thoroughly examining the ICO’s preliminary findings,” they mentioned. “Our priority, like the ICO’s, is user privacy. ‘My AI’ underwent a comprehensive legal and privacy evaluation before its public release. We will engage actively with the ICO to ensure our risk evaluation measures meet their standards.”
The ICO has issued several advisories regarding the use of generative AI platforms like ChatGPT. One such advisory highlighted potential data protection law breaches when office employees utilize this AI for drafting emails that include personal data.
Furthermore, AI businesses have been cautioned that they could face penalties for continuing to harvest individuals’ private information without authorization.