Concerns Over Children's Privacy in the UK: Snap's AI Chatbot Under Examination

Snapchat's AI chatbot has come under scrutiny from the UK's data protection authority, the Information Commissioner's Office (ICO), which has raised concerns about potential risks to children's privacy. The ICO has issued a preliminary enforcement notice to Snap, stating that the company may not have adequately assessed the privacy risks posed by its generative AI chatbot, 'My AI'. While the notice is not a finding of breach, it signals the regulator's concerns about Snap's compliance with data protection rules, particularly the Children's Code (the Age Appropriate Design Code), which came into force in 2021.

The ICO's investigation found that the risk assessment Snap conducted before launching 'My AI' did not adequately evaluate the data protection risks posed by the generative AI technology, particularly for children aged 13 to 17. Snap now has the opportunity to respond to these concerns before the ICO makes a final decision on whether the company has breached the rules.

Snap introduced the chatbot, which is powered by OpenAI's ChatGPT technology, in February 2023. Initially available only to Snapchat+ subscribers, it was later rolled out to free users as well, with the AI able to send snaps back to users who interacted with it.

Snap says 'My AI' includes additional moderation and safeguarding features, such as age awareness and content filters intended to keep responses appropriate for users. However, there have been reports of the chatbot making inappropriate recommendations, and some users have voiced frustration at having AI inserted into their feeds.

The ICO's action reflects a broader trend of European privacy regulators scrutinizing AI chatbots. Italy's privacy authority ordered Replika and OpenAI's ChatGPT to stop processing local users' data over concerns about risks to minors, Google's Bard chatbot saw its EU launch delayed over privacy concerns, and Poland's data protection authority is investigating a complaint against ChatGPT.

Privacy and data protection regulators are increasingly focused on generative AI, emphasizing privacy-by-design principles and calling on developers to carry out thorough privacy impact assessments. The ICO has published guidance for developers working with generative AI, and regulators are taking a more public stance to push companies to prioritize data protection when deploying AI technologies.

While regulatory action remains cautious at this stage, largely limited to preliminary notices and warnings, it underscores the growing importance of data protection in the development and deployment of AI chatbots and similar technologies.