Snapchat may have failed to properly assess privacy risks to children from its artificial intelligence chatbot, Britain’s data watchdog said on Friday, adding it would consider the company’s response before making any final enforcement decision.
The Information Commissioner’s Office (ICO) said if the U.S. company fails to adequately address the regulator’s concerns, “My AI”, launched in April, could be banned in the UK.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” Information Commissioner John Edwards said.
The findings do not necessarily mean the instant messaging app used largely by younger people has breached British data protection laws or that the ICO will end up issuing an enforcement notice, the regulator said.
Snapchat said it was reviewing the ICO’s notice and that it was committed to user privacy.
“My AI went through a robust legal and privacy review process before being made publicly available,” a Snap spokesperson said. “We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
The ICO is investigating how “My AI” processes the personal data of Snapchat’s roughly 21 million UK users, including children aged 13-17.
“My AI” is powered by OpenAI’s ChatGPT, the best-known example of generative AI, which policymakers around the world are seeking to regulate amid privacy and safety concerns.
Social media platforms, including Snapchat, require users to be 13 or over, but have had mixed success in keeping underage users off their platforms.
Reuters reported in August that the regulator was gathering information to establish whether Snapchat was doing enough to remove underage users from its platform.