‘More harm than good’: Why hundreds of researchers want a pause on online age verification
An open letter signed by researchers worldwide calls for a moratorium on age-assurance tools until there is clear scientific consensus on their benefits and harms.
As several countries consider measures such as social media bans to address concerns about addiction, especially among teen users, a group of academics has warned that these efforts pave the way for mandatory age-assurance systems online, which, in turn, pose significant risks to user privacy and security.
“We believe that it is dangerous and socially unacceptable to introduce a large-scale access control mechanism without a clear understanding of the implications that different design decisions can have on security, privacy, equality, and ultimately on the freedom of decision and autonomy of individuals and nations,” read an open letter signed by more than 370 security and privacy academics across 29 countries.
The signatories have called for a moratorium on the roll-out of age verification tools and age estimation features on platforms “until the scientific consensus settles on the benefits and harms that age-assurance technologies can bring, and on the technical feasibility of such a deployment,” as per the letter dated Monday, March 2.
Signatories include Ronald Rivest, a winner of the Turing Award, computing’s most prestigious prize, and Bart Preneel, president of the International Association for Cryptologic Research.
The warning comes amid heightened concerns over the impact that social media can have on children’s mental health. Last year, Australia became the first country to ban those under the age of 16 from social media platforms such as YouTube, Instagram, Facebook and Snapchat. Since then, several other governments have signalled that they are considering similar restrictions. France is looking to ban children under 15 from social media platforms as soon as September this year, while Germany, Denmark, and Spain are also accelerating efforts.
During his keynote address at the India-AI Impact Summit 2026, French President Emmanuel Macron had also called on India to consider banning social media for children. The Economic Survey 2026-27 further urged the Indian government to implement age-based limits for social media usage for children and digital ads targeted at them. The state governments of Andhra Pradesh and Goa are reportedly eyeing a similar social media ban for children.
Additionally, tech companies such as OpenAI and Roblox have implemented age checks on their platforms in anticipation of regulatory action. After facing backlash, Discord delayed its initial plans to make age verification mandatory for all users globally.
However, several countries are yet to decide how such bans would be implemented or enforced. “We share the concerns about the negative effects that exposure to harmful content online has on children,” the academics wrote. But current plans “would require all users — minors and adults — to prove their age to converse with friends and family, read news, or search for information; well beyond what has ever happened in our offline lives,” they added.
The letter states that age-assurance tech includes age verification systems, where users present proof of age issued by a trusted party, such as a government ID, and age estimation tools, where users’ age is estimated from online behaviour, browsing history, biometric data, face scans, video uploads, and more.
It identifies the following harms arising from the spread of age-assurance checks online:
– Diminishes online privacy: “The mandate to implement age assurance justifies new forms of data collection by online services, especially for age estimation and age inference. This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord.”
– Rise in inequality and discrimination: “Safeguarding privacy requires the use of certified age attributes, which requires users to have such a certification, a compatible device, and digital skills to prove their age. These requirements are not met by a significant portion of the population, such as the elderly, non-EU citizens (if age verification is based on upcoming EUDI), anybody who doesn’t hold a national digital ID credential, or simply those who do not want to own a smartphone (especially one supported by the verification system).”
– Age checks are easy to bypass: “Age-assurance checks are easy to bypass, as evidenced by current deployments being circumvented using VPNs, bought or borrowed credentials, or props or AI-based tools (e.g., deepfakes or AI-generated profiles), to change the users’ appearance. Such checks also require the creation of Internet-wide trust infrastructures that do not exist today, whose technical deployment would be quite complex, and whose worldwide legal enforcement seems doubtful.”
– Exposure to security risks: “Age-assurance checks not only might be ineffective, but can actually diminish safety online by exposing users to malware and scams when they resort to alternative services that do not implement verification—and users will undoubtedly turn to such alternative sources.”
– PETs do not address harms: Privacy enhancing technologies (PETs) might “bolster discrimination if only some (smart)phones have the necessary capability, software, firmware, or hardware […] Moreover, when PETs require complex cryptographic protocols, likely only a few—potentially even a single—implementation will be available, often provided by a single party or company (e.g., Apple or Google) […] This not only creates a single point of failure but also immense centralization of power on those controlling the cryptographic libraries.”