How technology is tackling new age-verification rules

Ann-Marie Alcántara, The Wall Street Journal | 24 Aug 2025

Many platforms rely on a government-issued photo ID to verify who users are. (Pixabay)

Summary: Selfies, government IDs and AI are all being used by companies trying to comply with new laws and regulations aimed at protecting children.

Lawmakers want technology companies to limit young people's access to social media by verifying their age. That is no easy feat.

U.S. law already prohibits social media and other advertising-supported platforms from serving content to children under 13 without parental consent. Lawmakers and advocacy groups are now going a step further, with a batch of new rules that also bar children under 18 from certain online activities. The U.K.'s online-safety law, which came into effect on July 25, requires platforms to verify users' ages before showing them certain content. In the U.S., several states have adopted similar bills.

New laws in Texas and Utah require parental consent before a child can download apps or make in-app purchases, which puts the enforcement burden on the app-store providers, namely Apple and Google. Other states, such as Mississippi, instead require social-media companies to verify users' ages.

NetChoice, a trade association whose members include Meta, Google and Reddit, is challenging the Mississippi law and has generally opposed government mandates. "Our members have developed many tools to give parents more visibility and control, and they continue to innovate every day to build effective solutions," said Paul Taske, co-director of the NetChoice Litigation Center. Google has even proposed alternative legislation.

Age-verification technology, much of it outsourced to third-party vendors such as Yoti, Incode and Persona to avoid excessive data collection, isn't always accurate. Legitimate users have complained of being locked out of their accounts. User error is also a problem, with some people submitting the wrong identification. Families sometimes willingly give minors access, and often parents simply don't know what their kids are doing. Tech-savvy children find workarounds, such as using VPN services to mask their location.

Here are the various technologies tech companies are using, and the pros and cons of how they work:

Photo IDs

Many platforms rely on a government-issued photo ID to verify who users are. People constantly fudge these documents, from pasting in a different photo to uploading a high-quality fake ID, sometimes even one generated with AI, said Roman Karachinsky, chief product officer at Incode. The other problem with photo IDs is that they assume everyone has one, and not everyone does, said Karl Ricanek Jr., professor of computer science in the College of Science and Engineering at the University of North Carolina Wilmington.

Parental consent

"Some of these age-verification instruments are no better than parents who control their own children," said Ari Waldman, professor of law at the UC Irvine School of Law. Both are "poor guardrails," he added.

Yet platforms and app-store providers sometimes rely on parents and other adults over 18. Apple and Google have parents create and monitor children's accounts. In June, Apple announced a new developer API that lets a parent share a child's age group with developers without giving them the child's date of birth, so apps can display age-appropriate content.
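The general idea behind such age-group sharing can be shown in a short sketch. The Python snippet below is a hypothetical illustration, not Apple's actual API or data model: the parent-managed account holds the birth date, and a third-party app only ever receives a coarse bracket.

```python
from datetime import date
from enum import Enum


class AgeRange(Enum):
    """Coarse brackets an app might receive instead of a birth date."""
    UNDER_13 = "under 13"
    TEEN_13_15 = "13-15"
    TEEN_16_17 = "16-17"
    ADULT = "18+"


def age_range_for(birth_date: date, today: date | None = None) -> AgeRange:
    """Map a birth date, held only by the parent-managed account,
    to the bracket that gets shared with a third-party app."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return AgeRange.UNDER_13
    if age < 16:
        return AgeRange.TEEN_13_15
    if age < 18:
        return AgeRange.TEEN_16_17
    return AgeRange.ADULT


# The app only ever sees the bracket, never the date of birth.
print(age_range_for(date(2011, 4, 2)).value)  # e.g. "13-15"
```

Real systems add account linking and consent flows on top of this; the point is simply that the app sees "13-15," not a birth date.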
But it isn't always easy to tell whether the purported parents are real. "You have no idea if you have a true parent doing these validations," Ricanek said. "And kids are creative; most kids have several accounts."

The video selfie

In 2024, Meta announced Instagram teen accounts, which limit what teens can see and whom they can contact. Those under 16 also need a parent or guardian's permission to make any changes. If someone tries to change their age to over 18, they have to confirm it with an ID check or a video selfie sent to Yoti.

If blocked users want to prove to TikTok that they are older, they can provide a credit card for a temporary charge or send a photo of their government ID along with three selfies. TikTok works with Incode, which uses zero-knowledge-proof tokens, which confirm a user's age without disclosing personal information, to protect privacy. (A simplified sketch of that idea appears at the end of this article.) Blocked users can also ask a parent or guardian to confirm their age. "We minimize the data and don't ask for too many things," said Ricardo Amper, CEO of Incode.

The verification process can be difficult to get right. Users should take a high-quality video selfie in a well-lit room, Karachinsky said. (The company guides users through the process and can help them correct it along the way.) And the rise of manipulated, deepfake video makes the work even harder, he added. "Even for relatively lower-stakes cases, you need technology that can tackle the advanced efforts," he said.

In the U.K., Reddit works with Persona, which asks for an uploaded selfie or a government ID. After Persona verifies the user's age, Reddit receives and stores only the user's date of birth and verification status.

The AI guess

In mid-August, Google started using AI to estimate a user's age based on signals they send to the platform: what they search for, which YouTube videos they watch. Indirect signals can be a good way to check without involving the user. Meta also says it uses technology to estimate age based on activity.

But it's hard enough to distinguish between users in the first place, Ricanek said. "We assume that everyone has their own personalized, individualized account, and that doesn't always happen."

Safety versus privacy

Getting age verification right means nailing accuracy without collecting too much data. The Kids Online Safety Act is legislation that sailed through the Senate but stalled in the House. It would require big tech companies to address a host of risks children face on their platforms, from misleading marketing to sexual exploitation. It may never become U.S. law, but if it does, platforms will have to know exactly which users are children.

"If businesses are expected to do age verification, they have to do it in a rights-based way," said Aliya Bhatia, senior policy analyst at the Center for Democracy and Technology, a nonprofit organization.

Write to Ann-Marie Alcántara at [email protected]
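As mentioned in the TikTok section above, vendors such as Incode say they can confirm an age threshold without disclosing personal information. The sketch below is a toy stand-in for that idea, not Incode's system and not a real zero-knowledge proof: a verifier that has seen the user's ID issues a signed over-18 claim, and the platform checks the claim without ever learning the birth date.

```python
import hashlib
import hmac
import json
from datetime import date

# Toy shared secret; a real system would use public-key signatures or an
# actual zero-knowledge proof so the platform cannot mint its own tokens.
VERIFIER_SECRET = b"demo-only-secret"


def issue_over_18_token(user_id: str, birth_date: date) -> dict:
    """The verification vendor sees the birth date once and returns only
    a signed yes/no claim; the birth date is not part of the token."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"user_id": user_id, "over_18": age >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return {**claim, "signature": signature}


def platform_accepts(token: dict) -> bool:
    """The platform checks the signature and the boolean claim only;
    it never learns the user's date of birth."""
    claim = {k: v for k, v in token.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["signature"], expected) and claim["over_18"]


token = issue_over_18_token("user-123", date(2004, 6, 1))
print(platform_accepts(token))  # True: threshold confirmed, no birth date shared
```

A production design would rely on cryptography the relying platform cannot forge; the shared secret here only keeps the example self-contained.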