How far should we go with online age checks to protect the young?

The Straits Times, 19 February 2026

By Assistant Prof Roy Ka-Wei Lee, Information Systems Technology and Design (ISTD) & Design and Artificial Intelligence (DAI)

 

A tougher line is vital, but there’s room for a nuanced approach that doesn’t compromise privacy and data safety.

 

Today, anyone in Singapore, regardless of age, can open a social media account in under a minute. Pick a platform, tap “Sign up”, scroll the birth year a little further back, and hit “Yes, I’m over 13”.

 

No parent, no teacher, no official. It doesn’t matter whether they are six or 16; the same rules – or rather, the lack thereof – apply.

 

That “free-for-all” era of age guesswork is ending.

 

From March 2026, app stores must fully comply with a code of practice from the media regulator, the Infocomm Media Development Authority (IMDA). This means putting age screening in place to prevent users under 18 from downloading apps meant for adults, such as dating apps or those with sexual content.

 

Singapore is one of the first countries to mandate this at the app store layer: its App Distribution Service (ADS) Code is among the first regulations anywhere to target app stores specifically.

 

Social media platforms, too, face growing regulatory pressure to be more accountable by showing how they know who their younger users are – and what steps they take as a result.

 

Some are now experimenting with a new generation of tools: AI systems that infer your age from your clicks, selfie scans that estimate it from your face, and ID checks that feel uncomfortably close to airport security.

 

However, these tools pose sweeping privacy and data-safety challenges. The answer for Singapore is to move decisively, but carefully, towards layered, proportionate age-screening systems that provide real protection for children – without defaulting to blunt, intrusive ID checks.

 

The hard approach

At one end are “hard” methods: uploading a government ID, or verifying via a credit card or digital ID. For example, Instagram and Facebook request IDs if an account is flagged as underage; YouTube sometimes requires an ID or credit-card authorisation for age-restricted videos, especially in stricter jurisdictions.

 

In Singapore, one response would be to favour the strongest checks everywhere. Make Singpass the default login. Require parental unlocking for social apps. Technically, this would address many verification challenges.

 

Socially, it raises other issues.

 

There is a safety-privacy trade-off. Systems strong enough to keep most under-13s off mainstream platforms are also strong enough to keep very detailed logs of who sees what.

 

Even if no active monitoring is intended, the technical ability to link every login, search or “like” to an NRIC-verified account introduces what is called a panopticon effect: the awareness of traceability can alter how people explore ideas, culture or opinions online.

 

There is also a hard cybersecurity reality. Centralising verified identity together with social-media activity creates a high-value data honeypot. Singapore’s SingHealth breach in 2018 and subsequent phishing waves demonstrated that even well-resourced systems are not immune.

 

If platforms storing NRIC-linked identities are breached, attackers gain not just contact details, but also complete private digital histories tied to legal identity. That risk is qualitatively different from today’s data leaks.

 

Child protection is essential – but without strict data-minimisation and separation between identity verification and behavioural records, the infrastructure built to protect children could unintentionally normalise lifelong traceability for everyone.

 

There is also a safety-inclusion trade-off. The children most at risk online are often those with the least tidy paperwork: no passport, no identity card – the minimum age to register for an NRIC is 15 – and complex family situations.

 

Designing a system that assumes every child has an attentive parent with a Singpass and time to manage settings risks excluding those who most need safe, moderated spaces.

 

And there is safety versus reality: however strict the gates, some teens will climb the fence – sharing accounts, borrowing devices, using VPNs, or drifting to less regulated platforms.

 

A good age-verification regime protects most children most of the time, but cannot be the only line of defence. What we need are less intrusive systems.

 

The middle way

Facial age estimation is one: Instagram, TikTok and Twitch use short selfie videos to estimate age. The images are analysed by models trained to identify age ranges, and are then discarded.

 

Error rates still vary across demographics, but the technology is often reliable enough to separate children from adults when buffers are built in, such as treating anyone who appears under 23 as “not safely over 18”.
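
As a rough illustration of how such a buffer works – the function and thresholds below are hypothetical, not any platform’s actual rule – an estimate is acted on only when it clears the buffer, and borderline cases are escalated to a stronger check:

```python
# A minimal sketch of a buffered age gate. Illustrative thresholds only.
def gate_by_estimated_age(estimated_age: float,
                          adult_age: int = 18,
                          buffer_years: float = 5.0) -> str:
    """Turn a model's age estimate into a decision, leaving room for error."""
    if estimated_age >= adult_age + buffer_years:
        return "allow"      # appears 23 or older: confidently over 18
    if estimated_age < adult_age:
        return "deny"       # estimated under 18
    return "escalate"       # 18-22: too close to call; request a stronger check
```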

 

For users, this process is quick. For policymakers, it can look like a pragmatic compromise between assurance and intrusion.

 

Beyond the platforms themselves, there is also the ecosystem layer. App stores already provide age ratings and family controls. With the new ADS Code, Singapore is formally making these intermediaries part of the age-verification chain.

 

Indeed, rather than defaulting to a single “nuclear option”, Singapore clearly has room to explore more layered approaches, such as checks that become stricter as content becomes riskier.
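
In code terms, a layered scheme is little more than a mapping from content risk to the least intrusive check that still fits – a hypothetical sketch, with the tier names and check types invented for illustration, not drawn from the ADS Code:

```python
# Hypothetical risk tiers mapped to minimum age checks. The labels are
# invented for illustration; they are not the ADS Code's actual categories.
MINIMUM_CHECK = {
    "general":     "self_declared_age",    # low risk: today's tick-box
    "teen_social": "facial_age_estimate",  # medium risk: selfie-based estimate
    "adult_only":  "verified_age_token",   # high risk: ID-backed proof of age
}

def required_check(content_risk: str) -> str:
    """Return the least intrusive check matching the content's risk tier."""
    return MINIMUM_CHECK[content_risk]
```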

 

Another approach is systems that verify age without hoarding data such as facial images and IDs. There also needs to be transparency around the error rates of the AI systems used to verify a user’s age.

 

It’s about layering the safeguards depending on how much information is shared – and, correspondingly, how much of an individual’s information is at risk in the event of a security breach. Such an approach minimises the damage.

 

Simply put, if a platform holds only an “age token” rather than a scan of your passport, a breach is a minor inconvenience rather than life-altering identity theft.
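
To make the “age token” idea concrete, here is a toy sketch – hypothetical code, using a shared-secret HMAC for brevity where a real deployment would use public-key signatures so that platforms never hold the issuer’s key. The point is what the platform stores: a signed yes/no claim, not the document behind it.

```python
import hashlib
import hmac
import json
import secrets

# Held by the age-verification provider. In practice this would be a private
# signing key, with platforms verifying against the matching public key.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool) -> dict:
    """Sign a claim that carries only an age band – no name, NRIC or photo."""
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(token: dict) -> bool:
    """The platform checks the signature; it never sees the underlying ID."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]
```

If such a token leaks, an attacker learns only that someone was once verified as over 18 – nothing that ties back to a legal identity.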

 

The next steps

Singapore’s framework separates responsibilities across the ecosystem – app stores, social media services, and parents – and asks each to do a different part of the job. There’s no single silver-bullet solution.

 

The Social Media Services Code (SMS Code), in effect since 2023, requires major platforms to curb harmful content and to provide children with more protective, age-appropriate default settings. Then came the ADS Code, which requires app stores to carry out age screening.

 

While the layered model is thoughtfully designed, design is not the same as demonstrated impact. We should be clear-eyed about what success looks like. Are fewer children accessing age-inappropriate content? Are harmful exposure rates declining? Are screen-addiction indicators improving?

 

At present, publicly available outcome data remains limited, making it difficult to measure whether our approach outperforms stricter bans or looser regimes elsewhere. Without transparent metrics – on under-age access rates, circumvention rates, harmful-content prevalence and well-being outcomes – policy risks being assessed on intent rather than effect.

 

A mature system must therefore pair new safeguards with rigorous, published evaluation, so that approaches can be adjusted based on evidence, not assumption.

 

Australia has gone for a complete crackdown, banning under-16s from holding social media accounts and setting a global precedent. However, such a ban is a blunt instrument.

 

In a hyper-connected hub like Singapore, a total ban is likely to drive young users underground, pushing them towards VPNs or unregulated fringe platforms where safety oversight is weaker, not stronger.

 

It would also undercut the very digital literacy Singapore seeks to cultivate. Social media is today’s town square: the answer is not to lock children out of it, but to design safer, age-appropriate spaces within it and teach young users how to navigate responsibly.

 

A model of curated access – safer environments rather than sealed doors – is more consistent with Singapore’s pragmatic, future-ready approach to building tech-savvy citizens.

 

One thing is clear – the days of the simple “yes tick” are numbered. And about time, too.

 

  • Roy Ka-Wei Lee is an assistant professor at the Singapore University of Technology and Design, specialising in AI.