The Pitch Sounds Reasonable. The Architecture Doesn’t.

Protect the children. It’s the most politically bulletproof justification in the legislative toolkit, and New York’s Senate Bill S08102 deploys it without hesitation. Introduced by Senator Andrew Gounardes and currently under review in the Senate Consumer Protection Committee, the bill promises to shield minors from the harms of the internet by enforcing age verification at the device level.

The child safety rationale is real. Youth mental health outcomes correlate with certain patterns of social media use, and New York Attorney General Letitia James—whose office would both write the rules and enforce them under this bill—has framed the legislation in those terms, pointing to high rates of anxiety and depression among young people as a driver for regulation.

But the mechanism S08102 proposes to achieve that goal goes dramatically further than keeping kids off TikTok. Read carefully, it describes a mandatory identity layer baked into every internet-connected device sold or operated in New York—one that would broadcast a verified age signal to every app, website, and online service a user touches. That’s not a parental control. That’s infrastructure.


What the Bill Actually Does

S08102 would amend New York’s General Business Law to require “covered manufacturers”—a category that encompasses device makers, operating system providers, and app stores—to conduct what the bill calls “age assurance” at the point of device activation. The bill defines age assurance broadly as “any method that can reasonably determine the age category of a user, using methods that reasonably prevent against circumvention.”

Users would be sorted into four buckets:

  • Under 13
  • Ages 13–15
  • Ages 16–17
  • 18 or older

Once categorized, the device would transmit that age signal to every app and website the user accesses via a real-time API. App developers who receive the signal would be legally required to treat it as authoritative. The bill is not proposing a voluntary system or a parental opt-in. It is mandating that the identity check happen at the OS layer—before a user ever opens an application.
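The bill specifies no concrete interface, so any implementation detail is conjecture. Still, the mechanism it describes can be sketched: a bracket assigned once at activation, an OS-level query any app can make, and a developer obligation to treat the answer as authoritative. Every name below is hypothetical.

```python
from enum import Enum

class AgeBracket(Enum):
    # The four categories named in S08102.
    UNDER_13 = "under_13"
    AGES_13_15 = "ages_13_15"
    AGES_16_17 = "ages_16_17"
    ADULT = "18_plus"

# Hypothetical device-side value, set once during activation-time
# "age assurance" and persisting for the life of the device.
_DEVICE_AGE_SIGNAL = AgeBracket.AGES_13_15

def request_age_signal() -> AgeBracket:
    """What an app-facing query might look like: the OS returns the
    bracket assigned at activation. Under the bill, apps must treat
    this value as authoritative rather than asking the user."""
    return _DEVICE_AGE_SIGNAL

def feature_unlocked(bracket: AgeBracket) -> bool:
    """Example developer obligation: gate an adults-only feature
    on the device-supplied bracket."""
    return bracket == AgeBracket.ADULT
```

The point of the sketch is the architecture, not the code: the check runs below every application, and the result follows the device everywhere.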

Penalties for non-compliance are set at $50,000 per day per violation. News organizations are exempt. If signed into law, the bill would take effect one year after enactment.


The Verification Problem

The bill is deliberately vague about how age must be verified, and that vagueness is where its implications become most serious.

The phrase “commercially reasonable” methods is doing enormous work in S08102’s text, and Attorney General James has already signaled what those methods might look like. In September 2025, her office released a Notice of Proposed Rulemaking for the SAFE for Kids Act—a separate but related New York law targeting addictive features on social media—that explicitly floated biometric analysis and government-issued identification as acceptable verification approaches.

S08102 would apply the same rulemaking framework to device-level verification. That means the AG—who writes the rules, enforces the rules, and has already stated a preference for biometrics and government ID—would determine exactly what “age assurance” means in practice. The bill’s broad language hands her office that discretion explicitly.

The difference between California’s approach and New York’s is instructive here. California’s Digital Age Assurance Act (AB 1043), signed in October 2025 and taking effect January 1, 2027, requires OS providers to collect age via self-declaration at account setup. Users simply enter their age. No facial scan. No passport upload. Assemblymember Buffy Wicks, who authored the California bill, specifically cited that design choice as an effort to “avoid constitutional concerns.”

Colorado’s SB26-051 takes a similar self-attestation approach, requiring OS vendors to collect age brackets at setup and share them with app stores.
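The self-attestation model California and Colorado adopt amounts to simple arithmetic on a user-supplied birthdate, with nothing checked against documents or biometrics. A minimal sketch, assuming the same four brackets S08102 uses (the actual statutes do not prescribe code):

```python
from datetime import date

def bracket_from_self_declared_birthdate(birthdate: date, today: date) -> str:
    """Self-attestation at account setup: the user types a birthdate,
    the OS derives a bracket from it, and no verification occurs."""
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_15"
    if age < 18:
        return "16_17"
    return "18_plus"
```

Whatever the user types is what the system believes, which is exactly the privacy-preserving weakness of the design, and exactly what New York's verified approach trades away.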

New York went a different direction. S08102 explicitly draws from the SAFE for Kids Act’s rulemaking framework, where “commercially reasonable” verification already includes biometrics and government-issued ID. California asks you to say how old you are. New York wants to check.


A Permanent Identity Layer

Even without biometric requirements, the architecture S08102 establishes would represent a fundamental change to how the internet works.

Today, internet use is largely pseudonymous by default. You can install apps, visit websites, and use services without your legal identity being attached to those activities unless you affirmatively choose to provide it. That default matters—not just for privacy, but for free expression. The Supreme Court established in McIntyre v. Ohio Elections Commission (1995) that anonymous speech is protected under the First Amendment, recognizing a tradition of political, religious, and personal communication that depends on the ability to speak without identification.

S08102 would erode that default at the infrastructure level. Every device would carry a persistent age signal. Every app that requests it would receive a verified identity datum tied to the device. Over time—through data aggregation, API logging, and the inevitable integration of identity systems—that signal becomes something more than an age bracket. It becomes a trackable identifier linking device usage to a verified legal identity.

This concern is not hypothetical. Age verification systems create records. Records can be subpoenaed, breached, or compelled by government order. A database of who verified their age to access what service is, functionally, a surveillance log—one that maps which New Yorkers accessed which websites and applications, and when.

The ACLU has argued in related litigation that laws limiting online access in the name of child safety consistently harm speech and privacy rights for adults as well. In Free Speech Coalition v. Paxton, challenging Texas’s adult content verification law, the ACLU noted that such laws “have a massive chilling effect on adults” while often doing little to prevent minors from accessing restricted content.


The Open Source Problem

S08102’s reach extends beyond commercial platforms in ways that have already alarmed technical communities.

The bill’s definition of covered manufacturers is broad enough to encompass open source operating systems—Linux distributions, FreeBSD variants, and similar projects maintained by volunteer communities with no revenue model, no legal team, and no mechanism to implement real-time age verification APIs. California’s AB 1043 raised the same concern, prompting one FreeBSD derivative, MidnightBSD, to add a clause to its software license explicitly barring California residents from desktop use rather than attempt compliance.

An open source calculator application, DB48X, added similar restrictions for California and Colorado users, with its maintainers stating plainly that the project “does not, cannot and will not implement age verification.” Major Linux distributions including Fedora, Ubuntu, and Linux Mint have opened legal discussions about how—or whether—to comply.

New York’s bill, by extending verification requirements to every internet-connected device and mandating real-time API infrastructure, compounds this challenge. Open source projects cannot verify user identity at the OS layer without fundamentally compromising the privacy principles that make them valuable alternatives to commercial operating systems.


The Enforcement Asymmetry

One structural feature of S08102 deserves particular attention: the consolidation of rulemaking and enforcement authority in a single office.

The New York Attorney General writes the verification rules. The Attorney General enforces them. The penalties are $50,000 per day. And the AG has publicly stated that biometric analysis and government ID verification are the preferred compliance mechanisms.

That concentration of authority means the specific meaning of “age assurance”—which methods are required, how data must be stored, what constitutes a violation—will be determined through administrative rulemaking with limited legislative guardrails. If those rules require facial recognition or government ID uploads, they will do so without a separate legislative vote, and the same office that wrote the rules will pursue violations.

This is not a hypothetical concern. The SAFE for Kids Act’s proposed rules, released in September 2025, demonstrate exactly what the AG’s office considers acceptable. S08102 explicitly connects its compliance framework to those rules.


The Broader Wave

New York’s bill doesn’t exist in isolation. A wave of state legislation is pushing age verification requirements from the application layer down into the operating system and hardware layer, creating a patchwork of mandates that collectively describe a new model for internet access.

  • California: AB 1043 (Digital Age Assurance Act). Self-declaration at OS account setup; age bracket API. Signed October 2025; effective January 1, 2027.
  • Colorado: SB26-051. Age attestation at OS account setup; age bracket API. Under active legislative consideration as of March 2026.
  • New York: S08102. Age assurance at device activation; biometrics or government ID possible; real-time API to all apps and websites. Introduced; in the Consumer Protection Committee.

Each of these bills requires OS vendors to collect age data and broadcast it to developers. New York’s goes furthest by extending the requirement to all websites and online services—not just app store downloads—and by leaving the door open for verified identity methods rather than self-declaration.

The cumulative effect, if all three laws are implemented as written, is that major operating systems would be required to maintain age databases for their users, transmit those records to any requesting developer, and—in New York—potentially do so based on biometric or government ID verification. The internet, for users of those devices, would no longer have a default-anonymous baseline.


What Supporters Say

The case for S08102 isn’t trivial, and dismissing it entirely misreads the genuine problems it’s responding to.

Children do encounter harmful content online. Platforms have demonstrably exploited engagement mechanics that research suggests are damaging to adolescent mental health. Age-gating in physical spaces—buying alcohol, seeing an R-rated film—is accepted as reasonable policy without being treated as surveillance infrastructure. If platforms know users’ ages, they can be held accountable for the experiences they deliver to minors.

AG James and Senator Gounardes have positioned S08102 as a necessary tool given the failure of voluntary industry compliance. Platforms have had years to implement meaningful protections and have largely chosen engagement over safety. State-level action, supporters argue, fills a vacuum left by the absence of federal standards.


What Critics Are Getting At

The objections to S08102 are not primarily about whether children should be protected online. They’re about whether this specific mechanism is proportionate, effective, or constitutional.

On proportionality: verifying the identity of every user of every internet-connected device in New York in order to protect minors from social media content is, by any reasonable measure, a very large gun aimed at a much smaller target. Existing parental control tools, filtering software, and platform-level age-gating could address the problem without mandatory device-level identity infrastructure.

On effectiveness: motivated teenagers have circumvented every age verification system deployed to date. VPNs, borrowed devices, and family sharing accounts are well-understood workarounds. The bill’s compliance burden falls overwhelmingly on law-abiding adults and technical communities, not on the minors it’s designed to reach.

On constitutionality: the Supreme Court has repeatedly struck down broad internet regulations premised on child protection when they impose significant burdens on adult speech and less restrictive alternatives exist. The doctrinal question S08102 would face—whether mandatory device-level identity verification imposes an unconstitutional chill on protected speech and anonymous communication—is serious and unresolved.


The Question Worth Asking

Every piece of legislation that builds identity infrastructure does so in the context of today’s political environment. The systems created for one purpose do not remain limited to that purpose. A device-level age verification database maintained by Apple, Google, or Microsoft—or by the state of New York—is also a database that can be queried, breached, subpoenaed, or repurposed.

The child safety rationale for S08102 is real. The surveillance architecture being constructed in its name is also real, and extends well beyond any child. Whether those two things can be separated—whether it’s possible to build effective age verification without building a permanent identity layer across the internet—is the question New York’s legislature has not adequately answered.

The bill is in committee. There is still time to ask it.


S08102 was introduced by Senator Andrew Gounardes and is currently under review in the New York State Senate Consumer Protection Committee. The amended version, S08102A, was reprinted February 25, 2026. Enforcement and rulemaking authority under the bill would vest in the New York Attorney General’s office.