Content Reviewed By

Reviewed by a board-certified physician (Medical) · Reviewed by a licensed attorney specializing in mass tort litigation (Legal)

Social Media Age Verification Laws by State: 2026 Guide for Parents

Published March 2026 · 9 min read

Since 2023, a growing number of states have enacted laws requiring social media platforms to verify user ages and obtain parental consent before minors can create accounts. This guide explains what these laws require in key states, how platforms are complying, where compliance failures have occurred, and why those failures matter for families pursuing legal claims.

Why Age Verification Laws Matter Beyond Compliance

Age verification laws serve two functions: they create prospective protection for children who are not yet on platforms, and they create a retrospective evidentiary record for families whose children were harmed before or during the law's implementation period. When a platform fails to implement age verification required by law, that failure is evidence of the same pattern plaintiffs have alleged throughout MDL 3047: platforms knew about the problem, had legally enforceable obligations to address it, and either failed to comply or complied inadequately in ways that allowed ongoing harm.

For families with children who created social media accounts as minors — especially before any state age verification requirement applied to those platforms — the regulatory history is still relevant. It establishes the standard of care that was knowable and achievable, because other jurisdictions (notably the United Kingdom and the European Union) had implemented age-appropriate design requirements years before U.S. state legislatures acted. A platform that complied with UK regulations but not with analogous U.S. state requirements made a choice, not an oversight.

State-by-State Overview

Florida (HB 3, signed 2024, effective January 2025): Florida's law is among the most aggressive in the country. It prohibits children under 14 from creating accounts at all and requires parental consent for users aged 14 and 15. Platforms must implement "reasonable age verification," which can include government-issued ID checks, verification through a credit or debit card held by the account holder, or third-party age estimation tools. Platforms that knowingly allow a child under 14 to create an account face civil penalties of up to $50,000 per violation. The law also requires platforms to delete accounts found to belong to underage users and to notify parents when deletion occurs.

Texas (SCOPE Act, effective 2024): Texas requires age verification for platforms with 100 million or more monthly users, using government-issued ID, financial account verification, or comparable methods. The law applies to accounts created after the effective date and to existing accounts where the platform determines the user may be a minor. Violations are enforceable by the Texas Attorney General with civil penalties. The law also prohibits covered platforms from using features "designed to be addictive" for minors — a provision with direct relevance to product liability claims about engagement optimization.

Utah (Social Media Regulation Act, effective 2024): Utah requires parental consent before minors can use covered platforms and gives parents the right to access minor account content, including private messages. The law prohibits platforms from using design features that "facilitate excessive use" by minors — language that directly addresses variable reinforcement notification systems and infinite scroll. Utah's law is notable for its explicit anti-addiction provisions, which go beyond age verification into design standards.

Arkansas (Social Media Safety Act, enacted 2023): Arkansas's original law required platforms to verify ages and obtain parental consent before minor account creation. The law was challenged on First Amendment grounds and its enforcement was enjoined by a federal court, and the legislature has since revised its approach. Even where enforcement has been blocked, the enacted requirements remain evidence of the standard of care legislators considered achievable.

Louisiana (HB 61, effective 2024): Louisiana requires parental consent for users under 18 — the highest age threshold of any U.S. state law. Platforms must use age verification methods approved by the state attorney general. Louisiana's law has been cited as a model by other states considering similar legislation because of its broad age coverage and specific implementation standards.

Georgia, Indiana, Ohio, and Montana have all enacted or advanced similar legislation with state-specific variations in age thresholds (ranging from 16 to 18), verification method requirements, and enforcement mechanisms. The legislative trend is clear: more states are enacting stricter requirements, and the floor for what constitutes legally required platform safety conduct is rising.

How Platforms Have Responded to Age Verification Requirements

Platform responses to state age verification laws have varied in approach and in compliance quality. Meta has implemented an age estimation tool that uses artificial intelligence to infer user age from account behavior and profile data. Independent researchers and regulators have raised questions about the accuracy and reliability of this approach, particularly for young teenagers whose online behavior patterns may not reliably signal their age. TikTok has implemented parental control tools through its Family Pairing feature and has introduced age gating for certain content categories. Critics argue these measures are insufficient because they rely on user self-reporting for the initial age gate.

Third-party age verification services — companies that use government ID matching, credit bureau data, or facial age estimation — have grown significantly as a result of state mandates. However, platforms have been reluctant to require government ID verification because of user friction concerns (users who are required to provide ID documents drop off at high rates) and privacy concerns. This reluctance to impose user friction is consistent with a broader pattern: when safety measures reduce engagement or acquisition, platforms have historically deprioritized them.

Age Verification Failures and Their Legal Significance

Age verification failures occur when platforms allow users to create or maintain accounts despite indicators that those users are minors. Common failure patterns include: accepting self-reported birthdates without verification; failing to act on behavioral or profile signals indicating underage use; failing to verify ages for accounts created through linked devices or family accounts; and applying age verification requirements inconsistently across product versions or regional markets.

In states with enacted age verification laws, documented compliance failures are actionable by state attorneys general and may support private civil causes of action under state consumer protection statutes, depending on the specific law. For individual families, a platform's failure to comply with an applicable age verification law at the time your minor child created an account is potentially relevant to claims of negligence per se — violation of a statute designed to protect minors from the specific harm they experienced.

The Federal Age Verification Picture

At the federal level, COPPA (the Children's Online Privacy Protection Act of 1998) requires platforms to obtain verifiable parental consent before collecting personal data from users under 13. COPPA's implementation has been widely criticized as inadequate: its actual-knowledge standard lets platforms rely on self-reported ages, the statute has not been substantively amended for the social media era, and enforcement has not deterred platforms from acquiring underage users. Proposed updates through COPPA 2.0 and the Kids Online Safety Act (KOSA) have advanced in Congress but have not been enacted as of early 2026.

The FTC has taken enforcement actions against platforms for COPPA violations, including a 2019 action against TikTok (then Musical.ly) that produced a $5.7 million civil penalty, at the time the largest in COPPA's history, and a 2024 federal lawsuit alleging continued violations. These federal enforcement actions have focused on data privacy rather than the mental health harm claims at the center of MDL 3047 private litigation. The two tracks are parallel and complementary: federal data privacy enforcement establishes that platforms knew they had underage users they were required to protect; mental health harm litigation establishes what that failure to protect caused.

What This Means for Your Family's Claim

If your child created a social media account as a minor in a state with an age verification law, and if that account was created after the law's effective date without the platform obtaining required verification or consent, that compliance failure is potentially relevant to your family's claim. Document the account creation date, your child's age at creation, the state where you resided at the time, and whether you received any notification from the platform at account creation. If you did not receive a notification that you were required to provide consent, document that absence.

Even in states without specific age verification laws, the regulatory history is relevant context. Platforms that complied with stricter international requirements but not with more permissive U.S. standards made deliberate choices that courts may find relevant to the duty-of-care analysis. A legal team evaluating your case will incorporate the regulatory history into the overall claim framework.

Platform Compliance Failures May Affect Your Claim

A free case review can evaluate how this regulatory history applies to your family's situation. Start today.

Check Your Eligibility →