TikTok and Meta align on deepfake-disclosure rules ahead of 2026 elections
A rare coordinated policy announcement names a shared detection standard and a common disclosure label.
TikTok and Meta jointly announced an aligned deepfake-disclosure policy this week, naming a common detection standard and a shared disclosure label that will appear on AI-generated or AI-manipulated political content across both platforms. The coordination, unusual for two companies that seldom align on policy, comes six months ahead of the 2026 US midterm elections.
Under the aligned policy, political content — defined as content mentioning candidates, election administration, or pending legislation — must disclose any use of AI to alter a real person's appearance, voice, or actions if a "reasonable viewer" would be misled about what the person said or did. The definition draws heavily from a 2024 FEC advisory opinion on synthetic media in campaign ads, and from an emerging consensus in academic research on what counts as "deceptive" synthetic content.
The technical backbone is the Coalition for Content Provenance and Authenticity (C2PA) standard, which embeds cryptographic Content Credentials into media at the point of creation. Both platforms will automatically label C2PA-marked AI content and will apply their respective classifiers to flag unmarked synthetic media for review. The two companies have also committed to sharing classifier signals with each other on a limited basis — content flagged on TikTok can trigger review on Instagram, and vice versa, via a cryptographically hashed signal that doesn't reveal the underlying content to the other platform.
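Neither company has published implementation details for that signal exchange, but the general shape is a keyed hash of a content fingerprint rather than the content itself. The sketch below is purely illustrative: the shared key, fingerprint format, and function names are assumptions, not anything from TikTok's or Meta's documentation.

```python
import hashlib
import hmac

# Hypothetical sketch of a privacy-preserving cross-platform flag exchange.
# Nothing here reflects either platform's real implementation; the key,
# fingerprint format, and API shape are illustrative assumptions.

SHARED_KEY = b"example-shared-secret"  # in practice, negotiated out of band


def signal_for(content_fingerprint: bytes) -> str:
    """Derive an opaque signal from a perceptual fingerprint of the media.

    A keyed hash (HMAC) lets the receiving platform match the signal against
    fingerprints of media it already hosts, without being able to reconstruct
    the underlying content from the signal alone.
    """
    return hmac.new(SHARED_KEY, content_fingerprint, hashlib.sha256).hexdigest()


def receive_signal(signal: str, local_index: dict) -> str | None:
    """Check an incoming signal against locally computed signals.

    local_index maps signal -> internal content ID for media on this platform.
    A hit queues that item for classifier or human review; a miss is ignored.
    """
    return local_index.get(signal)


# Example flow: TikTok flags a clip, Instagram checks whether it hosts the same media.
clip_fingerprint = b"\x12\x34"  # stand-in for a perceptual hash of the video
outgoing = signal_for(clip_fingerprint)

instagram_index = {signal_for(clip_fingerprint): "ig-media-98765"}
match = receive_signal(outgoing, instagram_index)
if match:
    print(f"Queue {match} for synthetic-media review")
```

The design choice worth noting is that the exchanged value is useful only to a platform that can independently compute the same fingerprint, which is consistent with the companies' claim that the signal does not reveal content to the other side.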
What's notable is not the disclosure requirement itself, which both platforms already had in some form, but the shared label. Previously, TikTok's "AI-generated" label looked and behaved differently from Meta's, which created confusion when the same clip crossed platforms. A unified visual standard makes the label a more durable trust signal, which is arguably the whole point. The design was workshopped with academic experts and tested with focus groups in six US media markets before it was finalized.
YouTube declined to adopt the shared label but told HowSociable that its existing required-disclosure system for "realistic altered" content remains in effect. X was not included in the coordination. Asked about participation, a spokesperson for X said the company's existing Community Notes system provides "superior crowd-sourced context" and that formal platform labeling is not a direction the company plans to pursue.
Academic and civil-society reaction has been cautiously positive. Dr. Jennifer Martinez, a disinformation researcher at UC Berkeley, told HowSociable that the aligned label is "the first piece of actually useful platform policy on synthetic content I've seen in years — because it treats the reader as the beneficiary of the policy, not the advertiser." Critics on the creator side have raised concerns about classifier false positives and the risk that legitimate satire or obviously AI-generated content, which is not trying to deceive anyone, will be labeled in ways that make it appear more suspect than it is.
The political-content narrowing is intentional and, per the policy language, durable. Non-political AI content remains governed by each platform's individual rules (Meta made labels mandatory for realistic AI-generated Reels as of April 30; TikTok has maintained similar rules since 2023). The aligned label applies only to political content, which is a meaningful constraint: it keeps classifier costs manageable while addressing the category where deceptive synthetic content is most dangerous.
For creators in the political space, the implication is direct: disclose any AI use in political content, use C2PA-enabled tools where possible, and assume aggressive classifier review in the six months preceding the election. For creators outside the political space, the alignment is an early preview of how platform policies are likely to converge on other topics — a development that could make cross-platform publishing simpler, for once, rather than more complex.
Alex, Platform Policy Reporter
Alex covers platform policy, regulation, and moderation. They hold a law degree and have written about Section 230, the EU's Digital Services Act, and algorithmic transparency.