New Warning System for Altered Media
X has quietly rolled out a new labeling system that automatically flags posts containing doctored images and videos. The “manipulated media” tag appears directly beneath posts that contain intentionally altered visuals designed to mislead users. When people try to share these flagged posts, they also encounter warnings in the share menu that direct them to X’s policy page about deceptive media.
I noticed this change started appearing earlier this week, though X hasn't made any formal announcement about it. The labels seem to be surfacing organically as users share content that gets flagged by the platform's detection systems. The timing is notable, arriving just as concerns about election misinformation are heating up again.
Transparency Efforts and Content Moderation
This update follows X’s recent release of its first comprehensive transparency report since Elon Musk acquired the platform. The company disclosed that it suspended 5.3 million accounts during the first half of this year and removed or labeled 10.7 million posts for various policy violations. That’s a significant number, though I wonder how effective these measures actually are in practice.
The platform received over 224 million user reports during that same period, which suggests people are actively trying to police content themselves. But there’s been ongoing criticism from advocacy groups who argue X doesn’t act quickly enough against harmful content. Musk has defended the platform’s approach, pointing to Community Notes as a user-driven fact-checking system that he considers more effective than traditional moderation.
Account Verification and Location Features
Alongside the manipulated media labels, X also introduced a feature that displays account metadata including location information, creation dates, and username change history. This data appears at the top of user profiles, with more detailed information available under a “Joined” tab.
X's Head of Product Nikita Bier described this as "an important first step to securing the integrity of the global town square." He acknowledged the rollout had some "rough edges" and promised additional improvements would come soon. The location feature includes privacy protections: users in countries with limited free speech can display only their broader region rather than a specific country, and location updates are delayed and randomized.
Already, some interesting discoveries have emerged. Several accounts were deleted after users noticed discrepancies between their claimed locations and the platform's data. One profile claiming to be based in Washington, DC, was actually located in Africa according to X's system.
These changes represent X’s latest attempt to balance content moderation with its stated commitment to free speech. Whether they’ll be effective in reducing misinformation remains to be seen, but they do provide users with more context about the content and accounts they encounter on the platform.