Social media used to be a window into real life: vacations, selfies, and moments that actually happened. But scroll through your feed today, and there’s a good chance half of what you’re seeing never existed at all. According to Instagram’s chief, Adam Mosseri, we’ve reached a tipping point.
Mosseri, in a recent Threads post, said artificial intelligence has flooded social media to the point that spotting real, human-made content may soon be easier than identifying what’s fake.
He outlined how AI-generated images, videos, and text now appear so frequently across feeds that the problem has flipped. Instead of chasing down what’s artificial, platforms may need to start focusing on proving what’s real.
Mosseri acknowledged the term “AI slop,” often used to describe distorted images or videos with obvious glitches, but said not all AI content looks that way anymore. As the technology improves, he warned, automated systems designed to detect fake media will struggle to keep up.
The numbers support his warning. Research from Kapwing analyzing 15,000 of the world’s most popular YouTube channels found that more than 20% of videos recommended to new users are synthetic content, with 278 automated channels collectively racking up 63 billion views and generating an estimated $117 million annually.
Meanwhile, about 13% of Reddit posts in 2024 came from algorithms rather than people, a 146% jump since 2021. Even visual platforms aren’t immune, as recent estimates suggest 71% of images shared on social media are now AI-generated.
A new approach to fighting the flood
As an alternative, Mosseri said it may be more practical to verify authentic media by fingerprinting photos and videos created by humans. One idea he raised is digital watermarking at the camera level, where metadata could confirm who captured an image and how it was made. Some stock photo platforms already track that information, though it is often stripped away once images are shared on social media.
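To make the idea concrete, here is a minimal sketch of how camera-level signing and platform-side verification could fit together. It is loosely in the spirit of content-provenance efforts such as C2PA’s Content Credentials, but the function names, key handling, and manifest format below are illustrative assumptions, not any real camera API or standard.

```python
# Minimal sketch of capture-time provenance. Everything here is
# illustrative: sign_capture / verify_capture are hypothetical names,
# and a real camera would keep its key in secure hardware.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key lives inside the camera and the matching
# public key is published by the manufacturer for platforms to check against.
camera_key = Ed25519PrivateKey.generate()
manufacturer_pubkey = camera_key.public_key()


def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Hypothetical firmware step: bind capture metadata to the pixels."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": camera_key.sign(payload)}


def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Hypothetical platform-side check: was this file captured as claimed?"""
    manifest = record["manifest"]
    # Re-hash the received file; any change to the bytes breaks the match.
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        manufacturer_pubkey.verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False


photo = b"...raw image bytes..."
record = sign_capture(photo, {"device": "ExampleCam X1", "captured": "2025-11-02T10:41:00Z"})
print(verify_capture(photo, record))            # True: file is untouched
print(verify_capture(photo + b"edit", record))  # False: bytes have changed
```

The weak link is the one the article already notes: if the signed record is stripped, or the image is re-encoded during upload, verification fails even for a genuine photo. A scheme like this only works if platforms preserve the provenance data end to end.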
The Instagram head’s observations reflect a growing frustration across the industry as AI-generated posts blend seamlessly into everyday feeds. Platforms including LinkedIn have experimented with labeling AI content, but those systems have proven inconsistent and seem to have made little difference.
An October 2025 audit by Indicator found that major platforms correctly labeled only 33% of AI-generated content, while Meta faced criticism in June 2024 for mislabeling real photographs as AI-generated after photographers had done nothing more than apply basic editing tools.
The implications go beyond platform policy. If Mosseri is right, the next time you scroll past a sunset photo or a viral moment, the question won’t be ‘Is this real?’ It will be ‘Does it even matter anymore?’
