Scroll through a few YouTube Shorts and a strange sensation lingers. Your finger keeps moving even though nothing is particularly funny or offensive. No scene sticks in your memory, yet time evaporates. This state, dubbed "brain rot" (shorthand for a felt cognitive decay), is not merely a matter of personal taste. According to Kapwing's 2025 analysis, approximately 21% of YouTube Shorts shown to new accounts are AI-generated videos, while 33% are classified as brain rot content characterized by repetitive, meaningless patterns (Curtis, 2025). This is no longer a genre deviation but a structural feature of platform feeds themselves. AI slop has effectively become the new norm in YouTube's algorithm.
Why Does YouTube's Algorithm Reward "Boring Repetition"?
Platform recommendation systems do not evaluate fun or meaning. The metrics platforms prioritize are watch time, completion rate, and repeat viewing. Platform algorithm analyses show that shorter, context-free videos tend to have higher completion rates, and completion rate serves as a key signal determining further exposure. Generative AI fits this structure perfectly. Production costs approach zero, and identical patterns can be endlessly varied. According to Kapwing's analysis (utilizing Social Blade data), top-tier AI slop channels upload hundreds of similar videos in short periods and rack up hundreds of millions to billions of views (Curtis, 2025). Here, "boring" is not a flaw but an advantage. Strong messages or narratives demand judgment, and judgment stops scrolling. Conversely, meaningless repetition suspends judgment and increases dwell time. Algorithms do not select creativity. They select repeatability.
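The selection pressure described above can be made concrete with a toy scoring function. This is a hypothetical sketch, not YouTube's actual ranking system (which is proprietary); the function name and weights are invented for illustration. It only shows how a metric built on completion rate and repeat viewing mechanically favors a short, loopable clip over a longer, more demanding video:

```python
# Toy illustration (NOT YouTube's real algorithm): a feed that scores
# videos on completion rate and repeat views will systematically
# favor short, repetitive clips over longer, demanding content.

def engagement_score(duration_s: float, avg_watched_s: float,
                     repeat_views: float) -> float:
    """Hypothetical ranking signal: completion rate scaled by replays."""
    completion_rate = min(avg_watched_s / duration_s, 1.0)
    return completion_rate * (1 + repeat_views)

# A 15-second AI-slop loop, watched to the end and replayed 3 times:
slop = engagement_score(duration_s=15, avg_watched_s=15, repeat_views=3)

# A 3-minute video essay, abandoned halfway, never replayed:
essay = engagement_score(duration_s=180, avg_watched_s=90, repeat_views=0)

assert slop > essay  # the metric rewards repeatability, not substance
```

Under this (invented) metric the slop clip scores 4.0 and the essay 0.5: the essay held a viewer for six times as many seconds, yet the clip wins exposure. That inversion is the structural point of the paragraph above.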
Youth: The First Victims of AI Slop
The demographic most immediately affected by this structure is youth. According to research on children's digital environments by the OECD (2021) and UNICEF (2019), adolescents depend heavily on recommendation feeds and are especially susceptible to recommendation algorithms in the early stages of use, before enough interest data has accumulated for the feed to personalize around. At that stage, the influence of the default recommendations is overwhelming.
The problem is that AI slop and brain rot content appear "harmless" on the surface. There is no violence, no overt misinformation. Yet concerns persist that prolonged exposure to repetitive, low-stimulation content produces cumulative attention deficits and cognitive fatigue. The concern deepens when combined with the "illusory truth effect" identified in the classic research of Hasher, Goldstein, and Toppino (1977): statements encountered repeatedly are judged more valid regardless of their factual accuracy. The core of youth protection is no longer whether harmful content is blocked. It is how much seemingly harmless but thought-numbing content is amplified by default. The European Union's Digital Services Act (2022) demands transparency in recommendation algorithms and protection of minors, treating the initial recommendation experience as a policy concern that shapes subsequent usage patterns.
Advertising Contamination: Exposure Increases, But Brands Don't Stick
The problem for the advertising industry is more direct. Brand safety research shows that advertising effectiveness depends more on the quality and context of the content where ads are placed than on sheer exposure volume. Ads placed next to repetitive, meaningless content show significantly lower recall and favorability. In AI slop environments, views become cheap, but the quality of the connection to the brand deteriorates sharply. According to WARC's "The Future of Programmatic 2024" report, advertisers are shifting budgets toward premium, curated environments over brand safety concerns (WARC, 2024). This marks a re-evaluation: advertising that is safely placed (high brand safety) now matters more than advertising that is merely widely seen. The more serious problem is the cumulative effect. When ads repeatedly appear alongside low-quality content, advertising itself comes to be perceived as a fatigue signal. This is not an individual campaign issue but a structural risk eroding the platform's entire stock of advertising trust.
Content Policy Must Now Address "Algorithmic Amplification" by Giant Platforms, Not Just "Labeling"
Discussions about labeling AI-generated content have already revealed their limitations. As noted above, source labels and warning texts may do little against repeated exposure; familiarity substitutes for judgment. The focus of policy discussion is therefore shifting from content production to the responsibility of recommendation systems. This is why the European Union's Digital Services Act requires large platforms to conduct algorithmic risk assessments and to explain how their systems work. The question is not who made it, but who amplified it. In particular, what content is amplified by default in the feeds of youth and new users is being treated not as a technical issue but as a matter of public interest; under the DSA (2022), protecting minors from recommendation algorithms is one of the core issues in platform regulation.
Responding to "AI Slop," the New Shorts Trend: Why Do We Keep Watching?
When discussing the ethics of AI slop, people often think of creators' lack of sincerity. However, a more uncomfortable question exists: Why do we keep watching those videos? As Shapiro (2024) and Salvaggio (2024) point out in their respective analyses, people in information-overload environments tend to rely on algorithms for judgment. Rather than deciding for themselves what is important and trustworthy, they follow recommendations. The ethical risk of AI slop lies not in overt misinformation but in the accumulation of "seemingly harmless meaninglessness." It stops judgment and automates choice.
AI slop will not disappear. Production costs are too low, and it works too well. The problem is when it becomes everything. Policy is attempting to rewrite the rules of amplification, industry is recalculating the price of trust, and youth protection is moving to the center of recommendation design agendas. Ultimately, all discussions converge on one core question: What are we making bigger? And who will bear the consequences first? Competitiveness in the AI era is not the ability to create more content. It is the courage to decide what not to amplify. Platforms without that courage can fill feeds but cannot protect the next generation.
About the Author: Professor Yoo Seung-chul teaches the 'Media Engineering & Startup Track' in the Department of Communication & Media at Ewha Womans University. He shares insights with business leaders through columns and videos on business content.
Original Source: MADTimes
Publication Date: December 23, 2025