Article by Kevin Tritt | Photo illustration by Jeffrey C. Chase | May 01, 2026
The immensely popular social media platform TikTok has more than 1 billion monthly active users and sees some 34 million videos posted every day, and 63% of United States teenagers ages 13-17 use the platform.
While many of those short-form clips are harmless, some types of content, together with the platform's recommendation algorithm, can worsen mental health and increase the risk of self-harm and even suicidal thoughts among teenagers.
As these platforms reshape how young people consume information and connect with one another, they have also raised urgent questions about mental health and digital safety. Jiaheng Xie, assistant professor of management information systems in the University of Delaware's Alfred Lerner College of Business and Economics, and his team have conducted research on how algorithm-driven platforms like TikTok can inadvertently amplify harmful mental health content, and how artificial intelligence (AI) itself might be used to mitigate those risks before tragedy occurs.
Xie's research paper, "Short-Form Videos and Mental Health: A Knowledge-Guided Neural Topic Model," published in Information Systems Research, aims to enhance the first step of content moderation: automated detection. Rather than focusing solely on video content itself, his model analyzes both videos and the comments viewers leave beneath them.
"What we are really interested about is whether the viewers have suicidal thoughts or not," Xie said.
If a large proportion of comments on a video reflect suicidal ideation, that reaction becomes a powerful signal that something in the video may be harmful.
Using this insight, Xie and his team collected both high- and low-risk videos from the platform and fed them into an AI pipeline designed to learn patterns associated with dangerous content. Over time, the model became capable of predicting whether a new video is likely to elicit suicidal thoughts in viewers, even before that video gains widespread traction.
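To make that intuition concrete, here is a minimal Python sketch of the comment-to-label idea: a video is marked high risk when a large share of its comments reflect suicidal ideation, and a simple text classifier then learns to predict that label from the video's own content. The cue words, the 20% threshold and the data below are illustrative assumptions; the published model is a knowledge-guided neural topic model, not a bag-of-words classifier like this one.

```python
# Minimal sketch: derive a video-level risk label from viewer comments,
# then train a text classifier to predict that label from the video's
# own content. Cue words, threshold and data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

CUE_WORDS = {"hopeless", "end it", "can't go on"}  # toy ideation cues

def comment_reflects_ideation(comment: str) -> bool:
    """Toy stand-in for a trained per-comment ideation detector."""
    return any(cue in comment.lower() for cue in CUE_WORDS)

def label_from_comments(comments, threshold=0.2) -> int:
    """Mark a video high risk (1) when a large share of comments flag."""
    flagged = sum(comment_reflects_ideation(c) for c in comments)
    return int(flagged / max(len(comments), 1) >= threshold)

# Each video: (transcript or caption text, list of viewer comments).
videos = [
    ("upbeat workout motivation clip", ["love this", "so inspiring"]),
    ("dark monologue about giving up", ["i feel hopeless too", "same", "can't go on"]),
]
texts = [text for text, _ in videos]
labels = [label_from_comments(comments) for _, comments in videos]

# Learn to predict the comment-derived label from video text alone,
# so brand-new uploads can be scored before comments accumulate.
vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
new_score = classifier.predict_proba(
    vectorizer.transform(["monologue about feeling hopeless"]))[0, 1]
```

Trained this way, a classifier can score a brand-new upload before any comments exist, which is precisely what makes the approach useful for early detection.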
Xie, who recently received the Association for Information Systems (AIS) Early Career Award and was among only four scholars worldwide recognized this year, has pursued mental health research since before TikTok's rise to cultural dominance.
"I've always been interested in depression and healthcare-related research," he said, as his scholarly focus on depression began years before short-form video platforms became ubiquitous.
That long-standing interest took on new urgency around 2019 and 2020, when TikTok's popularity surged, particularly among teenagers. During that period, reporting from news organizations like The New York Times and NBC News increasingly linked the platform to disturbing content and, in some cases, youth suicides.
"That caught my attention," Xie said. "That's obviously a very serious issue, and it falls into my category of research."
Rather than viewing TikTok simply as a social media app, Xie approached it as a complex socio-technical system - one where algorithmic design, human behavior and mental health outcomes intersect.
TikTok, like other major social platforms, already employs large-scale content moderation strategies. According to Xie, the company relies on a two-step process: AI systems first scan uploaded videos to flag potentially concerning material, and then tens of thousands of human moderators review the flagged content.
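As a rough illustration of that two-step flow, consider the sketch below. The scorer, the 0.8 cutoff and the review queue are invented for the example and do not describe TikTok's actual system.

```python
# Toy two-step moderation flow: an automated scorer flags uploads,
# and only flagged items enter a human review queue. The scorer and
# the 0.8 cutoff are invented for illustration.
from collections import deque

FLAG_THRESHOLD = 0.8
human_review_queue = deque()  # step 2: human moderators work this queue

def ai_risk_score(video_text: str) -> float:
    """Stand-in for a trained model returning a risk score in [0, 1]."""
    return 0.9 if "hopeless" in video_text.lower() else 0.1

def moderate_upload(video_id: str, video_text: str) -> str:
    score = ai_risk_score(video_text)  # step 1: automated AI scan
    if score >= FLAG_THRESHOLD:
        human_review_queue.append((video_id, score))
        return "held for human review"
    return "published"

print(moderate_upload("v1", "Fun recipe hack"))            # published
print(moderate_upload("v2", "Everything feels hopeless"))  # held for human review
```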
Yet despite these efforts, critics continue to argue that harmful content slips through the cracks. This is where Xie saw an opportunity to contribute, arguing that his field is uniquely positioned to understand not just the technology, but also the organizational and societal context in which it operates.
"We know the business context. We know what's going on with that technology as well as the problem domain," he said.
One of the most troubling aspects of short-form platforms is their recommendation systems. Xie pointed out that once users engage with concerning content, the algorithm may push similar videos into their feeds. For vulnerable teenagers, this can create a dangerous feedback loop.
"If you're a teenager that's depressed, you will keep getting those videos," he said. And while some may be supportive, others may intensify harmful thoughts.
Even more alarming, Xie noted that some videos explicitly violate community guidelines yet remain accessible, including content that provides step-by-step instructions for suicide or references to lethal substances.
These realities underscore the need for faster, more precise detection methods - ones that can scale to the volume of content produced daily.
A key innovation in Xie's approach is explainability. Rather than simply flagging a video as "high risk," his model draws on medical literature and established mental health frameworks, known as medical ontologies, to identify specific factors associated with suicidal ideation.
This means moderators receive not just a warning, but context: which topics, themes or risk factors contributed to the model's assessment. With that information, moderators can make faster and more informed decisions, he explained.
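A toy sketch of that ontology-guided explanation step might look like the following, where the risk factors and cue phrases are illustrative placeholders rather than the paper's actual medical ontology.

```python
# Toy explainability step: instead of a bare "high risk" flag, map the
# model's detected topics onto risk factors from a medical ontology so
# moderators see why a video was flagged. Entries are placeholders.
RISK_ONTOLOGY = {
    "hopelessness": ["no way out", "pointless", "hopeless"],
    "method reference": ["pills", "rope"],
    "social withdrawal": ["alone", "nobody cares"],
}

def explain_flag(detected_topics):
    """Return the ontology risk factors matching the model's topics."""
    factors = {
        factor for factor, cues in RISK_ONTOLOGY.items()
        if any(cue in topic for topic in detected_topics for cue in cues)
    }
    return sorted(factors)

# A moderator sees the contributing factors, not just a score:
print(explain_flag(["feeling hopeless lately", "so alone at night"]))
# -> ['hopelessness', 'social withdrawal']
```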
Such guidance is critical, especially when human reviewers must watch entire videos to assess risk - an inherently time-consuming and emotionally taxing process.
Xie is careful to emphasize that AI moderation is not a one-time fix. Models must evolve alongside platforms, user behavior and emerging content trends.
"This should be a continuing process," he said, pointing to ongoing concerns about insufficient accuracy in existing systems.
While his current work remains academic, Xie hopes to collaborate with platforms like TikTok in the future. Ultimately, his goal is not to discourage social media use, but to reduce its most dangerous consequences.
"We're not saying we don't want teens to use social media," he said. "We're just trying to prevent the negative outcomes."
For Xie, the motivation is deeply personal and profoundly human.
"These problems are directly related to people's lives," he said. By advancing AI tools that can operate at a global scale, his research aims to ensure technology serves as a safeguard, rather than a silent amplifier, of mental health risks in the digital age.