Suicide risk on the net

Why suicidal content on social media is dangerous

In addition to all the beautiful, interesting and funny content on social media services, there is unfortunately also negative content. For people in an acute crisis or even at risk of suicide, such content can be life-threatening: for example, if it reinforces their negative world view, condones suicide or even gives instructions for suicide. klicksafe explains why social media algorithms reinforce the problem and how we can help those affected.

Serious suicidal thoughts or even a suicide attempt are often triggered by a crisis situation: Bullying at school, lovesickness, a conflict with friends, family problems, school failure or general fear of failure. Depression can also lead to suicidal thoughts. In the adolescent's own assessment, these difficulties and crises are usually seen in an unrealistically negative light. Those affected feel unable to continue living in the same way as before. Suicide is seen as the only apparent solution to a situation that can no longer be overcome.

At this moment, it is important for those affected to receive help and be shown possible solutions to their problems. In this situation, however, content that confirms those affected in their negative world view is particularly harmful. This could be videos of other sufferers talking about their current poor condition, for example. Or content that glorifies a negative world view, self-harm and suicide. Content that portrays help options (e.g. counseling centers, therapy, medication) as useless or ineffective is also problematic. And, of course, all content that openly condones suicide and contains instructions for suicide.

What role do social media services play?

Many children and young people use social media platforms to communicate with friends, for entertainment, but also as a source of information. Instagram and TikTok are particularly popular among German adolescents: according to the JIM Study 2023, around 60% of young people use these platforms daily or several times a week. Content that glorifies suicide is prohibited on both platforms. TikTok writes on its website on the topic of "Suicide and self-harm": "We do not allow content that depicts, promotes, trivializes as 'normal' or glorifies actions that could lead to suicide or self-harm."
Instagram has also set out clear rules for dealing with suicide content in its guidelines: "We do not allow glorifying or advocating self-harming behavior or suicide under any circumstances. In addition, we remove fictional depictions of suicide and self-harm as well as content about methods or aids."

Despite these clear rules, both platforms are repeatedly criticized. One reason for this is that people in crisis situations are more likely to see problematic content. This is because when the platform's algorithm has recognized that someone is interested in content on the topic of mental health, more and more of this content is suggested.

Amnesty International, for example, documented this phenomenon in the report "Driven into darkness: How TikTok encourages self-harm and suicidal ideation" (available in English only). In test accounts that simulated the behavior of people with mental health problems, after only a short time every second suggested video contained problematic content. These accounts were also shown mental health content up to ten times more frequently than other accounts.

The same problem can also be observed on Instagram. In an internal study entitled "Teen Mental Health Deep Dive", Instagram surveyed users between the ages of 13 and 17 about mental health. Instagram published the research a few years later (available in English only), after the results were leaked to the press and reported worldwide. In the study, around 12 percent of users stated that they had been shown suicidal or self-harm content on Instagram in the last month. Particularly problematic: users who were mentally unwell were shown problematic content significantly more often than users who were well (see page 40).

How can you help those affected?

  1. Be attentive and use reporting functions: Parents should always start by talking openly with their children about their online behavior and, if necessary, about the topic of suicide. As a general rule, if you notice content that glorifies suicide, always report it first to the platform's support team (e.g. using the report function). The platform operators can contribute most quickly and effectively to making it more difficult for children and young people to access this content.
  2. Keep calm: If you suspect that a child is at risk of suicide, do not pressure them with appeals, demands or even coercion. Try to speak directly to the person concerned. Take a neutral, non-judgemental stance if possible. Share your own worries and fears very clearly. Address the fear of suicide concretely and very directly (without paraphrasing or trivializing). Signal that you are available as a contact person. If there is a willingness to talk, you can ask what is so stressful in their life and what you can do to help them address and deal with the problems. Ask how serious the suicidal intentions are. Make an effort to identify alternative ways of solving problems.
  3. Get professional support: If the person concerned is not willing to talk or you are at a loss yourself, you do not have to remain inactive. Share your observations with other people you trust. Get professional support if possible. Adults in particular who are unsure how to assess the behavior of affected children or adolescents should seek advice and help from counseling centers. Professional help in acute crisis situations can be found, for example, at school psychological counseling centers, educational counseling centers or at girls' and women's clubs.
  4. Have content checked: You can also have content that is difficult to assess checked by experts for its risk potential, for example via the official reporting and complaints offices.
  5. Review the use of social media: Talk to the person concerned about whether social media content has contributed to their negative mood recently. Point out that algorithmic sorting can lead to more and more negative and problematic content being displayed. If this is the case, you can consider together whether deliberate breaks from social media make sense and would do the person good. If they do not want to give up the platform, they can also create a new account and use that for the time being. Please note: The use of social media is not necessarily problematic or dangerous. On the contrary, people in crisis situations can also experience social media services as helpful and positive, for example because they can keep in touch with friends and distract themselves.

Further information in the klicksafe topic area Suicide risk online