Racism in Social Media
The number of people using social media is multiplying. In April 2021, about 72 percent of Americans used social media, compared with 5 percent in 2005, according to Pew Research. While social media enables people to connect with friends and family in positive ways, it can also enable negative behavior.
Is racism illegal?
The Civil Rights Act of 1964 outlawed discrimination by race in employment, housing, and education. However, speech, including hate speech, is protected under the First Amendment. Hate speech is only a crime if it consists of violent threats aimed at a particular group or can be directly linked to criminal activity.
Is racism a crime?
Racism that violates the Civil Rights Act is most often a civil issue. For example, people discriminated against in employment or housing based on race can file lawsuits to protect their civil rights. Racism can escalate into a hate crime, however. The FBI defines a hate crime as a criminal offense “motivated, in whole or in part, by the offender’s bias(es) against a race, religion, disability, sexual orientation, ethnicity, gender or gender identity.”
How does racism manifest itself on social media?
Racism manifests itself on social media in various ways, ranging from policies and censorship to racist speech.
Censorship of Black Users
Black users report that social media platforms censor their posts while allowing racist posts to stand. For example, Black social media users who are particularly outspoken about white supremacy often find their posts removed for violating community standards, according to an article in Forbes. The group No White Saviors recently saw much of its content shadow banned, or hidden, on Instagram, according to the article. No White Saviors also says that Instagram has threatened to eliminate its profile entirely. The article also quotes a diversity consultant as saying her responses to racist comments on LinkedIn were suppressed. These users and groups say that white supremacist speech, meanwhile, remains protected.
Allowing the Spread of Misinformation about BLM
Social media platforms have also allowed untrue claims about Black Lives Matter to remain in posts and spread, according to NPR. BLM organization leaders believe the mistruths are part of an overall strategy to undermine trust in the movement. For example, posts have identified BLM as a terrorist group and claimed that BLM activists badly beat elderly white people. The post about the beatings was removed after someone discovered that the photo was of an incident in Africa, the NPR article said. An article in the Columbus Dispatch says that a social media rumor that “Antifa” protesters were sending thugs around the country to turn demonstrations violent helped turn one Ohio protest violent.
Allowing hate speech and bullying to thrive
Social media also creates an environment that allows hate speech and bullying to thrive. One part of that environment is platform policies and algorithms. For example, content moderation policies grant anonymity to harassers, according to a study cited by Ariadna Matamoros-Fernandez and Johan Farkas. The policies also tolerate racist jokes because they drive engagement, according to a multi-university study. The APA reports on a study in which 48 to 60 percent of minority respondents had seen a joke about their ethnic group posted on social media.
Another reason hate speech thrives on social media is that these platforms let users post their thoughts the moment they have them. People sometimes post without appropriately filtering those thoughts, especially when emotional topics are involved, according to a Bark article. Some people are also simply trolls, meaning they like to post comments that will cause a stir, the article says. People can also find community around their anger, which in turn fosters more angry posts. The article cites Reddit as being particularly tolerant of racism in its communities.
Misinformation about people of different ethnicities is also rampant on some social media platforms. The Brookings Institution has identified four ways in which racist misinformation operates on social media: “stereotyping, scapegoating, allegations of reverse racism, and echo chambers.” An echo chamber is an environment where people only hear opinions similar to their own. It can distort perspective and further perpetuate misinformation, according to socialmedia.biz.
How Social Media Influences Hate Crimes
Some statistics demonstrate a correlation between an increase in hate crimes and an increase in social media usage. For example, social media use increased 3.1 percent from 2019 to 2020, according to Backlinko. During that same period, the number of hate crimes based on race and ancestry increased by about 32 percent, according to the FBI.
A paper published in the Journal of the European Economic Association shows a correlation between anti-refugee Facebook posts and anti-refugee events. The study looked at hate speech on the Facebook page of German right-wing populist political party Alternative for Germany (AfD) and found that anti-refugee posts and violence against refugees were linked even when there were no other refugee-related events that could have caused the violence.
The researchers concluded that “[their] results appear to be most consistent with the idea that short-run bursts in anti-refugee sentiment on social media can translate into real-life hate crimes by enabling coordination online, both through group actions and local spillovers.”
Perpetrators of several white supremacist attacks have publicized their acts on social media or circulated their views among racist communities online. For example, Dylann Roof, who killed nine Black clergy members and worshippers in Charleston in 2015, read online that white supremacy required violent action, according to the New Yorker. The shooter who killed ten people in a Buffalo supermarket in 2022 posted a white supremacist manifesto on Facebook before the shootings.
Racism on Social Media Particularly Affects Children
The racism on social media is particularly damaging to children. Racism has a “profound effect” on children’s health, according to the American Academy of Pediatrics. Emotions such as anger, fear, and shame can be particularly devastating for the young. Children may not initially realize that they are being harassed because of their race and may think they are only being teased, an environment in which cyberbullying thrives, says the Bark article.
How Bystanders Can Intervene
Bystanders can intervene to prevent racism from spreading further on social media. One way is through countering misinformation with facts, especially by linking to reputable online sites that refute the misinformation. Refuting misinformation and providing accurate information is a positive way to counteract racism and can be successful, the Brookings Institution says.
Some bystanders call out offenders for their racist remarks. However, this strategy works better on some platforms than others. For example, Reddit allows downvoting and moderator bots to call out racist behavior. Twitter does not, so users must respond with posts of their own. When bystanders call out racist behavior through posts, their actions often provoke more racist posts from the offender rather than fewer, the Brookings Institution says.
One of the most effective ways to intervene is to work toward changing the policies of social media platforms. Matamoros-Fernandez and Farkas call for more studies “interrogating how race is baked into social media technologies’ design and governance rather than just focusing on racist expression” as a way of combating racism in social media.
Help for Children Affected by Racism on Social Media
The Social Media Victims Law Center is committed to helping children who’ve been affected by social media racism. We understand how devastating racism can be to children and their families. We are experts at litigating cases against social media companies and can help you obtain the maximum compensation you deserve. Contact us today for a free case evaluation.
Frequently Asked Questions
We only handle cases on a contingent fee basis. This means that we are paid a portion of any recovery obtained in the case and you do not owe us any attorneys’ fees if the lawsuit does not result in a recovery.
Every case is unique. Our attorneys will work with your family to evaluate your potential case and help you evaluate whether filing a lawsuit or other legal proceeding is in your family’s best interest. Generally speaking, the types of cases we handle involve serious mental health effects, including attempted or completed suicide, eating disorders, inpatient mental health treatment, or sexual trafficking/exploitation that was caused by or contributed to through addictive or problematic social media use by teens and young adults.
We are a law firm based near Seattle, WA, made up of lawyers who have spent their entire careers representing victims who have been harmed by dangerous products. We are also parents. Shocked and troubled by the recent revelations about the harm caused to teens and young adults by social media platforms, which powerful technology companies have designed to be highly addictive, we launched Social Media Victims Law Center specifically to help families and children who have suffered serious mental harm or exploitation through social media use obtain justice.
Matthew P. Bergman