
So your police department’s social post was removed for content violations

Sometimes, police need to share disturbing pictures or video on social media to catch criminals, but content moderation is a tricky subject. Here’s what agencies need to know


In March, the Yonkers Police Department said YouTube had removed a video of an assault from the department’s channel for violating its “violent or graphic content” policy. After an appeal, YouTube reinstated the video behind an age restriction, police said.

Content moderation by social media platforms can be a murky gray area, so Police1 asked social media expert Yael Bar-tur for guidance on best practices for police agencies.

Everyone knows that police work isn’t always pretty. If you’re using social media to engage with your community, your posts won’t always be cops and kids playing basketball. Often, you will need to share content that might make people uncomfortable, like photos of wanted suspects or security videos of violent crimes.

Police should be posting this type of content because their social media channels can reach many people, which can lead to tips and arrests. Sometimes, however, this content may be removed or restricted by the platforms, or members of the public may complain about its graphic nature. How do police departments decide when to share violent content that could aid a criminal investigation? And what happens when social media moderators disagree? Before we can answer those questions, we need to travel back to the dawn of the internet.

A brief history of online content moderation

Content moderation on social media platforms is one of the most widely discussed topics in online policy, particularly when it comes to violence, threats and “hate speech,” a term that means different things to different people.

To better understand content moderation, we need to look at Section 230 of the 1996 Communications Decency Act, also known as “the 26 words that created the internet.” Section 230 is the reason social media platforms can almost never be sued over content their users publish. Here’s what Section 230 says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Some people misread Section 230 as permission for platforms to publish whatever they want without worrying about consequences. In reality, Section 230 allows platforms to moderate more, not less. Stay with me.

Before Section 230, websites that tried to moderate could be held liable for objectionable content that remained, because by moderating, the site appeared to have reviewed everything it published and deemed it acceptable. In the 1995 case Stratton Oakmont v. Prodigy, a New York court held an online service liable as a publisher precisely because it moderated its message boards. Imperfect moderation thus left sites vulnerable to lawsuits over bad content that slipped through the cracks, and it became safer not to moderate at all. Section 230 was created to allow companies to moderate content and to protect them when they get it wrong.

To post or not to post

When drafting a social media post, have a clear goal in mind. Remember, the investigation comes first. If you are looking to identify a suspect, your post should make that goal obvious. Put the suspect’s identifying information front and center in your tweet, Instagram caption or Facebook post.

But what about the brutality of the incident? This is where it gets tricky. If the crime is especially jarring, a graphic video will likely help the case spread to more people, which may generate more leads for investigators. That said, be measured in how you present violent content: don’t show more than is needed to make your point. Share the graphic details of a crime only when doing so serves the investigation, not to make a political point.

Police department social media accounts must walk a fine line

In short, the field is a bit of a mess. Platforms like Twitter, Facebook and YouTube typically use a combination of machine learning, user reports and human moderators to decide what violates their terms of service and should be removed. Sometimes they get it right; often, inappropriate content slips through the cracks, or acceptable content is caught in too wide a net and removed without good reason.

Sometimes moderators compromise by marking content as sensitive or hiding it behind an age restriction. Unfortunately, these compromises significantly shrink your audience, because most people won’t take the extra step of clicking through a warning or verifying their age. Police departments must be aware of these pitfalls and try to walk between the raindrops when sharing potentially sensitive content.

What can my department do if its post is taken down by moderators?

If you can, develop a relationship with each platform’s government outreach representative. That way, if you feel a post was restricted unjustly, you can appeal the decision or at least get an explanation. It’s also helpful to connect with others in your field through organizations such as the IACP Public Information Officers Section to trade tips, tricks and, yes, commiseration.

For what it’s worth, no platform seems to have found the perfect content moderation formula. In fact, these companies often admit that moderation is a haphazard field in need of fine-tuning. It may take a while before companies and users get that balance right. Until then, it helps to know that the moderation of police investigation content isn’t necessarily a targeted policy choice, but a longstanding problem that is shaping the future of online discourse as we speak.


Yael Bar-tur is a social media consultant who previously served as director of social media and digital strategy for the New York City Police Department, where she developed the department’s strategy and training guide for social media and policing. She has trained hundreds of members of the service on the use of social media, both in the NYPD and in other agencies, and was also responsible for exploring new channels for the NYPD and creating viral videos with millions of views.

Born and raised in Israel, Yael served in the Israeli Army as a foreign press liaison at the height of two wars and as a reservist in the Israeli mission to Haiti immediately following the 2010 earthquake. She holds a master’s degree from the Harvard Kennedy School of Government, where she wrote her thesis on police use of social media. In 2016, she was named to the International Association of Chiefs of Police “40 Under 40,” which recognizes 40 law enforcement professionals under the age of 40 from around the world who demonstrate leadership and a commitment to their profession. In 2018, the New York City Police Foundation awarded her the Hemmerdinger Award for Excellence for distinguished public service.

Follow Yael on Twitter or at www.yaelbartur.com.