Facebook Mea Culpa Continues

The fallout continues for the world’s most popular social media platform. Weeks after CEO Mark Zuckerberg appeared before joint Congressional committees, the company he launched is still answering to consumers furious that it allowed others, like Cambridge Analytica, to violate their privacy.

Facebook is still struggling to find a fix for that particular burning bridge, so the company has stepped back and taken a more unilateral approach to addressing customer experience on its platform. Now, Facebook wants to target “trolls” as well as those who post “sexist, racist and hateful” content in news feeds. But the company isn’t promising much, except to say this will be more difficult than stopping other kinds of posts that violate the forum’s community standards.

To date, Facebook has targeted and disabled about 1.3 billion accounts it deemed either “fake” or in violation of community standards. And that doesn’t include all the accounts “fakers” tried to create but were stopped in the process. No matter how you measure it, that’s a huge number of “unwanted” accounts for the platform to manage. And, apparently, that’s just the tip of the iceberg.

But there is a silver lining. Facebook’s automated programs seem to be getting better at identifying and blocking violating content and those who spread it. By the company’s estimation, it has been able to successfully stop about 86 to 99 percent of all “violence, nudity, and terrorist propaganda” that users have tried to post on the platform.

While consumers tend to think that’s a great start, it doesn’t touch the “fake news” issue, and that’s something Facebook will definitely have to reckon with sooner rather than later.

The first step in that process is identifying exactly what counts as “fake news.” That process is fraught with potential traps, and fine-tuning its net will likely upset many across the socio-political spectrum. In trying to tighten a net that most agree is too loose, Facebook is bound to “snare” some content or content creators who, objectively, do not deserve to be censored.

Part of that comes from the fact that Facebook is still heavily dependent on user reports in determining which content violates the platform’s guidelines. For example, in the case of hate speech, about 62 percent of reports came from users rather than from Facebook’s own operations.

To fully regain customer confidence and repair its damaged reputation, Facebook will have to find the fine line between competent policing and overreach. Not an easy task, certainly, but well worth the effort.

Ronn Torossian is a public relations executive with over 20 years of experience.

