Facebook chose to fight fake news with AI, not just user reports

Facebook built two versions of a fix for fake news this year and decided to trust algorithmic machine learning detection instead of relying only on user behavior, a Facebook spokesperson tells TechCrunch.

Today Facebook was hit with more allegations that its distribution of fake news helped elect Donald Trump. A new Gizmodo report says Facebook shelved a planned update earlier this year that could have identified fake news because it would have disproportionately demoted right-wing news outlets.

Facebook directly denies this, telling TechCrunch “The article’s allegation is not true. We did not build and withhold any News Feed changes based on their potential impact on any one political party.”

However, TechCrunch has pulled more details from Facebook about the update Gizmodo discusses.

Back in January 2015, Facebook rolled out an update designed to combat hoax news stories, which demoted links that were heavily flagged as fake by users, and that were often deleted later by the users who posted them. That system is still in place.

In August 2016, Facebook released another News Feed update designed to reduce clickbait stories. Facebook trained a machine learning algorithm by having humans identify common phrases in old news headlines of clickbait stories. The machine learning system then would identify and demote future stories that featured those clickbait phrases.
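Facebook hasn't published the details of that system, but the approach it describes, learning which headline phrases human reviewers associate with clickbait and then scoring new headlines against them, can be illustrated with a minimal sketch. Everything below (the function names, the bigram features, the toy training data) is hypothetical and far simpler than a production classifier:

```python
# Minimal sketch, not Facebook's actual system: learn word bigrams from
# headlines humans labeled as clickbait, then score new headlines by how
# many of their bigrams match. All names and data here are illustrative.
from collections import Counter

def learn_clickbait_phrases(labeled_headlines, min_count=2):
    """Collect word bigrams that recur in human-labeled clickbait headlines."""
    counts = Counter()
    for headline, is_clickbait in labeled_headlines:
        if not is_clickbait:
            continue
        words = headline.lower().split()
        counts.update(zip(words, words[1:]))
    # Keep only bigrams seen at least min_count times across clickbait examples.
    return {bigram for bigram, n in counts.items() if n >= min_count}

def clickbait_score(headline, phrases):
    """Fraction of the headline's bigrams that match known clickbait phrases."""
    words = headline.lower().split()
    bigrams = list(zip(words, words[1:]))
    if not bigrams:
        return 0.0
    hits = sum(1 for bigram in bigrams if bigram in phrases)
    return hits / len(bigrams)

# Hypothetical training data labeled by human reviewers.
training = [
    ("you won't believe what happened next", True),
    ("you won't believe this one trick", True),
    ("senate passes budget bill", False),
]
phrases = learn_clickbait_phrases(training)
print(clickbait_score("you won't believe these photos", phrases))  # → 0.5
```

A real system would use a trained statistical model over many features rather than a phrase lookup, but the pipeline shape is the same: humans label examples, the model extracts predictive features, and News Feed demotes stories that score high.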

According to Facebook, it developed two different options for how the 2016 clickbait update would work. One was a classifier modeled on the 2015 hoax detector, which relied on user reports; the other was the machine learning classifier built specifically to detect clickbait algorithmically.

Facebook says it found the purpose-built machine learning clickbait detector performed better, with fewer false positives and false negatives, so that's what it released. It's possible that the unreleased version is what Gizmodo refers to as the shelved update. Facebook tells me that unbalanced demotion of right-wing clickbait stories wasn't why that version went unreleased, but political leaning could still be a concern.

The choice to rely on a machine learning algorithm rather than centering the fix around user reports aligns with Facebook’s recent push to reduce the potential for human bias in its curation, which itself has been problematic.


A Gizmodo report earlier this year alleged that Facebook’s human Trend curators used their editorial freedom to suppress conservative trends. Facebook denied the allegations but fired its curation team, moving to a more algorithmic system without human-written Trend descriptions. Facebook was then criticized for fake stories becoming trends, and the New York Times reports “The Trending Topics episode paralyzed Facebook’s willingness to make any serious changes to its products that might compromise the perception of its objectivity.”

If Facebook had rolled out the unreleased version of its clickbait fix, it might have had to rely on the subjective opinions of staffers reviewing user reports about hard-to-classify clickbait stories, the way it does with more cut-and-dried hoaxes. Meanwhile, political activists or trolls could have abused the reporting feature, mass-flagging accurate stories as false whenever they conflicted with their views.

This tricky situation is the inevitable result of engagement-ranked social feeds becoming massively popular distribution channels for news in a politically polarized climate where campaign objectives and ad revenue incentivize misinformation.

