Net Neutrality: The Latest Hot-Button Issue with a Fraud Problem
Remember a time when we could go a day without reading the words “fake news”? Since the election, people have been questioning the authenticity of all kinds of online content, everything from news articles to Facebook comments. Now, a hot-button issue has gotten caught in the fake news crosshairs: net neutrality.
You’d be hard-pressed to find someone who doesn’t have an opinion on net neutrality. If you’re in the tech world, you’ve probably heard 97 of those opinions just on your way to work. (But if you’re not quite clear on what net neutrality is, here’s an overview.) Although net neutrality is the default today, it’s not a given. In 2015, the Federal Communications Commission (FCC) instituted rules preventing ISPs from blocking or throttling legal content – but FCC chairman Ajit Pai has since moved to dismantle those regulations.
Unlikely Common Ground
To gauge support for Pai’s proposed deregulation, the FCC opened its website to a three-month period of public comment on net neutrality. As expected, the site was quickly flooded with comments from all sides.
Before long, something weird started to happen. People noticed that comments had appeared on the FCC website in their name – but these people had never actually submitted comments. The Verge released a report corroborating some of these claims. And in May, a non-profit called Fight for the Future, which supports regulations enforcing net neutrality, pressured the FCC to open an investigation into fake anti-net neutrality comments filed with the agency.
Surprisingly, the fake comments problem transcended policy perspectives. It wasn’t just anti-net neutrality advocates who were resorting to online fraud: the National Legal and Policy Center has estimated that 20% of the pro-net neutrality posts on the FCC site are fake, too. After analyzing the 2.5 million comments filed with the FCC, they found over 100,000 pro-net neutrality comments submitted with email addresses that did not match the signer’s name, along with multiple comments filed under the same email address.
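As a toy illustration of one of the consistency checks described above – not the NLPC’s actual methodology, and using invented sample data and field names – flagging email addresses reused across multiple comments takes only a few lines of Python:

```python
from collections import Counter

# Hypothetical sample of filed comments: (signer name, email address)
comments = [
    ("Alice Smith", "alice@example.com"),
    ("Bob Jones", "bob@example.com"),
    ("Carol White", "alice@example.com"),  # same email, different signer
    ("Dan Brown", "dan@example.com"),
]

def flag_duplicate_emails(comments):
    """Return the set of email addresses used by more than one comment."""
    counts = Counter(email.lower() for _, email in comments)
    return {email for email, n in counts.items() if n > 1}

print(flag_duplicate_emails(comments))  # emails filed more than once
```

A real audit would of course work over millions of records and combine several signals (name/email mismatch, submission timing, duplicated boilerplate text), but the underlying checks are this mechanical.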
An Alarming Trend
These fraudulent comments are part of a growing lineage of fake online content. “Fake news” has been – well, in the news since before the presidential election. Andrew Bleeker, president of the organization that ran Hillary Clinton’s digital advertising, says that manufacturing and spreading fake content is easier than ever. Most information needed to craft believable fake news or comments is readily available online.
This is true even for political issues beyond net neutrality. In most states, voter rolls are legally downloadable or purchasable. Platforms like Facebook and Twitter make it shockingly easy to target audiences by their age, location, gender, political views, and other demographics. Everything a fraudster needs for their fake content toolbox can be found with the click of a mouse.
What’s the Answer?
The tempting answer is to start limiting people’s access to information like voter rolls and sites like Facebook so they can’t manufacture fake news in the first place. And while that might be a pragmatic solution in the short term, it is unscalable and undemocratic. A better answer might come from an unlikely source: machine learning.
Businesses and online publications have started relying on machine learning to distinguish fake content from real. The New York Times recently implemented a system called Moderator, which picks out potentially hateful, spammy, or fake comments from articles’ comment sections; Moderator was trained on over 16 million moderated Times comments. Facebook also uses machine learning to weed out fake content. Google Maps has benefitted greatly from a machine learning system that picks out fake listings. And, of course, Sift Science offers a machine learning-based product aimed at countering content abuse.
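The systems named above are proprietary, but the core idea behind a comment classifier can be sketched with a toy naive Bayes model over word counts. Everything here – the training examples, labels, and thresholds – is invented for illustration; production systems like Moderator train on millions of human-moderated examples with far richer features:

```python
import math
from collections import Counter

# Tiny invented training set: (comment text, label)
train = [
    ("buy cheap followers now", "spam"),
    ("click here for free money", "spam"),
    ("i support keeping the current rules", "ok"),
    ("thoughtful article about policy", "ok"),
]

def fit(examples):
    """Count word frequencies per label to train a naive Bayes model."""
    words = {"spam": Counter(), "ok": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        words[label].update(text.split())
    return words, labels

def score(model, text, label):
    """Log-probability of the text under the given label, with Laplace smoothing."""
    words, labels = model
    total = sum(words[label].values())
    vocab = len({w for counter in words.values() for w in counter})
    logp = math.log(labels[label] / sum(labels.values()))
    for w in text.split():
        logp += math.log((words[label][w] + 1) / (total + vocab))
    return logp

def classify(model, text):
    """Pick the label with the higher score."""
    return max(("spam", "ok"), key=lambda lab: score(model, text, lab))

model = fit(train)
print(classify(model, "free followers click here"))
```

Real moderation pipelines layer many more signals on top of text (account history, submission rate, IP reputation), but even this bare-bones classifier shows why scale matters: the model is only as good as the labeled examples it learns from.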
Clearly, fraud-fighting tools like machine learning systems must soon become the norm. The alternative is an online world weighed down by distrust. Organizations like the FCC cannot afford to be reactive, but instead should proactively implement such systems before engaging with the public. To do so is to promote transparency and trust online. To fail to do so is to build an online world where everything – every comment, every piece of news, and every user – is suspect.