Net Neutrality Has a Fraud Problem

By Roxanna "Evan" Ramzipoor

17 Jul 2017

Remember when we could go a day without reading the words "fake news"? Since the election, people have been questioning the authenticity of all kinds of online content, from news articles to Facebook comments. Now, a hot-button issue has gotten caught in the fake news crosshairs: net neutrality.

Net Neutrality: The Latest Hot-Button Issue with a Fraud Problem

You’d be hard-pressed to find someone who doesn’t have an opinion on net neutrality. If you’re in the tech world, you’ve probably heard 97 of those opinions just on your way to work. (But if you’re not quite clear on what net neutrality is, here’s an overview.) In 2015, the Federal Communications Commission (FCC) instituted rules preventing ISPs from blocking or throttling legal content. But although net neutrality is currently the default, it’s not a given: FCC chairman Ajit Pai has proposed removing those regulations.

Unlikely Common Ground

To gauge support for Pai’s proposed deregulation, the FCC opened its website to a three-month period of public comment on net neutrality. As expected, the site was quickly flooded with comments from all sides.

Before long, something weird started to happen. People noticed that comments had appeared on the FCC website in their name – but these people had never actually submitted comments. The Verge released a report corroborating some of these claims. And in May, a non-profit called Fight for the Future, which supports regulations enforcing net neutrality, pressured the FCC to open an investigation into fake anti-net neutrality comments filed with the agency.

Surprisingly, the fake comments problem transcended policy perspectives. It wasn’t just anti-net neutrality advocates who were resorting to online fraud: the National Legal and Policy Center has estimated that 20% of the pro-net neutrality posts on the FCC site are fake, too. After analyzing the 2.5 million comments filed with the FCC, the Center found that over 100,000 pro-net neutrality comments were submitted with email addresses that did not match the signer’s name, and that multiple comments had been filed under the same email address.
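The two signals that analysis leaned on, reused email addresses and signer names that don't match the address, are simple enough to check programmatically. Here is a minimal sketch in Python; the `(name, email)` tuple layout and the matching heuristic are illustrative assumptions, not the FCC's actual data schema or the Center's methodology:

```python
from collections import Counter

def flag_suspicious_comments(comments):
    """Flag (name, email) pairs where the email address is reused across
    comments, or where the email's local part shares no token with the
    signer's name. A toy heuristic for illustration only."""
    email_counts = Counter(email.lower() for _, email in comments)
    flagged = []
    for name, email in comments:
        local_part = email.lower().split("@")[0]
        name_tokens = name.lower().split()
        reused = email_counts[email.lower()] > 1
        # e.g. "jane.doe@example.com" matches "Jane Doe" via the token "jane"
        mismatched = not any(token in local_part for token in name_tokens)
        if reused or mismatched:
            flagged.append((name, email))
    return flagged
```

A real system would normalize names more carefully (initials, nicknames, transliterations), but even this crude pass surfaces both kinds of anomaly the analysis described.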

An Alarming Trend

These fraudulent comments are part of a growing lineage of fake online content. “Fake news” has been – well, in the news since before the presidential election. Andrew Bleeker, president of the organization that ran Hillary Clinton’s digital advertising, says that manufacturing and spreading fake content is easier than ever. Most information needed to craft believable fake news or comments is readily available online.

This is true even for political issues beyond net neutrality. In most states, voter rolls are legally downloadable or purchasable. Platforms like Facebook and Twitter make it shockingly easy to target audiences by their age, location, gender, political views, and other demographics. Everything a fraudster needs for their fake content toolbox can be found with the click of a mouse.

What’s the Answer?

The tempting answer is to start limiting people’s access to information like voter rolls and sites like Facebook so they can’t manufacture fake news in the first place. And while that might be a pragmatic solution in the short term, it is unscalable and undemocratic. A better answer might come from an unlikely source: machine learning.

Businesses and online publications have started relying on machine learning to distinguish fake content from real. The New York Times recently implemented a system called Moderator, which picks out potentially hateful, spammy, or fake comments from articles’ comment sections; Moderator was trained on over 16 million moderated Times comments. Facebook also uses machine learning to weed out fake content. Google Maps has benefitted greatly from a machine learning system that picks out fake listings. And, of course, Sift Science offers a machine learning-based product aimed at countering content abuse.
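Under the hood, systems like these are trained classifiers: they learn from labeled examples which word patterns tend to show up in abusive or fake comments. As a toy illustration of the idea (emphatically not the Times' or Sift's actual approach), here is a bag-of-words Naive Bayes classifier in pure Python; the training examples and labels are made up:

```python
import math
from collections import Counter, defaultdict

class TinyCommentClassifier:
    """A toy multinomial Naive Bayes over word counts. Real moderation
    systems use far richer features and vastly more training data."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> example count

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        vocab = set()
        for counts in self.word_counts.values():
            vocab |= set(counts)
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for word in words:
                # add-one smoothing so unseen words don't zero out a label
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

The Moderator system works on the same principle at scale: millions of human moderation decisions become the labeled examples, and the model generalizes from them to new comments.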

Clearly, fraud-fighting tools like machine learning systems must soon become the norm. The alternative is an online world weighed down by distrust. Organizations like the FCC cannot afford to be reactive, but instead should proactively implement such systems before engaging with the public. To do so is to promote transparency and trust online. To fail to do so is to build an online world where everything – every comment, every piece of news, and every user – is suspect.

Related: fake accounts, fraud, news

Roxanna "Evan" Ramzipoor

Roxanna "Evan" Ramzipoor was a Content Marketing Manager at Sift.
