Stop bad content before it’s posted, and build better communities
By Yuchen Zhang /
29 Mar 2018
A listing that leads to a phony or non-existent product, apartment, or job. Spammy or abusive comments polluting your dating app or community. Messages from fraudulent sellers luring your buyers offsite…
Sound familiar? Trust is difficult to earn, but easy to lose – and if you run a community or marketplace, the integrity of your site can make or break your business. Malicious or poor-quality content represents an existential threat. The open nature of these platforms exposes you and your users to significant risk: bad actors can leverage your community to propagate spam, scams, toxicity, and more. You must protect your community.
At the same time, it’s imperative to keep your user experience friction-free, so that good users keep engaging on your platform. So how do you scale while keeping fraudsters at bay?
That’s where Sift Science Content Abuse Prevention comes in. You can protect your community from all types of fraudulent user-generated content – before it goes live. Customers who are already using Sift Science Content Abuse Prevention have seen their volume of flagged content go down by as much as 70%, without increasing headcount.
Real-time machine learning to fight multiple forms of content abuse
Existing rules-based or manually intensive solutions are reactive, expensive, and difficult to maintain as your business grows. The Sift Science solution includes built-in automation tools and easy-to-use dashboards to help you stop content abuse in real time.
Our Live Machine Learning models analyze both the content itself and the behavior of users who are creating content. Here are just a few of the signals that our models learn from:
- When and how much a user is posting
- Sequence and speed of activities
- Who they interact with
- Number of duplicate or similar messages posted
- IP address and device used
- Locations and distances
- Social data
- Email, mail exchanger (MX) records, and domain analysis
- Flagged content from the community
- Resemblance of content to previously detected bad content
- Repeat versus first-time actions
- Presence of contact information or URLs
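To make the list above concrete, here is a minimal sketch of how a few of these behavioral signals could be derived from a new post and a user's history. The field names and heuristics are illustrative assumptions, not Sift's actual schema or feature set:

```python
def extract_signals(post, user_history):
    """Derive a handful of behavioral signals like those listed above.

    `post` and `user_history` use hypothetical field names:
      post: {"created_at": datetime, "text": str}
      user_history: {"post_times": [datetime], "post_texts": [str]}
    """
    now = post["created_at"]

    # Posting velocity: how many posts in the last hour
    recent = [t for t in user_history["post_times"]
              if 0 <= (now - t).total_seconds() <= 3600]

    # Duplicate or similar messages (exact match, for simplicity)
    duplicates = sum(1 for text in user_history["post_texts"]
                     if text == post["text"])

    # Presence of URLs (a crude heuristic for contact info / links)
    has_url = "http://" in post["text"] or "https://" in post["text"]

    return {
        "posts_last_hour": len(recent),
        "duplicate_count": duplicates,
        "contains_url": has_url,
        "first_time_poster": len(user_history["post_times"]) == 0,
    }
```

A real system would feed features like these, alongside thousands of others, into a machine learning model rather than scoring them directly.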
We also perform natural language processing to analyze the riskiness of the text itself, so you can avoid maintaining cumbersome blacklists of words and phrases.
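One reason word blacklists fail is that a single character swap ("ch3ap" for "cheap") defeats an exact match. As an illustration of the alternative, here is a toy similarity measure over character trigrams that still flags lightly mutated text as resembling known bad content; this is a teaching sketch, not Sift's actual NLP:

```python
def char_ngrams(text, n=3):
    """Break text into overlapping lowercase character n-grams."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity over character trigrams.

    Small spelling tweaks change only a few trigrams, so mutated
    spam still scores close to the original bad content.
    """
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

For example, "ch3ap replica watches" scores far higher against a known spam message "cheap replica watches" than an innocuous review does, even though no blacklisted word matches exactly.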
We combine these learnings with 16,000+ other fraud and content abuse signals to get the most holistic view of each user – then score them according to their trustworthiness. And if a user commits fraud or abuse on your site or anywhere across our network of 6,000+ sites and apps, we’ll update their risk score immediately, so you can make sure similar behavior doesn’t happen on your site.
What’s new with Sift Science Content Abuse Prevention
Stop fraudsters early and catch bad content faster with automated review queues
We’ve enhanced our review queues to put the content front and center, highlighting all of the most relevant risk signals so you can make quick, easy decisions.
Not only can you block fraudulent users, but you can also make decisions on individual pieces of content. For example, you could block a specific comment while allowing the user to continue transacting on your site.
Protect your community without needing larger content moderation teams
Automate how you want to treat users and their content. Workflows and Queues let moderation teams act on suspicious content and users easily and quickly, without involving engineering resources.
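The routing logic behind such a workflow can be sketched as a simple threshold policy on the risk score: auto-block the riskiest content, queue the gray area for human review, and let everything else through. The thresholds and return values below are illustrative assumptions; real workflows are configured per business:

```python
def route_content(risk_score, block_above=90, review_above=70):
    """Route a piece of content by risk score (0-100, higher = riskier).

    Thresholds are hypothetical examples, not recommended defaults.
    """
    if risk_score >= block_above:
        return "block"
    if risk_score >= review_above:
        return "send_to_review_queue"
    return "allow"
```

The key design point is that moderators only ever see the middle band, which is how flagged-content volume can drop sharply without adding headcount.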
Our technology automatically picks up on new language patterns as fraudsters adapt, so your team does not have to continuously identify and blacklist new terms.
Want to learn more? Contact our sales team, visit the Content Abuse Prevention product page, and check out our integration guide. As always, we’d love to hear what you think!
Yuchen was a Product Marketing Manager with Sift. A promoter of machine learning, Yuchen has also worked in consulting, at Facebook, and at a number of data science and analytics startups.