Machine Learning Isn’t Always a Black Box

By Janet Wagner / 23 Jan 2017

There are a number of common myths about machine learning, which we touched on in a previous post. Now, we want to dive deeper into one of those myths: that machine learning is a black box.

While it’s true that many machine learning platform companies do not provide any insights about results or explain how their algorithms and models work, this does not have to be so. Some companies invest in making their machine learning predictions and results interpretable to users. Machine learning isn’t always a black box.

Interpretability is important

Companies that provide machine learning solutions should help users interpret the results. This instills a sense of trust in users, helping them feel that they're not using an opaque system. For example, some fraud prevention platforms score transactions but don't provide adequate explanations of the scores. Without these explanations, users won't have confidence in the platform and may question its accuracy.

In a 2015 presentation, Sift Science CEO and Co-Founder Jason Tan explained that customers need to understand where scores come from, so they can trust the system enough to make automated decisions based on those scores.

“The nice property of an algorithm like Naïve Bayes or decision forest is that it’s very easy to visualize,” Tan said. “We’ve built out this whole console that our customers can log into to get insight into why someone was scored the way they were, and to get transparency.”
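
To make that concrete, here is a minimal sketch of the kind of transparency a Naive Bayes model allows: because the model sums independent per-feature terms, each signal's contribution to a score can be listed explicitly. The feature names and data below are invented for illustration; they are not Sift's actual features or model.

```python
# Minimal sketch: explaining a Naive Bayes fraud score feature by feature.
# The signals and training data are hypothetical, not Sift's actual model.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

feature_names = ["mismatched_billing_country", "disposable_email", "high_order_velocity"]

# Tiny hypothetical training set: 1 = signal present, label 1 = fraud.
X = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 0],
              [1, 0, 0],
              [0, 0, 1]])
y = np.array([1, 1, 0, 0, 0, 1])

model = BernoulliNB().fit(X, y)

order = np.array([[1, 1, 0]])              # the transaction we want to explain
score = model.predict_proba(order)[0, 1]
print(f"fraud probability: {score:.2f}")

# Naive Bayes adds up independent log-likelihood terms, so each present signal's
# pull toward "fraud" vs. "not fraud" can be read off directly.
log_ratio = model.feature_log_prob_[1] - model.feature_log_prob_[0]
for name, present, contrib in zip(feature_names, order[0], log_ratio):
    if present:
        print(f"{name}: {contrib:+.2f} toward fraud")
```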

In addition to building trust between humans and algorithms, interpretability also augments human decision making: it focuses human attention and intuition on a small set of important data, thereby improving efficiency.

Interpretability is challenging

There are many different types of machine learning algorithms, models, and approaches – each with its own strengths and weaknesses. One of the classic problems in the field of machine learning is telling a story about the results that come from those algorithms and models. Some models are extremely accurate but not very interpretable. Some models are highly interpretable, but not all that accurate. And AI researchers are starting to design models that are optimized for both – though skepticism remains about whether this is even possible.

Ultimately, regardless of what an algorithm or model surfaces, one challenge cuts across all of them: how do you use human language to describe why the algorithm flagged a particular example of fraud – a particular user, order, or transaction – as “bad”? It's difficult to explain why an algorithm or model makes a specific prediction, and it's just as challenging to describe or present the results in a way that humans can interpret and understand.
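
One way to see this trade-off concretely is to fit two models on the same data: a shallow decision tree whose rules can be printed verbatim, and a random forest that usually scores better but is much harder to narrate. The sketch below uses synthetic data and is purely illustrative; it has no relation to Sift's production models.

```python
# Illustrative sketch of the accuracy-vs-interpretability trade-off (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Highly interpretable: a depth-3 tree whose decision rules read like plain if/else.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("tree accuracy:  ", tree.score(X_test, y_test))
print(export_text(tree))   # the entire model, printed as human-readable rules

# Typically more accurate, but 300 trees voting together resist a simple story.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```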

Explaining results with visualizations

Visualizations and dashboards are a great way to explain the results of a machine learning-based platform. Visualizations can take complex machine learning concepts and simplify them so that users can interpret and understand the results. For example, Sift Science provides dashboards where users can log in and find out why fraudulent transactions are scored the way they are.

Sift Science provides users with a numerical score that indicates how likely an order, user, or transaction is to be fraudulent. Users base their decisions on these scores, so it's important that they understand what those scores mean and whether a transaction should be allowed to proceed or be reviewed to ensure it isn't fraudulent.
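
As a purely hypothetical illustration of how such a score might drive a decision, the sketch below routes an order by threshold. The cutoffs are invented; in practice they would be tuned to each business's risk tolerance.

```python
# Hypothetical sketch: turning a 0-100 fraud score into a workflow action.
# The thresholds are made up for illustration, not recommended values.
def route_order(fraud_score: float) -> str:
    """Map a fraud score to an action."""
    if fraud_score >= 90:
        return "block"            # almost certainly fraudulent
    if fraud_score >= 60:
        return "manual_review"    # suspicious enough for an analyst to inspect
    return "accept"               # low risk; let the order proceed

for score in (12, 73, 95):
    print(score, "->", route_order(score))
```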

Sift Science uses visualizations to tell the story of fraudulent transactions and suspicious signals. The visualizations simplify the results that come from the platform’s advanced machine learning algorithms and models. Users can also access the raw data and activity details behind the visualizations. Providing both visualizations and raw data helps users better understand different levels of fraudulent activity. Visualizations also help guide users so that they take the proper course of action.
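
To suggest what such a visualization might look like, here is a minimal sketch of a per-transaction chart in which each bar is one signal's contribution toward the fraud score. The signal names and values are invented for illustration, not output from Sift's platform.

```python
# Illustrative sketch of a "why was this order scored high?" chart.
# Signal names and contribution values are invented.
import matplotlib.pyplot as plt

signals = ["mismatched_billing_country", "disposable_email",
           "new_device", "order_velocity"]
contributions = [0.9, 0.6, 0.3, -0.4]   # positive values push toward "fraud"

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(signals, contributions,
        color=["tab:red" if c > 0 else "tab:green" for c in contributions])
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("contribution toward fraud score")
ax.set_title("Why this order was scored the way it was (illustrative)")
plt.tight_layout()
plt.show()
```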

Users must understand the results

Interpretability is key when it comes to machine learning. Users of fraud prevention and other machine learning platforms must understand the results to trust that the platform is doing its job. Plus, interpretability can help users make better, more efficient decisions. Visualizations are a great way to accomplish this.

Bottom line: machine learning platforms don’t have to be black boxes.

Janet Wagner

Janet Wagner is a technical writer who specializes in creating well-researched, in-depth content about machine learning, deep learning, GIS/maps, analytics, APIs and other advanced technologies.
