In today’s highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough options to change their minds at any time. A misstep that diminishes a customer’s experience during registration or onboarding can lead them to replace one brand with another, simply by clicking a button.
Consumers are also increasingly concerned about how companies protect their data, adding another layer of complexity for businesses as they seek to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing concern about data privacy, while 78% expressed fears related to the amount of data being collected.
At the same time, growing digital adoption among consumers has led to a staggering increase in fraud. Businesses need to build trust and help consumers feel their data is protected, but they also need to provide a fast and seamless onboarding experience that actually protects against back-end fraud.
Against this backdrop, artificial intelligence (AI) has been touted in recent years as a fraud prevention panacea for its promise to automate the identity verification process. Yet despite all the talk about its application in digital identity verification, misunderstandings about AI persist.
Machine learning as a silver bullet
Today, there is no true AI that can verify identities without human interaction. When companies talk about leveraging AI for identity verification, they are really referring to the use of machine learning (ML), an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time.
When applied to the identity verification process, ML can play an innovative role in building trust, removing friction, and fighting fraud. With it, companies can analyze massive amounts of digital transaction data, create efficiencies, and recognize patterns that improve decision making. However, getting caught up in the hype without really understanding machine learning and how to use it correctly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, companies should consider the following.
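The "learn over time" loop described above can be sketched with a toy perceptron-style update in pure Python. The feature names and labeled examples below are hypothetical; a production system would use far richer signals and an established ML library, but the mechanics of adjusting weights from data are the same in miniature.

```python
# Minimal sketch: learn weights that separate fraudulent from legitimate
# transactions, then score a new transaction. Features and data are
# illustrative placeholders, not a real fraud model.

def train(transactions, epochs=20, lr=0.1):
    """Perceptron-style training: nudge weights toward correct labels."""
    weights = [0.0] * len(transactions[0][0])
    for _ in range(epochs):
        for features, is_fraud in transactions:
            score = sum(w * x for w, x in zip(weights, features))
            prediction = 1 if score > 0 else 0
            error = is_fraud - prediction          # -1, 0, or +1
            weights = [w + lr * error * x
                       for w, x in zip(weights, features)]
    return weights

def predict(weights, features):
    """Return 1 (fraud) or 0 (legitimate) for a feature vector."""
    return 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0

# Hypothetical features: (address mismatch, new device, velocity score)
labeled = [
    ((1, 1, 0.9), 1),   # known fraud
    ((0, 0, 0.1), 0),   # known legitimate
    ((1, 1, 0.8), 1),
    ((0, 1, 0.2), 0),
]
w = train(labeled)
print(predict(w, (1, 1, 0.85)))  # resembles past fraud -> 1
```

The same loop also illustrates the article's later caveats: the weights only reflect the patterns present in `labeled`, so anything biased or missing in that data is baked into every future decision.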
The potential for bias in machine learning
Bias in machine learning models can lead to exclusion, discrimination, and ultimately a negative customer experience. Training an ML system using historical data will translate data biases into models, which can be a serious risk. If the training data is biased or subject to unintentional bias by those who build the ML systems, decision making could be based on biased assumptions.
When an ML algorithm makes wrong assumptions, it can create a domino effect in which the system continually learns the wrong thing. Without the human expertise of data scientists and fraud specialists, and without oversight to identify and correct bias, the error will repeat itself and compound over time.
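One simple form of the oversight described above is auditing a model's historical decisions for unequal pass rates across groups. The sketch below is a hedged illustration: the groups, records, and 20-point gap threshold are hypothetical, and real fairness audits use multiple metrics and statistical tests rather than a single rate comparison.

```python
# Sketch of a basic bias check: compare identity-verification pass
# rates across groups and flag pairs whose rates diverge too far.

def pass_rates(decisions):
    """Per-group pass rate from (group, passed) records."""
    totals, passes = {}, {}
    for group, passed in decisions:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def disparity_flags(rates, max_gap=0.2):
    """Group pairs whose pass rates differ by more than max_gap."""
    groups = sorted(rates)
    return [(a, b)
            for i, a in enumerate(groups)
            for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Hypothetical decision log: group "B" passes far less often.
decisions = ([("A", True)] * 90 + [("A", False)] * 10
             + [("B", True)] * 60 + [("B", False)] * 40)
rates = pass_rates(decisions)
print(rates)                   # {'A': 0.9, 'B': 0.6}
print(disparity_flags(rates))  # [('A', 'B')]
```

A flagged pair does not prove discrimination on its own, but it tells human reviewers exactly where to look before the model keeps compounding the error.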
New forms of fraud
Machines are great at spotting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models learn from data patterns and therefore assume that future activity will follow those same patterns, or at least a constant rate of change. This leaves open the possibility that novel attacks will succeed simply because the system has never seen them during training.
Overlaying a fraud review team on machine learning ensures that new fraud schemes are identified and flagged, and that updated data is fed back into the system. Human fraud experts can flag transactions that initially passed identity verification checks but are suspected of being fraudulent, and provide that data to the business for closer analysis. The ML system then encodes that knowledge and adjusts its algorithms accordingly.
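The feedback loop above can be sketched as a small merge step: analyst verdicts override the model's original calls, and the corrected records join the labeled set used for the next retraining cycle. All names and data structures here are illustrative assumptions, not a specific vendor's pipeline.

```python
# Sketch: fold analyst corrections into the training set so the next
# model version learns from fraud the current model missed.

def apply_analyst_feedback(training_data, model_decisions, analyst_flags):
    """Merge reviewed decisions into the labeled training set.

    model_decisions: {txn_id: (features, model_said_fraud)}
    analyst_flags:   txn_ids analysts judged fraudulent despite passing
    """
    updated = list(training_data)
    for txn_id, (features, model_said_fraud) in model_decisions.items():
        # The analyst verdict overrides the model's original call.
        label = txn_id in analyst_flags or model_said_fraud
        updated.append((features, label))
    return updated

training = [((0, 0), False)]
decisions = {
    "t1": ((1, 0), False),   # passed checks, but an analyst flags it
    "t2": ((0, 1), False),   # passed and confirmed legitimate
}
new_training = apply_analyst_feedback(training, decisions, {"t1"})
print(new_training)
```

The key property is that "t1" enters the next training set labeled as fraud even though the model passed it, which is exactly how human review closes the novelty blind spot.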
Understand and explain decision making
One of the biggest criticisms of machine learning is its lack of transparency, which is a basic principle in identity verification. Companies must be able to explain how and why certain decisions are made, and to share with regulators information about each stage of the process and the customer journey. Lack of transparency can also foster mistrust among users.
Most ML systems provide a simple pass or fail score. Without transparency into the process behind a decision, that decision can be difficult to justify when regulators come asking. Continuous data feedback from ML systems can help companies understand and explain why decisions were made, and make informed adjustments to identity verification processes.
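One common way to go beyond a bare pass/fail score is to report each feature's contribution to the decision. The sketch below assumes a simple linear risk score with hypothetical weights and feature names; it stands in for the richer attribution methods (such as SHAP-style explanations) that real systems use.

```python
# Sketch: return an explainable decision instead of a bare pass/fail,
# listing which factors drove the score. Weights are placeholders.

def explain_decision(weights, features, threshold=0.5):
    """Score a transaction and rank the factors behind the decision."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "decision": "fail" if score >= threshold else "pass",
        "score": round(score, 3),
        # Largest contributors first: the "why" behind the decision.
        "top_factors": sorted(contributions, key=contributions.get,
                              reverse=True),
    }

weights = {"doc_mismatch": 0.4, "new_device": 0.2, "velocity": 0.3}
features = {"doc_mismatch": 1, "new_device": 1, "velocity": 0.5}
report = explain_decision(weights, features)
print(report["decision"])     # fail
print(report["top_factors"])  # ['doc_mismatch', 'new_device', 'velocity']
```

A report like this gives a compliance team something concrete to show a regulator: not just that the applicant failed, but that a document mismatch was the dominant factor.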
There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it is clear that machines alone are not enough to verify identities at scale without adding risk. The power of machine learning is best harnessed alongside human expertise and data transparency to make decisions that help businesses retain customers and grow.
Christina Luttrell is the Executive Director of GBG Americas, comprising Acuant and IDology.