By Kacy Zurkus, Card Not Present
With every technology product seemingly touting artificial intelligence (AI) as part of its solution, organizations can sometimes fall prey to the glamour of marketing promises only to find themselves disappointed when the solutions don’t really deliver what they expected.
Some even have the misguided perception that AI means you no longer need humans, which Tricia Phillips, senior vice president of product and strategy at Kount, said is wrong. “What we see in the market is that providers actually have an army of people in the background who are doing the job of a fraud analyst.”
These humans analyze transactions, feed back outcome data and look for trends that the model hasn’t picked up. What some may not realize about AI is that it takes humans to feed the model, and models take time to build out.
“Threats change so quickly that you need AI to keep pace, but you still have to have humans who know how to make sense of emerging trends,” said Phillips. “You also have to have humans pay attention to business outcomes and to drive those business outcomes.”
A model may be able to do a good job of detecting fraud, but it probably isn’t going to detect that you’re spending millions of dollars on a marketing campaign this weekend. With a focus on driving positive customer experience, the goal is to have lower false positives during the campaign—a difficult proposition if your AI-based system thinks the spike in traffic is an anomaly that indicates fraud.
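To make the point concrete, here is a minimal, hypothetical sketch of the failure mode described above: a naive volume-based anomaly check that flags any hour of unusually high traffic, with no way to know a marketing campaign is running. All of the numbers, the function names and the z-score threshold are invented for illustration, not drawn from any vendor’s system.

```python
# Hypothetical sketch: a naive volume-based anomaly check that mistakes a
# legitimate marketing-campaign spike for fraud. All numbers are invented.
import statistics

# Assumed baseline: typical hourly order counts during a normal week.
baseline = [120, 115, 130, 125, 118, 122, 127, 119]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def looks_anomalous(orders_this_hour: int, z_threshold: float = 3.0) -> bool:
    """Flag any hour whose volume sits more than z_threshold standard
    deviations above the historical mean."""
    z = (orders_this_hour - mean) / stdev
    return z > z_threshold

# A marketing campaign triples traffic: legitimate, but statistically extreme,
# so the purely statistical check flags it as suspicious.
campaign_hour = 380
print(looks_anomalous(campaign_hour))  # True: the spike is flagged
```

Nothing in the detector distinguishes a fraud ring from a successful promotion; that business context has to come from a human or be fed into the model explicitly.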
“There’s business context that a model isn’t going to have,” Phillips said.
So, humans are critical and necessary in conjunction with any AI tool, but what should e-merchants be looking for with regard to AI technologies in fraud solutions?
Speed and Detection with Zero Friction
Ultimately, AI is a means to an end, which is why when it comes to identity verification, e-merchants should be looking for speed, fraud detection, and a seamless customer experience, according to Robert Prigge, president of Jumio.
“Generally speaking, they want a fraud detection solution that ferrets out the bad actors, but makes the experience for their legitimate users simple, intuitive and pain free,” Prigge said.
AI technologies can help on each of these fronts. Prigge notes that Jumio, for instance, continuously refines its AI models so they can extract key data from pictures of identification documents and verify their authenticity by quickly comparing images.
When it comes to fraud detection, “AI helps us understand the unique fonts, pictures and security features of thousands of government-issued ID types, making it easier to spot characteristics of fraudulent IDs,” Prigge said. “If an ID document has been manipulated or changed and does not conform to the pattern, our deep-learning algorithms flag it for closer review.”
Solutions that leverage machine learning to assign risk levels based on known risk factors are able to identify riskier transactions, which are then escalated for augmented-intelligence review. “With this approach, we found that 1 percent of ID documents that meet our risk scoring threshold account for 15 percent of known fraud,” Prigge said.
“This risk engine is streamlining the experience for good customers, but increasing the number of hurdles for would-be fraudsters. As a result, this can help e-merchants reduce their online abandonment rates and increase new account conversions.”
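The routing pattern Prigge describes can be sketched roughly as below. This is a hypothetical illustration, assuming a simple weighted-factor score and a fixed threshold; the factor names, weights and threshold are invented, and a production risk engine would use a trained model rather than a hand-built linear score.

```python
# Hypothetical sketch of threshold-based risk routing: low-risk documents
# pass straight through, while those above the threshold get extra scrutiny
# (augmented-intelligence / human review). All weights here are invented.
RISK_THRESHOLD = 0.8

# Invented risk factors and weights for illustration only.
FACTOR_WEIGHTS = {
    "font_mismatch": 0.5,
    "missing_hologram": 0.4,
    "photo_tampering": 0.6,
    "expired_document": 0.2,
}

def risk_score(observed_factors: set) -> float:
    """Sum the weights of the observed risk factors, capped at 1.0."""
    return min(1.0, sum(FACTOR_WEIGHTS.get(f, 0.0) for f in observed_factors))

def route(observed_factors: set) -> str:
    """Route a document based on its risk score."""
    if risk_score(observed_factors) >= RISK_THRESHOLD:
        return "manual_review"  # extra hurdles for would-be fraudsters
    return "auto_approve"       # streamlined path for good customers

print(route({"expired_document"}))                  # auto_approve
print(route({"font_mismatch", "photo_tampering"}))  # manual_review
```

The design goal matches the quote: most good customers never see the extra hurdles, while the small high-risk slice absorbs the added friction.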
Factors Challenging AI
In fraud prevention, companies that rely purely on AI can be disappointed by its inability to reliably accomplish the tasks it is designed for. Whether the product is a complete fraud case management system, an email verification tool, a device ID tool or something else, using AI to automate some facet of fraud prevention brings challenges. According to Prigge, a number of factors complicate AI-based ID authentication:
- Need for Big Data: Effective AI algorithms require huge datasets for each type of ID you wish to verify so the model can learn to distinguish legitimate IDs from manipulated ones.
- Number of ID Types and Subtypes: There are thousands of government-issued ID types and subtypes, and AI models must be able to support them all.
- Blurriness, Bad Lighting and Glare: Picture quality depends on the user’s ability to capture a good image with their device, and a variety of environmental factors can degrade the input image enough to challenge even the best AI algorithms.
- The Challenge of Omnichannel: Verifying customers across devices presents additional challenges for AI models because the camera quality of the latest iPhone is dramatically different from the camera quality of a five-year-old laptop.
- The Selfie Requirement: Many online companies require a selfie to ensure the picture in the selfie matches the picture in the government-issued ID. AI models are still perfecting this “match,” especially when ID photos are outdated or the person’s physical characteristics have changed, such as weight gain, hair loss, glasses, hats or sunglasses.
- Liveness Detection: New types of fraud, including spoofing attacks on selfies, can often stump an AI model.
Avoid the Hype
There is undoubtedly a lot of hype circulating around AI, and it can be misleading. The reality is less glamorous: AI is imperfect in most applications, Prigge said. It requires humans to feed and train it. It can be a huge help, but it is still a work in progress.
“Any AI solution for us in credit and lending and financial fraud detection has to have rigorous explainability. You have to know why the model is making the decisions it's making,” said Jay Budzik, CTO of ZestFinance.
“Most explainability techniques can analyze the effect on a model of only a single variable or small segment of the model at a time. That one-at-a-time approach is not going to work. With hundreds of variables in an ML model, that kind of brute force work can drag on for hours or days.”
Capturing the interactions between and among multiple variables is something most explainability techniques are currently incapable of doing. Without that capability, though, Budzik said, “your explainer will compromise on accuracy and consistency.”
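Budzik’s point can be shown with a toy example. The sketch below is hypothetical and not based on any ZestFinance technique: it uses a two-input model whose output depends purely on an interaction (exclusive-or), so changing either input alone from a baseline suggests a large effect, yet the additive, one-at-a-time explanation disagrees with the actual joint effect.

```python
# Hypothetical sketch of why one-variable-at-a-time explanations miss
# interactions. The model is a toy with a pure two-variable interaction:
# neither input matters on its own, only jointly.
def model(a: int, b: int) -> int:
    """Toy decision model: produces a signal only when the inputs disagree."""
    return a ^ b  # XOR: a pure interaction between a and b

baseline = (0, 0)

def one_at_a_time_effect(index: int, point: tuple) -> int:
    """Move one input from its baseline value while holding the other fixed,
    and measure the change in the model's output."""
    probe = list(baseline)
    probe[index] = point[index]
    return model(*probe) - model(*baseline)

point = (1, 1)
effects = [one_at_a_time_effect(i, point) for i in range(2)]

print(sum(effects))                       # 2: the additive explanation
print(model(*point) - model(*baseline))   # 0: the actual joint effect
```

The single-variable probes attribute a combined effect of 2, but moving both inputs together changes the output by 0. An explainer that cannot capture the interaction reports importances that do not add up to the model’s real behavior, which is the accuracy and consistency compromise Budzik describes.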