Behavioural biometrics meet machine learning for fraud prevention
Every year opens with business overviews and summaries. This time, some earth-shattering statistics revealed just how vulnerable companies are to cyber-threats. Prolific phishing attacks, data leaks and scams have taken their toll, and history has shown more than once that a single fraud-ridden episode can spiral into a massive crisis if not handled properly. In the UK alone, fraud was reported to have hit £1 billion in 2017 for the first time since 2011. Three months into the year, the perils of fraud remain a burning matter, and in the face of the looming threat the question is not whether a company is likely to fall victim to fraud, but in what way fraud will affect it if it hasn't already.

The urgent call for an effective fraud prevention solution paved the way for artificial intelligence to become a saviour, which in turn spurred the verification of its effectiveness and its ongoing adaptation to the ever-changing challenges of fraud prevention. Today, AI-powered user profiling and behavioural biometrics are poised to have a transformational impact on anti-fraud defences.

Learning algorithms can recognise patterns in data and discern fraudsters from legitimate clients by correlating thousands of pieces of information that would otherwise most likely go unnoticed by the human eye (or, more accurately, the human brain). The algorithms also inspect the user's hardware to build as accurate a profile as possible of the person initiating the operation. Depending on the severity of the discovered “fraud-like” patterns, a transaction can be accepted, blocked or handed over for manual review.
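The accept/block/review routing described above can be sketched as a simple decision rule on top of a classifier's score. This is an illustrative sketch only, not Nethone's actual pipeline; the thresholds are hypothetical.

```python
# Hypothetical sketch: route a transaction based on a fraud-probability
# score produced by a trained binary classifier. Thresholds are assumptions.
def route_transaction(fraud_probability: float,
                      block_threshold: float = 0.9,
                      review_threshold: float = 0.5) -> str:
    """Map a model score to one of the three outcomes described above."""
    if fraud_probability >= block_threshold:
        return "block"           # severe fraud-like pattern: reject outright
    if fraud_probability >= review_threshold:
        return "manual_review"   # borderline case: hand over to an analyst
    return "accept"              # low risk: let the transaction through
```

In practice the two thresholds are tuned against the business cost of false positives (blocked legitimate clients) versus false negatives (accepted fraud).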

Delving deeper into behavioural biometrics: everyone has habits around typing and around how they use a mouse, keyboard, smartphone or touchscreen. These biometrics are unconscious but measurable. Models learn those patterns of behaviour, along with their particular traits, and account for the slight changes that are bound to occur over time. As a result, they provide rich data on how an individual interacts with their devices. The devices themselves have various embedded sensors that allow further investigation. The data they collect show how the devices are used – from the force with which the user taps the screen to their scrolling patterns. Data from the gyroscope and accelerometer, for instance, enable the system to precisely determine the position in which the device is held, the way it is moved, rotated, and so on.
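As a minimal illustration of how raw sensor readings become profile features, the sketch below summarises accelerometer samples into a few statistics a profiling model might consume. The event format, feature names and sample values are all assumptions for illustration.

```python
import statistics

def accel_features(samples):
    """Summarise (x, y, z) accelerometer readings into simple features.

    samples: list of (x, y, z) acceleration tuples in m/s², in time order.
    """
    xs, ys, zs = zip(*samples)
    # Overall magnitude per reading; its variability hints at hand movement.
    magnitudes = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    return {
        "mean_x": statistics.mean(xs),
        "mean_y": statistics.mean(ys),
        "mean_z": statistics.mean(zs),          # ~9.8 when held flat and still
        "magnitude_std": statistics.pstdev(magnitudes),
    }

# Made-up readings from a phone held roughly flat
readings = [(0.1, 0.2, 9.8), (0.2, 0.1, 9.7), (0.15, 0.25, 9.9)]
features = accel_features(readings)
```

Real systems would add many more features (orientation from the gyroscope, tap pressure, scroll velocity) and feed them to the same kind of classifier used for transaction scoring.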

Interestingly, even the most minute pieces of information can indicate the likelihood of fraud taking place. Fraud attempts can be halted before they take shape, provided behavioural data is collected while the user is typing their password on a keyboard. User identification based on keystroke dynamics relies on detailed timing information describing when each particular key was pressed and released. Binary classification models trained by data scientists can estimate whether a sample was produced by the same user by checking whether the new behavioural pattern matches those collected upon registration. The precision of the solution is such that models can now tell the gender of the user with 95 percent accuracy.
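The timing information described above is conventionally reduced to two kinds of features: dwell time (how long a key is held down) and flight time (the gap between releasing one key and pressing the next). The sketch below shows that extraction; the event format and timestamps are assumptions for illustration.

```python
def keystroke_features(events):
    """Extract keystroke-dynamics timing features.

    events: list of (key, press_time_ms, release_time_ms) tuples,
            in the order the keys were typed.
    Returns (dwell_times, flight_times) in milliseconds.
    """
    # Dwell time: press-to-release duration of each key
    dwell = [release - press for _, press, release in events]
    # Flight time: release of one key to press of the next
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# "cat" typed with made-up millisecond timestamps
events = [("c", 0, 95), ("a", 180, 260), ("t", 340, 430)]
dwell, flight = keystroke_features(events)
# dwell -> [95, 80, 90]; flight -> [85, 80]
```

A binary classifier then compares such feature vectors against the distribution recorded at registration and scores how likely they are to come from the same person.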

There are more tell-tale traces of fraud: perfectly regular, rhythmical typing, for instance, will also raise suspicion, as it may be proof of a bot at work. Importantly, despite the apparent complexity of the process, users are not affected by this solution in the slightest. The security of e-accounts is simply enhanced by a seamless verification system, without compromising the user experience. Also, unlike with traditional biometrics, users do not have to share any sensitive information (fingerprints, iris scans etc.) with institutions, as the solution merely reinforces the standard verification system (typically a password) by examining their behaviour.
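The "too regular to be human" signal can be captured with a simple statistic: the coefficient of variation of the gaps between keystrokes. A very low value means near-metronomic input. This is an illustrative heuristic only; the threshold and sample data are assumptions, and a production system would combine many such signals in a model rather than rely on one rule.

```python
import statistics

def looks_scripted(intervals_ms, cv_threshold=0.05):
    """Flag input whose inter-key intervals are suspiciously uniform.

    intervals_ms: gaps between consecutive keystrokes, in milliseconds.
    cv_threshold: assumed cutoff on the coefficient of variation.
    """
    mean = statistics.mean(intervals_ms)
    # Coefficient of variation: spread relative to the average gap.
    cv = statistics.pstdev(intervals_ms) / mean
    return cv < cv_threshold

human = [120, 95, 210, 140, 88]   # irregular, human-like spacing
bot = [100, 100, 100, 100, 100]   # metronomic, bot-like spacing
```

Humans are reliably irregular, so scripted input that replays keys at fixed intervals stands out sharply on this measure.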

To boost fraud prevention, there is no need to feed a model with as much data as one can get; rather, the aim is to leverage methods and techniques that deliver maximum effectiveness at different stages of the process and across its different aspects. Additionally, given how rapidly organisations change, it bears highlighting that models should be bespoke – created individually, per goal, not per industry. The age and sheer volume of data do not in themselves guarantee its applicability in today's dynamically evolving business landscape. Ultimately, the success of the solution lies not in extracting as much data as possible, but in selecting contextually relevant information and understanding it properly.

Is this the end of the evolution of fraud prevention tools? Certainly not. Fraudsters, who are also AI-literate, are constantly upgrading their skills and techniques; it is therefore imperative for data scientists to follow suit and out-innovate them. The goal remains the same, though: keep winning this seemingly endless tug of war.

Contributed by Adam Szymański, CTO, Nethone.

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.