Artificial Intelligence - is it the answer for identity management?

Lee Painter says a rise in security breaches due to abuse of access has put the spotlight on Identity and Access Management. So how might Artificial Intelligence shape its future?

Lee Painter, CEO, Hypersocket Software

Identity and Access Management (IAM) is already a key weapon in the security arsenal of many organisations as a way to mitigate data breaches and manage the additional risks that come with remote working and Bring Your Own Device (BYOD). And the take-up of IAM solutions is set to gain even more momentum. A recent MarketsandMarkets research report predicted that the global market for these solutions will grow to US$12.8 billion (£9.7 billion) by 2020, up from US$7.2 billion (£5.4 billion) in 2015.

IAM solutions enable a network or system to authenticate the identity of a user against a set of predefined credentials. Depending on the system being accessed, these can range from a simple username and password to digital certificates, physical tokens, biometric identifiers (such as fingerprints, iris scans, or facial recognition), or a combination of these factors.

Traditionally, the strength of the authentication required has depended on the sensitivity of the material being accessed, as well as the impact should these resources fall into unauthorised hands. Public information might require little or no authentication, while proprietary or classified data or accounts with administrative privileges will require stronger authentication, preferably using multiple factors.
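The tiering described above can be sketched in a few lines of code. This is an illustrative sketch only: the sensitivity labels, factor counts, and function names are invented for this example, not taken from any particular IAM product.

```python
# Map resource sensitivity to the number of independent authentication
# factors demanded. Labels and counts are hypothetical examples.
SENSITIVITY_FACTORS = {
    "public": 0,        # public information: little or no authentication
    "internal": 1,      # e.g. username and password
    "confidential": 2,  # add a second factor, such as a token
    "admin": 3,         # e.g. password + token + biometric
}

def factors_required(sensitivity: str) -> int:
    """Return how many authentication factors a resource demands."""
    return SENSITIVITY_FACTORS[sensitivity]

def may_access(sensitivity: str, factors_presented: int) -> bool:
    """Grant access only when enough independent factors were verified."""
    return factors_presented >= factors_required(sensitivity)
```

The point of the table-driven layout is that the policy (how strong authentication must be) stays separate from the mechanism (which factors were actually checked).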

While the above still holds true, recent thinking around best practice in IAM has moved on. The focus has shifted from simply authenticating identity towards controlling authorisation and access, working on the principle of least privilege. In practice, what that means is that every user – whether an individual, a device, a programme or a process – is given access only to the resources needed to fulfil their role.

Least privilege is an approach that acknowledges how serious the insider threat is to businesses, and that simply establishing one's identity as an employee with the right credentials should not mean unfettered access to company systems.

While this is sound in principle, in reality least privilege – deciding who should have access to what, and when – can be difficult for organisations to implement, and gaps in implementation leave systems vulnerable. One issue with least privilege in IAM is that users are usually given access privileges based on their role in an organisation, but employees rarely fit neatly into single roles. They may need special one-time access, or each person fulfilling the same role might need slightly different types of access. Another challenge is that some organisations fail to extend least privilege right across the organisation, neglecting to monitor those classified as privileged users, such as systems administrators.
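The mix of role-based permissions and one-time exceptions described above could be modelled along these lines. The role names, resources, and the time-limited grant mechanism are all hypothetical, invented purely to illustrate the shape of such a policy.

```python
from datetime import datetime, timedelta

# Hypothetical role-to-resource mapping for illustration.
ROLE_PERMISSIONS = {
    "accounts": {"ledger", "invoices"},
    "support": {"ticketing", "knowledge-base"},
}

class AccessPolicy:
    """Least privilege: a user gets their role's permissions plus any
    explicit, time-limited one-off grants -- and nothing else."""

    def __init__(self):
        # (user, resource) -> expiry time of a temporary grant
        self.one_time_grants = {}

    def grant_temporarily(self, user, resource, hours=1):
        """Record a special one-time grant that expires automatically."""
        self.one_time_grants[(user, resource)] = (
            datetime.utcnow() + timedelta(hours=hours)
        )

    def is_allowed(self, user, role, resource, now=None):
        """Allow only role permissions or unexpired one-off grants."""
        now = now or datetime.utcnow()
        if resource in ROLE_PERMISSIONS.get(role, set()):
            return True
        expiry = self.one_time_grants.get((user, resource))
        return expiry is not None and now < expiry
```

Because exceptions expire on their own, the special access an employee needed once does not silently accumulate into the over-privileged accounts the paragraph above warns about.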

But how might Artificial Intelligence (AI) help? So often with data breaches it's not the management of the identity that causes the breach, but the transfer of credentials to some unknown party. While least privilege access control does afford some protection here, there are clearly shortfalls. Identity management and access control have always been two sides of the same coin, but in the future AI will be the glue that binds them together to much greater effect.

Moving on from biometric passwords, it's not difficult to conceive of AI that could identify a user even more securely by using sight and sound. So, rather than checking against pre-defined credentials, a machine would be able to understand whether a person was who they claimed to be using visual and aural cues, learn when to grant access on that basis, and act accordingly.

AI also offers the potential for intelligent, real-time security to implement fine-grained access control. Just because a user proved who they were at log-on two minutes ago, should the system continue to believe they are who they say they are? Visual and voice recognition could obviously still play a part here, constantly monitoring users as they move around the network. But in addition, behavioural factors and real-time risk analysis can also come into play.

Working within a user's access permissions, AI systems could monitor in real-time whether a user is accessing or trying to access a part of the system they never normally would, or suddenly downloading more documents than they generally would. The rhythm of a user's keyboard and mouse movements could be observed to identify irregular or unusual patterns. Taking this a step further, it's not inconceivable that insights from an individual's online identity and activity – their social profile, groups they are part of, people they follow, websites they visit – could be used to determine a risk score. Drawing this data together, actions taken by the AI system could range from an alert being triggered, to specific areas of a corporate system being switched off for a user, to access being instantly revoked.
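The kind of risk scoring and graduated response described above might be sketched as follows. The signal names, weights, and thresholds here are invented for illustration; a real system would learn them from observed behaviour rather than hard-code them.

```python
# Hypothetical anomaly signals, each scored 0.0 (normal) to 1.0 (highly
# unusual), with invented weights reflecting how alarming each one is.
SIGNAL_WEIGHTS = {
    "unusual_resource": 0.4,    # accessing areas the user never touches
    "bulk_download": 0.3,       # far more documents than usual
    "atypical_typing": 0.2,     # keyboard/mouse rhythm looks wrong
    "risky_online_profile": 0.1,  # external identity/activity indicators
}

def risk_score(signals: dict) -> float:
    """Combine observed anomaly signals into a single weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items())

def respond(score: float) -> str:
    """Escalate the response as the risk score rises."""
    if score >= 0.7:
        return "revoke access"
    if score >= 0.4:
        return "restrict sensitive areas"
    if score >= 0.2:
        return "raise alert"
    return "allow"
```

The escalation ladder mirrors the article's range of actions: an alert first, then switching off specific areas of the system, then instant revocation.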

In future, the truly intelligent system will know, understand, monitor and act, drawing on whatever clues it requires about a user. Identity and credentials will not be separate elements. An individual's identity will become their credentials. That should be the ultimate goal of any AI system.

Contributed by Lee Painter, CEO, Hypersocket Software