Neither automated analysis nor the manual reading of Twitter posts is a useful practice for HR to engage in.
The word ‘psychopath’ is probably misused as often as ‘hacker’. Spurred on by somewhat skewed TV representations of psychopathy, it is usually associated with crazed killers or other deranged characters. But like most personality traits, psychopathy sits on a sliding scale, and chances are you know someone who could legitimately be labelled psychopathic using recognised diagnostic tests.
Recently, there have been a number of popular-science books publicising this area of mental health. At about the same time as I came across Jon Ronson's The Psychopath Test, I also became aware of the research being conducted by my friend and 44Con co-conspirator Chris Sumner.
Sumner and his colleagues at the Online Privacy Foundation (OPF) have been researching what people might inadvertently reveal about themselves through their online activity, particularly on social networks, with the idea of measuring their personality based on what they post. There are several recognised ‘personality profiles’, such as the Big Five and the Dark Triad, so, in theory at least, you can predict people's scores from what they post.
In many HR departments, this sort of thing already goes on, albeit in a far less scientific fashion. Reviewing the Twitter, Facebook and LinkedIn profiles of potential employees (and, indeed, existing ones) is common practice. It's very easy to be glib and casual in online forums such as Twitter, and this can lead employers to make poor decisions that fail to take into account the context of such communications.
The OPF research involved 2,927 volunteers (myself included), who first completed Big Five and Dark Triad questionnaires and then allowed an app to access their Twitter timeline. Over several months, the app collected statistics on the users' activity, and then the number crunching began, to see whether anything in the online messages correlated with personality traits.
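To make the "number crunching" concrete, here is a minimal sketch of the kind of test involved: correlating a single text-derived feature with questionnaire scores. The feature, the data and the helper function are all hypothetical illustrations, not the OPF's actual methodology, which used many features and proper statistical controls.

```python
# Toy illustration: does a tweet-derived feature correlate with a
# self-reported trait score? All data here is made up for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-user figures: fraction of tweets containing swear
# words, and that user's psychopathy score from the questionnaire.
swear_rate = [0.02, 0.10, 0.05, 0.30, 0.01, 0.15]
psychopathy = [1.2, 2.5, 1.8, 3.9, 1.0, 2.8]

r = pearson(swear_rate, psychopathy)
print(f"r = {r:.2f}")
```

Even a strong correlation in a sample like this says little on its own; it is the accumulation (or absence) of such correlations across many features and thousands of users that the study was assessing.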
The results are detailed in the research paper (http://preview.tinyurl.com/muse188-3), but generally speaking, the automated analysis couldn't determine whether or not a given user was a psychopath.
To me, the research and publication approach were as interesting as the results. Too often in infosec we see whitepapers that are poorly disguised marketing pitches or that lack proper scientific rigour. It was also good to see the reporting of imperfect results – when was the last time you read a paper from an anti-virus or IDS vendor discussing how their new approach doesn't fully work as hoped?
The most important result is that automated analysis is not yet an effective way to assess an individual's personality. This is particularly important because there is a trend towards applying arguably pseudo-scientific profiling techniques to employees and job applicants; the OPF research provides strong evidence that this practice is, with current technology at least, inaccurate. Interestingly, it turns out that automated analysis may be an effective tool for measuring aggregate traits of a larger sample – comparing the traits of different online communities, for example.
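The gap between individual and aggregate measurement has a simple statistical explanation: a per-user estimate that is swamped by noise becomes useful once it is averaged over a large group, because the noise cancels while the group difference does not. The sketch below illustrates this with entirely made-up numbers; it is not a model of the OPF's data.

```python
# Illustrative only: why a noisy per-user estimator can still separate
# two communities in aggregate. All scores and noise levels are invented.
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def noisy_estimate(true_score):
    # Per-user estimate with noise (sd 2.0) far larger than the
    # 0.5-point gap between the groups - useless for any one person.
    return true_score + random.gauss(0, 2.0)

group_a = [1.5] * 1000   # true mean trait score, community A
group_b = [2.0] * 1000   # true mean trait score, community B

est_a = sum(noisy_estimate(s) for s in group_a) / len(group_a)
est_b = sum(noisy_estimate(s) for s in group_b) / len(group_b)

# Over 1,000 users the standard error shrinks to ~0.06, so the
# half-point gap between the communities shows up clearly.
print(f"A: {est_a:.2f}  B: {est_b:.2f}")
```

The same logic is why the research can be simultaneously bad news for HR screening of individuals and promising for comparing communities.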
As with any evidence, it's important to ensure that any decisions made using such analysis are based on a clear understanding of its accuracy and weaknesses. I highly recommend keeping an eye on the OPF's work here to avoid making costly mistakes, whether by ignoring the potential of such analysis or by following it blindly.
Ironically, even if automated analysis of online activity could identify psychopaths, you might find yourself actively seeking them out for employment, as most of them are so-called ‘functioning psychopaths’.
In a wide-ranging survey of potential psychopaths, psychologist Kevin Dutton found that many successful executives, lawyers, surgeons and soldiers scored highly on the test.
So, if there is a psychopath in the room, it might not be time to reach for the panic button just yet.