AI marketing is 'bullshit' says Eugene Kaspersky; potential biases need to be addressed now

News by Tony Morbin

"Anyone who is promoting their product as true AI is just talking bullshit," Eugene Kaspersky told delegates, via video, at Kaspersky's Next Conference in Lisbon on Monday.

The first keynote speaker, Kriti Sharma, founder of AI for Good, immediately contradicted that statement, adding the qualification: "AI for general-purpose use is different to the issue of specific-purpose AI. But while general-purpose AI is further away, technology is progressing fast."

How fast is also a matter of conjecture, with Kaspersky saying "not in our grandchildren's lifetime" - and maybe not for 200 or even a thousand years. In a later session, Alexey Malanov, a malware expert at Kaspersky, suggested the consensus is more like 45 years, with those in the US saying 70 years and those in Asia saying 30 years.

Sharma, who built her first robot at 15, highlighted the issue of bias in AI by citing a response from her new robot, which was built to have meaningful conversations. Talking with children, it correctly identified Theresa May as UK Prime Minister, but when asked who the US President is, it replied: "President of the US is Donald J Trump, god help us." While one delegate suggested this showed true intelligence, Sharma herself acknowledged that it showed a bias toward the news sources trusted by its creator.

Of course, algorithms are already being used all the time to make decisions about who we are and what we want, and Sharma's concern is that we risk automating biases in public systems at scale. "We are at a moment in time (when we can consider) how to protect society to avoid amplifying the negativity of the past, ie, don't create more inequality." It was noted that AI is not the first technology to face ethical issues - gene editing, for example, has gone through this. But currently it is a national race to decide who will lead AI, whereas we need to be thinking about what is best for humanity, and so it should be globally coordinated.

Many a true word is spoken in jest, so when Sharma joked, "When the robots do take over, we want to make sure they are nice," there was recognition that there are real concerns that need to be addressed.

As a British Asian woman, Sharma is well placed to notice discrimination in AI, and she drew attention to how virtually all digital assistants, such as Siri or Alexa, are female, handling trivial decisions such as turning on the lights, whereas Watson and Einstein, the male versions, are making big economic decisions. This normalising of inferior roles for females is further exacerbated not just by the assistant being ordered about and carrying out menial roles, but when the female-voiced robot passively deals with abuse, presenting a subservient persona as normal for women. One man asked whether she wanted it to respond aggressively; Sharma replied no, but she did want abuse to be challenged: if abused once, the persona would say that the abuser must do better; if it happened a second time, they would get a final warning; and if it happened a third time, it would stop working. "It's about creating a social protocol that reflects the values of society - it's not protecting the 'feelings' of the machine, but about avoiding teaching children and adults that it's OK to shout and demand," explained Sharma.
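
To make the escalation concrete, here is a minimal sketch of that three-step social protocol in Python. The AssistantPersona class, its responses and the toy abuse check are all hypothetical illustrations, not code from any real assistant.

```python
class AssistantPersona:
    """A hypothetical persona that challenges abuse in three escalating steps."""

    def __init__(self):
        self.abuse_count = 0
        self.active = True

    def respond(self, user_input: str) -> str:
        if not self.active:
            return ""  # third strike reached earlier: the persona has stopped working
        if self._is_abusive(user_input):
            self.abuse_count += 1
            if self.abuse_count == 1:
                return "That language isn't acceptable - you can do better."
            if self.abuse_count == 2:
                return "Final warning: I will stop responding if this continues."
            self.active = False
            return "I am switching off now."
        return self._answer(user_input)

    def _is_abusive(self, text: str) -> bool:
        # Toy keyword check; a real system would use a trained classifier.
        return any(word in text.lower() for word in ("stupid", "shut up"))

    def _answer(self, text: str) -> str:
        # Stand-in for the assistant's normal behaviour.
        return "Happy to help with that."
```

The point of the sketch is that the machine's state, not its "feelings", carries the social protocol: each strike changes how it responds, rather than letting abuse pass as normal.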

About 12 percent of those working in the AI industry are women - about the same as cyber-security was reporting via ISC(2) prior to reclassification to include soft skills (after which the figures doubled). The concern is that with only one viewpoint, and without the right questions being asked at the right time, historic biases will be reflected - such as recent reports of health apps misdiagnosing heart attacks in women, but not in men.

Sharma cited how she personally had changed the name she used and her image online, replacing it with a cartoon cat, and found her credentials were no longer questioned; thus, she says, the cartoon "had higher credibility than a girl with two computer science degrees." Another example is the higher rejection rate of code submitted to open-source projects when it is identified as coming from a woman.

An interesting example of the benefits of a non-gendered AI is that people appear more willing to be open when talking to what is clearly a non-judgemental machine. This particularly applied when handling sensitive subjects such as domestic abuse, sexual health and mental health. Across the 350,000 consultations conducted, Sharma reports that people were 45 percent more likely to seek help earlier.

A later discussion on robotics suggested a possible cause: people treat machines as closed circuits - particularly when talking to a physical robot - even though they are connected, with data fed to databases controlled by the creator or corporate owners.

It's often not just women and ethnic minorities who are left out of the discussion about what should be included in AI systems, but everyone who is not a technologist - and that includes lawyers, teachers, politicians, plumbers, healthcare staff, carers, and people in manufacturing, sales or retail - all of us.

Sharma sought to increase the diversity of input into her projects and wanted to know why children did not seem interested in getting involved in AI, given that it is such a fascinating area. One response was, "I didn't think I was smart enough," which suggested it helps to have other role models and to show how many bugs and problems developers work through along the way. A second was that they didn't know where to begin: students may know more about tech than their teachers, who are willing but don't have the means or support. A third was that they would rather be more creative and become something like a YouTube star. By taking the code writing out of the equation and getting people to use AI focused on the human skills of creativity and problem solving, Sharma says she got "awesome results", with 5,000 students coming up with incredible solutions, such as voicing surroundings for the visually impaired, or using image recognition to identify and observe climate change, including retreating ice. We need groups of very different people to ask the right questions at the right time, and a human rights-based approach.

Another example was the need to shut down Amazon's recruitment AI, which was found to be biased against women, only offering them lower-paid jobs compared with men.

One question asked whether we could use AI to help overcome bias. Sharma responded that we can use technology to at least flag the issues. Just as we test for usability or security, we should test AI systems for bias - whether sexist or racist - and examine what data set each is built on. And then we need to keep testing, as a system may become biased over time because of the information it uses.
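
As a rough illustration of the kind of recurring test Sharma is calling for, the sketch below checks a model's selection rates across demographic groups against the "four-fifths" disparate-impact rule. The predictions, group labels and threshold are all hypothetical; this is one possible bias check, not a method described at the conference.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(predictions, groups, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the best-off group's rate (the four-fifths rule)."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

# Toy hiring-model outputs: 1 = shortlisted, 0 = rejected.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]
print(disparate_impact_flags(preds, groups))
# {'men': False, 'women': True} - women's rate is below 80% of men's
```

Because a live system keeps learning from new data, a check like this only means something if it is re-run on a schedule - which is Sharma's point about testing, and then continuing to test.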
