Digital Transformation Expo: AI doesn't understand context

AI algorithms don't see the world as we see it, and this has created either unnecessary panic or outlandish claims, says mathematician Hannah Fry

"Whenever I come across some new claim, a new piece of technology, I always ask myself three questions: How do you verify it? How does it work? What happens when it doesn’t?" said Hannah Fry, mathematician, author, lecturer, television presenter, podcaster and public speaker on AI.

"And I think all of us should be asking those questions, and much more," she said, sharing her insights at the Digital Transformation Expo Europe in London.

The mathematics professor, who has given presentations and talks across the world on the development and prospects of artificial intelligence, applied those criteria to applications of AI, and to AI itself.

"The other day I came across a tech company that claimed it could take a video of a person talking and work out how stressed they are on each word they speak," she told delegates.

A bevy of examples followed, emphasising the need to be a lot more careful about the limits of what we create.

"You can’t just create artificial intelligence that works. You also have to take into account the human aspect. You have to create AI for humans."

That in itself is an issue, because AI algorithms don’t see the world as we do, she explained. And this has created either unnecessary panic or outlandish claims.

Speaking about outlandish applications of AI, she said her favourite came from Hollywood.

"Someone claimed that they could take a movie script, run it through their neural network, and they could pinpoint which words on that movie script should be changed to make it more profitable at the box office!"
