AI has applications in cyber-security but needs an ethical basis, say Lords

News by Jay Jay

AI needs to be representative of the community it serves. It should use established concepts, such as open data, ethics advisory boards and data protection legislation, and develop new frameworks and mechanisms, such as data portability and data trusts.

A House of Lords Committee report today warns that even though the UK has what it takes to become a world leader in the development of artificial intelligence, such new technologies should not come at the price of data rights or privacy of individuals, families or communities.

In the wake of the Cambridge Analytica scandal, Internet users across the world have rightly questioned the sanctity of their personal data: how much of it is being used by companies, who it is shared with, and to what extent it is being used to create individual profiles that can be targeted via social media.

The same questions apply to artificial intelligence technologies, which companies across the world are adopting at an increasing rate. Cutting-edge commercial applications include recognising voices and understanding people's behavioural patterns: their likes and dislikes, their working hours, their driving routes, the devices they use and so on.

These technologies are then incorporated into 'smart devices' that behave as digital assistants - playing their owners' favourite songs, synchronising their emails, adjusting their schedules and so on. However, they rely on a well-known technique, machine learning, and such learning requires the collection of vast amounts of data, much of it personally identifiable and sensitive information belonging to individuals.
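To make that data appetite concrete, here is a minimal, purely illustrative Python sketch (not from the Lords report) of how a digital assistant might profile a user's listening habits. The event log and suggestion logic are hypothetical; the point is that personalisation only works in proportion to how much personal data has been collected.

from collections import Counter, defaultdict

# Each event is (hour_of_day, item) harvested from one user's history.
events = [
    (8, "news briefing"), (8, "news briefing"), (9, "jazz playlist"),
    (18, "rock playlist"), (18, "rock playlist"), (22, "ambient mix"),
]

# Build a per-hour frequency table: a crude behavioural profile.
profile = defaultdict(Counter)
for hour, item in events:
    profile[hour][item] += 1

def suggest(hour):
    # Return the most frequently played item for this hour, if any.
    if profile[hour]:
        return profile[hour].most_common(1)[0][0]
    return "no data yet"  # without collected data, no personalisation

print(suggest(8))   # "news briefing"
print(suggest(13))  # "no data yet"

Even this toy profile is built entirely from behavioural data; a production assistant would collect orders of magnitude more, which is precisely the privacy trade-off the committee highlights.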

Recognising the scale of data collection that the adoption of AI technologies will involve, the House of Lords Select Committee on Artificial Intelligence released a report today in which it urged that the development of AI technologies must be ethical, should be secure enough to be trusted by the public, and should not be used to diminish the data rights or privacy of individuals, families or communities.

"Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed needs to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency," the committee said.

"This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts."

It added that the government should work with the Competition and Markets Authority to promote competition, discourage the monopolisation of data by big technology companies, and incentivise the development of new approaches to the auditing of datasets used in AI.

In short, the government must examine whether existing laws are strong enough to protect the rights and interests of citizens if AI technologies malfunction or leak data to malicious entities, and must draw up a national policy framework for the use of AI technologies to ensure their successful deployment in the UK.

According to Colin Lobley, CEO of the Cyber Security Challenge UK, AI as a whole needs to be representative of the community it's designed to serve. "As AI is applied across various use cases, it has to be developed with the entirety of that community in mind – that means drawing on developers and architects from every background, gender, race, or religion to ensure their viewpoints and outlooks are represented," he said.

Yet another major challenge that the introduction of AI technologies will bring is the question of jobs. As the technology matures, AI will eventually start replacing humans, be it in the manufacturing, healthcare, customer services, defence or hospitality sectors. Recognising this challenge, the committee said that an impending jobs crisis created by the adoption of AI could be managed through significant Government investment in skills and training.

However, Lobley said that even though AI will definitely replace a lot of human jobs, it has the potential to open up opportunities to a much wider cross-section of society.

"A lot has been made of a skills gap in cyber-security and a lack of resources to process, analyse and protect the vast amounts of data being created and processed across virtually every industry. With AI and machine learning, a lot of tasks can be automated, allowing analysts and security professionals to focus on the tasks that require the human touch – assessing flaws, mitigating damage caused by breaches and the like.

"As cyber-attacks become more sophisticated, it's the critical thought brought by people which will be the key to combatting breaches. This will mean a shift from cyber-security being the reserve of the ‘techie' to encompass people with skills in areas as varied as behavioural and forensic psychology or even creative disciplines," he added.    
