
How should we regulate AI?

02 March 2017

Image of a female AI being overlooked by androgynous AI beings

Image by Edgarodriguezmunoz. Licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

Last night (1 March), I attended a fascinating debate at UWE as part of the British Academy’s season on Robotics, AI and Society which posed the question: “Does AI pose a threat to society?”

Experts in the fields of behavioural computing, bionics, robot ethics, philosophy and political science shared their thoughts on the Black Mirror-esque depictions of a hostile takeover of the human race by super-intelligent robots. All of the panellists seemed to agree that while we cannot say the risk of robots causing physical harm is zero, in the short-to-medium term such risks are highly improbable. As one member of the audience noted, the imagery used to depict “AI beings” is not particularly helpful to the debate (as the above image perhaps demonstrates).

Instead, the panellists focussed on the more immediate challenges posed by machine learning: in particular, the displacement of humans by robots in jobs that are relatively unskilled or mechanical, and how best to distribute the benefits brought about by automation within society. The panel was divided on the idea of universal basic income – supported by Elon Musk and currently being trialled in Finland – as a means of mitigating the socio-economic impact of increased automation.

For me, the question of whether – and how – machine learning and AI (not the same thing, as this Wired article accurately explains) should be regulated is an interesting one. Sometimes being a tech lawyer feels like being a theoretical physicist – using the tools (laws) we have to evaluate, understand and ultimately address the inevitable consequences of new technologies.

In relation to driverless cars at least, the Vehicle Technology and Aviation Bill was laid before parliament last week. If passed, the legislation would make insurers liable for accidents caused by an automated vehicle when it is driving itself at the time. However, what about the regulation of machine learning technologies at a more abstract level? Is that even possible? Notions of privacy and what it meant to live a “private life” existed before privacy regulation, but within the domain of machine learning it seems to me that we don’t know what we don’t know – so how can you begin to regulate that?

One of the panellists, Alan Winfield, Professor of Robot Ethics at UWE, touched on this question of regulation. He was involved in the development of British Standard BS 8611 – the world’s first standard for the ethical design of robots – which covers everything from robot addiction and deception to the unintended harmful consequences of AI-equipped robots. His intimation was that self-regulation, through the adoption of common technical standards reflecting high-level moral principles (building on Asimov’s three laws), would be the key to developing public trust and confidence in AI. At the same time, he suggested that we need the equivalent of the Civil Aviation Authority and the Air Accidents Investigation Branch specifically for driverless cars, to ensure that any policy and regulation is evidence-based. A recently published academic paper suggests that we may even need a separate artificial intelligence watchdog.

The debate was thankfully not all negative. While the growth of machine learning and AI technologies is seemingly unstoppable, everyone appeared to agree that the risks are accompanied by real benefits – particularly in the fields of healthcare and advanced manufacturing. As for jobs, this fourth industrial revolution will spur a new economy, and with it new jobs that we cannot yet begin to imagine.

The contents of this article are intended for general information purposes only and shall not be deemed to be, or constitute legal advice. We cannot accept responsibility for any loss as a result of acts or omissions taken in respect of this article.
