Toolkit


Artificial intelligence (AI) describes computer systems that act independently, in ways that resemble human reasoning.

Most, if not all, current AI systems employ machine learning. Fundamentally, machine learning algorithms are computerised techniques for recognising patterns in data. Unlike basic computer algorithms, machine learning algorithms do not need a person to directly code the decisions they will follow. Instead, these systems are able to ‘learn’ from examples, data and experience to develop their own decision rules.
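The contrast between hand-coded rules and rules learned from examples can be sketched in a few lines of code. This is a minimal illustration, not a real machine learning system: the task, numbers and function names below are invented for the example.

```python
# A minimal sketch of 'learning from examples': instead of a person
# hard-coding the decision rule, the program derives it from labelled data.

def learn_threshold(examples):
    """Learn a cut-off separating two labelled classes of numbers.

    examples: list of (value, label) pairs, where label is 'low' or 'high'.
    Returns the midpoint between the largest 'low' and smallest 'high' value.
    """
    lows = [value for value, label in examples if label == "low"]
    highs = [value for value, label in examples if label == "high"]
    return (max(lows) + min(highs)) / 2

# Training data: note the rule itself is never written down by a person.
training = [(120, "low"), (250, "low"), (310, "low"),
            (680, "high"), (720, "high"), (900, "high")]

threshold = learn_threshold(training)

def classify(value):
    # Apply the learned rule to new, unseen values.
    return "high" if value >= threshold else "low"

print(threshold)        # 495.0 for this training data
print(classify(400))    # low
print(classify(600))    # high
```

Real systems learn far richer rules from far more data, but the principle is the same: change the training examples and the learned behaviour changes with them, without anyone rewriting the program.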

Once the stuff of futuristic films, artificial intelligence is a real-life proposition and an area of importance to digital policymaking.

Why AI is important

1. AI in business.

AI is used in business for routine tasks in manufacturing, data processing, and even calculating credit scores. Long term, this will result in fundamental shifts in business, affecting productivity, employment and living standards. AI has the potential to change industries as diverse as translation, customer service, medical diagnosis, and manufacturing.

2. AI in government.

AI is being employed in government to save time spent on monitoring, documenting and other paperwork, freeing people for the key tasks which uniquely require human reasoning. For example, AI already detects benefit fraud.

3. AI is a key future industry.

As a House of Lords Committee heard, Europe is in a good position to become a leader in AI research and applications, but it is competing with players in the US and China. Commentators disagree on whether foreign approaches are a threat or an example to follow, but it is clear that action is necessary to shape the future of this important technology.

Key debates

1. Capabilities.

The impact of AI is hard to estimate, partly because its applications are so far limited to specific domains, such as playing Go, maintaining aircraft, patient care, or analysing medical images. But the technology is moving fast and it is difficult to predict where it will lead. Overviews of applications within government, by Nesta and Deloitte, hint at the potential for saving civil servants time and effort, allowing them to focus on creative problem solving.

2. Regulation.

How AI should be regulated is a contested area, not least because the legal definitions are still unclear. Geoff Mulgan, CEO of Nesta, has called for a Machine Intelligence Commission to build wider public trust, understanding and informed decision-making, as opposed to paranoia.

Because AI research is carried out by many private parties, individual companies or labs might prioritise getting their systems to work over ethical concerns. In the absence of regulation, AI might be deployed first, with crucial questions asked later, much as sharing-economy applications were, but with potentially more severe consequences.

Even if policymakers decide against the legal limits on AI that some experts demand, this should still be a conscious decision.

3. Bias.

Closely linked to regulation is the risk that AI systems absorb unconscious biases from their creators or from the data they are trained on. See Readie’s forthcoming explainer on algorithmic accountability and bias.

4. Superintelligence worries.

A debate in expert circles is whether, or when, AI will surpass human intelligence and what the consequences might be, with positions ranging from confidence in humanity’s ability to control the technology to warnings that AI is uniquely predisposed to escape such control.