How should law and regulation cope with fast changing technologies and industries? How should they balance the risks that come with new ideas and the risks of crushing them? And how should they help to ensure that the benefits of new technologies are widely spread?
In this article, which follows on from his piece on the Fourth Industrial Revolution, Geoff Mulgan suggests that we are beginning to see a radical change in both the theory and practice of regulation with the emergence of a new field of ‘anticipatory regulation’.
From stable to iterative regulation
In recent decades the dominant ideas about regulation emphasised that it should be constant, simple and predictable. If it was, markets could do what they do best, finding ways to optimise the implementation of new technologies and ideas.
A host of regulatory theory, and regulatory institutions, emerged from the 1970s onwards to put the economic theory of regulation into practice. They promised to replace the capricious decisions of bureaucrats with more arm’s-length rule-makers and more rational rules. They promised more competition and a better deal for consumers. The idea was that regulators should not try to second-guess the direction of technological change. Instead they should set the rules of the game, and then stand back.
Traditional regulatory theory still arguably works fairly well for stable industries with relatively stable technologies
But it struggles to cope with more fluid, dynamic and uncertain fields, particularly ones where the boundaries between industries are constantly changing.
Governments around the world are now grappling with these questions. Nesta is directly involved in one of them through the Open Up programme, developed with the Competition and Markets Authority and the major banks, which aims to link the opening up of SMEs’ data held by banks, a drive to improve competition, and concerted moves to finance accelerated innovation, so that new products and services are available to make full use of open data.
This represents a new approach to regulation, in that the regulator is more directly involved in advancing innovation as a tool to promote competition.
On other fronts, there is intensifying debate about how to handle the big platform oligopolies that dominate the digital world – what rules to impose on their handling of data, whether to break them up, or whether to turn some of their functions into public utilities (covered well in this recent piece in the Economist).
Here I focus on a subset of these questions, the emergence of a family of new methods that can loosely be described as ‘anticipatory regulation’, recasting regulation to assist in the emergence of new technological tools. My guess is that these will soon become part of the mainstream armoury of governments.
Elements of this aren’t new. One of my first pieces of work as an academic in the late 1980s was on the regulation of high-speed broadband and zero-marginal-cost technologies, which simply didn’t fit much of the prevailing economic theory.
Smart governments have long tried to achieve a better alignment of technology development, market regulation and public policy, as Scandinavian governments successfully did with GSM mobile a generation ago (though many, like the UK, divide up these roles between different bits of government).
But in other respects the tools are new and are evolving to navigate economies in which:
- the pace of change is rapid, as Moore’s law continues to operate and is matched in other fields like genomics
- barriers to entry are often very low
- ability to operate across sectoral boundaries is high (with technologies like 3D printing, platforms)
- business models encourage big firms to operate across multiple sectors (as Google/Alphabet now does) or run whole ecosystems of products and services (like Apple or Amazon).
In the past, regulators assumed that they could ignore new developments until they reached a certain scale
Likewise, new firms didn’t engage with regulators until they hit a large enough scale. But speed undermines both sets of assumptions. Small firms can become big very fast, and that’s forcing attention to quick and lean ways of linking what may be a large pool of potential new approaches and innovators to the limited resources of regulators.
Finance has arguably led the way. Organisations like the UK’s Financial Conduct Authority use regulatory sandboxes that allow new entrants to test out their products, and the potential regulatory implications, in close dialogue with policymakers. This has been one of the factors that has allowed the alternative finance sector to grow – with crowdfunding, peer-to-peer lending and other tools, each of which posed challenges for regulators.
Over the last few years, Bitcoin and blockchain technologies have prompted regulators to attempt much more open dialogue with potential entrants, aware of the comparative advantage that being ahead of the curve could offer to London, New York or Dubai.
The United Arab Emirates (UAE) has attempted to spread this idea to other sectors through what it calls ‘government accelerators’ – for example, bringing in start-ups to work on reducing traffic congestion, road accidents or air pollution in close collaboration with government officials.
Other regulators have tried to act in advance of shifts in technology. In the US, for example, the National Highway Traffic Safety Administration developed policies on autonomous vehicles in 2013 – to preempt their widespread introduction – and worked with industry to better understand how driverless cars and driven cars would interact.
The Innovation Testbeds in Korea are another good example, using residential areas to speed up useful innovation in the Internet of Things (IoT), again with the aim of enabling close communication between innovative firms and policymakers.
A few points are striking from these examples:
1. They imply that regulation should often be iterative rather than definitive. The benefits from continuous adaptation may outweigh the benefits of stability and predictability (though of course there will be trade-offs).
As a result, regulators are trying to work as much as possible through guidelines, promoting self-regulation and shared understanding, and communicating directions of travel rather than specifics (for example, of how wearables might be regulated in the future).
2. Their primary goal is risk management – to test out new ideas in safe environments that minimise negative risks but also make the most of positive risks.
A related concept is risk-based regulation and inspection – using data and predictive tools to better map where problems are likely to arise, so as to economise on scarce regulatory resources.
3. Since the pressure for change comes partly from new technologies and partly from new business models – which can create new challenges for consumer protection, health and safety and so on – regulators need both technological knowledge and understanding of business model design – perhaps more than the theoretical economic knowledge which was prioritised in a previous era.
4. Many of the challenges come from changes that cut across sectoral boundaries.
The 1980s regulators were very sectoral – electricity, post, phones, gas – and assumed clear boundaries. The convergence of communications technologies has been threatening this for decades – and justified the creation of Ofcom, for example.
But this is now going much further with the Internet of Things, and the further integration of the data economy and material networks in transport, energy and buildings. All are coming up against very similar challenges around data, monitoring and maintenance of infrastructures and protecting against cyberattack.
5. The practical challenge for regulators is how to interact with stakeholders at scale, rather than just talking to a handful of big firms. Here there are interesting experiments that try to involve many more people, who can be engaged in shaping regulations around emerging industries – like vTaiwan run by the Taiwanese parliament (and described in the recent Nesta study on digital democracy).
The growing field of digital democracy is becoming much more sophisticated in understanding how deliberations can be inclusive, structured and staged to ensure better grasp of the issues, to prevent capture by powerful interests, and to ensure both better diagnosis and prescription.
6. Simplicity may not be the holy grail any more. For decades, it has been assumed that good regulation should be as simple as possible. In the 1990s and 2000s, various governments (the Netherlands, for example) set quantitative targets for cutting the volume of regulation, which was seen only as a burden on SMEs.
The UK Better Regulation Executive adopted a ‘one in, two out’ rule for similar reasons. But the anticipatory regulation logic may point in the opposite direction, towards more complexity – ideally with simple principles but the flexibility to devise sufficiently detailed regulations to enable new models to emerge.
7. Like any form of regulation, anticipatory regulations cannot avoid politics.
There is no pure rational calculus for deciding how a new industry should develop, or how a new technology should be used
Instead many factors intersect – economics, ethics, politics (as I set out in more detail in my recent paper on good and bad innovation). Problems are bound to arise if any of these are ignored and regulation is seen only through a technocratic lens.
Over the next year we’ll be looking at these emergent models of anticipatory regulation. How are these organised? What skills do they need? How do they strike the right balance between breadth and depth? How are they funded? How do they ensure the right input from stakeholders? What can other sectors learn from the methods being pioneered in finance? In particular, how could more experimental methods be used to find out what works best?
Drones are a particularly good example, on which we have been working for some time with cities, businesses and governments. They offer huge potential gains but also big challenges – how to handle routes, data, pricing models, environmental effects and so on.
Experimental testbeds may be needed to flush out unforeseen problems. Many different regulators may want a say – aviation authorities, municipalities, police – implying a need for new structures to orchestrate ‘joined-up’ regulation. These regulatory coordination arrangements are going to become increasingly common, probably with one agency given a lead role for a fixed period.
On the edge of this field we may also see the emergence of other kinds of new institution with regulatory powers, to handle cross-cutting questions in an equally iterative way.
I’ve written previously about the case for a Machine Intelligence Commission – and the debate is slowly limbering up on possible new institutions. Nesta is also working with partners across Europe on new data commons which could also become an important part of the landscape for transport and other infrastructures, and part of the regulatory picture.
We may need quite new kinds of institution to handle options which are increasingly being debated, such as mandatory data-sharing, or prohibitions on uses of certain classes of data, or, as in the work of Glen Weyl and others, rewards for the public who generate data in the first place.
For now, however, there are no professionals in anticipatory regulation; no handbooks; and not much theory.
There are some promising studies such as this recent one from Deloitte, and growing interest in regulatory innovation. I was recently told that over 24 countries are trying to set up FCA-style sandboxes, helped, for example, by R2A, the RegTech for Regulators Accelerator which is focused on innovations in financial regulation.
The field is roughly where behavioural economics stood in policy terms in the early 2000s (when I was involved in a first attempt to map out its implications), or open data in the mid-2000s. But there’s little doubt that this will become part of the mainstream. If I were an ambitious young economist, this might well be where I would devote my efforts.
This article was originally published by Nesta.