The flurry of AI regulatory activity over the past two years makes it hard for busy leaders to keep up. You want to be sure that the ways you use data, and the AI models you're building, won't fall foul of new laws or standards. The shifting sands don't make this easy.
The EU – leading the pack
Without doubt the European Union is leading the way with its suite of new AI regulations. It's attempting to categorise AI use cases by the level of risk they present, outlawing the most damaging and applying progressively lighter regulation to uses deemed lower risk.
Building on the EU General Data Protection Regulation makes sense, and the additional regulations in the pipeline, which address liability and place further restrictions on large platforms, share common values. The EU's prescriptive, detailed approach could come unstuck, however, when new algorithms, use cases and indeed new platforms appear in the months and years to come that don't neatly fit the rules. Will we see perpetual legal challenges? Will the requirement to submit code for inspection scare developers and investors away?
Putting the new rules into practice looks set to be more challenging than creating the legislation in the first place. Read more analysis from the Future of Life Institute here.
The US – a piecemeal approach?
New York City has provided a live example of the problems of translating legislation into reality. The city has been forced to delay enforcement of new rules designed to prevent bias in AI tools used in employment. Under the new law, employers in the City will not be able to use AI (known as automated employment decision tools) to screen candidates for hiring or promotion unless they have first audited the tool for bias. A flood of challenges has pushed the implementation date back to April to give more time for the detailed rules to be finalised.
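What might such an audit actually measure? At its core, a bias audit of this kind compares the rates at which a tool selects candidates from different demographic groups. The short Python sketch below illustrates the idea with an impact-ratio calculation; the group names and figures are invented for illustration, and the 0.8 threshold comes from the informal "four-fifths rule" in longstanding US employment-selection guidance, not from the NYC law itself.

```python
# A minimal sketch of the arithmetic behind a bias audit: comparing
# selection rates across demographic groups. Group labels, counts and
# the 0.8 threshold are illustrative, not drawn from the legislation.

from typing import Dict


def selection_rates(outcomes: Dict[str, Dict[str, int]]) -> Dict[str, float]:
    """Selection rate per group: candidates selected / candidates assessed."""
    return {g: c["selected"] / c["assessed"] for g, c in outcomes.items()}


def impact_ratios(rates: Dict[str, float]) -> Dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    A ratio well below 1.0 flags potential adverse impact; the informal
    'four-fifths rule' treats anything under 0.8 as worth reviewing.
    """
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}


# Hypothetical screening outcomes from an automated tool
outcomes = {
    "group_a": {"assessed": 400, "selected": 120},
    "group_b": {"assessed": 350, "selected": 70},
}

rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, "
          f"impact ratio {ratio:.2f} ({flag})")
```

A real audit would follow whatever the detailed rules ultimately specify; the point of the sketch is simply that the underlying calculation is easy to reason about, even if the compliance process around it is not.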
State-by-state regulation would be very difficult for businesses to navigate, so a holistic federal approach would be preferable to a piecemeal one. In January the National Institute of Standards and Technology released its AI Risk Management Framework, providing a first look at its voluntary standards for addressing risks in the design and use of AI products, services and systems. It's worth reading to see how NIST approaches risk, particularly the four functions (Govern, Map, Measure and Manage). Together with the Blueprint for an AI Bill of Rights published by the White House last year, it suggests that federal regulation is coming, but it is likely to lag behind more proactive jurisdictions such as California and New York City.
The UK – a light-touch approach?
In the UK we're waiting for the Government's White Paper to reveal its plans for regulation, building on the 2022 consultation, which suggested a light-touch approach. We'll update clients as soon as the White Paper is published.
In the meantime, ask us, or a specialist lawyer, for advice if you want to know how the regulations may affect your existing – and planned – uses of AI.