Around the world, the regulation of Artificial Intelligence (AI) is starting to take shape. It is not keeping pace with technological change, but legislation is making some progress. In the week that the EU AI Act comes into force, we're highlighting the progress of regulation in different jurisdictions.
No common definition of AI
The concept of machines that can learn has been around since the 1950s, but there is still no globally agreed definition of what AI is. Some countries, like Brazil, have broad definitions encompassing any computational system, whilst others, like China, focus more narrowly on specific techniques such as machine learning and foundation models.
Purpose of regulation
Across different jurisdictions, the goal of regulation varies widely, from fostering innovation and growth in the UK and US, to preventing harm and upholding human rights in the new EU AI Act. China has acknowledged the socio-technical nature of AI systems by banning AI models that do not support the state's social goals.
Not surprisingly, these wide differences in approach mean there is no harmony in how risks are perceived and what gets regulated. The EU and South Korea, for instance, focus their firepower on high-risk AI applications such as facial recognition, whilst the UK has yet to declare its hand, though the new Labour Government is thought to be planning to regulate only foundation models.
Monitoring compliance
It's fair to say that no jurisdiction has yet cracked the problem of how to monitor compliance with new rules. At the two global AI Safety Summits (Bletchley Park, UK, in November 2023 and Seoul, South Korea, in May 2024), big tech companies made voluntary commitments to identify the possible risks of AI, set thresholds at which those risks would be deemed too high, and be transparent about them. Whilst the voluntary approach was lauded as a step forward, the pledges came from just 16 big tech companies, and history teaches us that voluntary commitments alone will not be sufficient to protect people in the future.
What we need are adaptive regulations that can flex and be updated to keep pace with technological advancements, together with proper monitoring and enforcement mechanisms to ensure compliance and deter bad actors. We foresee a future in which clear compliance standards are set, backed by independent audits, public reporting and penalties for non-compliance.
Whilst some are pressing for top-down harmonisation of regulations and standards (and that may come in the future), we believe that approach will be too slow to prevent harm to people and society. Instead, we support different jurisdictions in pursuing their own AI rules now, in the hope that they will not lag too far behind technical reality to be effective.