Is AI ethics ahead of its time?

If you asked 602 technology innovators, business leaders, researchers and policy specialists how widely ethical principles would be embedded in AI by 2030, just how upbeat do you think they would be?  This month a survey from Pew Research Center and Elon University asked tech experts what progress they thought would be made on creating ethical artificial intelligence by 2030.  A massive 68% said they did not believe most AI systems would incorporate ethical principles focused on the public good within the next decade.  So expect a slew of “oops” moments (or worse) when the unintended consequences of AI hit the headlines.

I stand with the 32% who took the alternative view, as I believe most businesses and organisations using AI will pause to think about the impacts; but getting most AI systems to incorporate fully rounded ethical principles will undoubtedly need a stronger push from regulators and public opinion.  On our MSc AI & Data Ethics course at the University of Hull, ethics is taught as part of every module, and every AI conference I attend offers stimulating sessions debating what we mean by AI ethics as well as whose responsibility it is to define and incorporate ethical AI principles.  So there’s no shortage of ideas and opportunities to formulate approaches that ensure your intended use of AI has an ethical basis.

But I get why there’s so much scepticism.  I’ve recently been talking to key players at tech incubators about supporting the fledgling businesses in their hothouses to develop good governance and ethical approaches.  As I know from my own business experience, when you’re caught up in the energy (and stress) of establishing a new business, ethereal concepts such as ethics (and anything else that is not simultaneously “urgent” and “important” to developing the concept and keeping investors onside) get pushed to one side.  Talk to an entrepreneur one-to-one and you’ll have a stimulating conversation about the ethical implications of their new developments.  But ask them to set aside a day a week to ponder the principles underpinning their approach and the best you’ll get is polite obfuscation.

So can we rely on governments to set up the frameworks for us?  Not really.  We’ve seen in areas from financial services regulation to COVID guidelines that government policies lag behind reality on the ground.  There are interesting collaborations going on around the world, and institutions like the UK’s Centre for Data Ethics and Innovation are producing fascinating insights, such as their interim findings on how user interfaces can nudge behaviour on privacy settings.  But it would be folly either to put the brakes on using AI until policy is in place or to rely on policy-makers to hand down fully formed frameworks that will work in all circumstances.

Life – and business – is messy and does not develop in a neat, linear fashion.  Whilst there may be doubts that AI ethics will have an established, universally used framework by 2030, no-one doubts that AI will be all-pervasive long before then.  So my recommendation is that we adopt the pragmatic mantra “If I make it, I am responsible for it” and remind everyone using AI that being busy and focused on tech can never absolve them from taking responsibility for what they create and its impacts.