Last week I attended an “Ethics and Automation” panel run by HMG’s Automation Taskforce and hosted by Katie Rhodes, Senior Policy & Strategy Advisor. This is part 1 of my thoughts from the session.
The first panellist was Bethan Charnley, Head of Strategic Projects at the Centre for Data Ethics and Innovation (CDEI). The role of this organisation is to advise Government on how to maximise the benefits of data.
Bethan observed that innovation and ethics are often framed as being in tension, but she sees ethics as an enabler of innovation. (Those of us in tech circles, particularly those associated with BCS, the Chartered Institute for IT, are also familiar with people thinking that professionalism can stifle innovation. But look at some of the incredible, stunning, unique buildings we see erected, such as the one below. Does professionalism in architecture stifle innovation there? Anyway…)
No, she clearly stated – and I wholeheartedly agree – that “ethics and innovation go hand in hand”.
I would argue that by considering ethics alongside innovation we are far less likely to see the unintended consequences that have emerged in AI systems that have been trialled, and in some cases put into production. Considering the ethics does not prevent us from innovating; we just do it better. The innovation is more relevant, more inclusive, and more valuable to society.
Furthermore, Bethan challenged us to realise that we have to consider not just the ethics of doing something but the ethics of NOT doing something. I’ll take that one step further: is it ethical if a Government Department does not provide a service to someone who doesn’t know they are entitled to it, when said Department may already have the data that shows they are? (Just a hypothetical question of course.)
Naturally, we can’t have a conversation about ethics in the field of data science without talking about bias in algorithmic decision making. AI could be a way to remove such bias. But if we’re not careful it’s a way to bake that bias in: training with biased data, building bias into algorithms, testing with biased data, and so on. We need to make sure we get insights into every stage of the AI lifecycle.
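To make that concrete, here is a minimal, purely illustrative sketch (not Watson OpenScale, and not any real system) of one simple fairness check you might run at the testing stage: the disparate impact ratio, which compares favourable-outcome rates between groups. The data, group labels, and function name are all hypothetical.

```python
# Illustrative sketch: one simple fairness metric, the disparate impact
# ratio, computed on hypothetical decisions split by a protected attribute.

def disparate_impact(decisions, groups, favourable=1, privileged="A"):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; the commonly cited
    "four-fifths rule" flags ratios below 0.8 for investigation.
    """
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(1 for d in outcomes if d == favourable) / len(outcomes)

    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Hypothetical decisions: group "A" approved 4 of 5, group "B" only 2 of 5.
decisions = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(disparate_impact(decisions, groups), 2))  # 0.5 — well below 0.8
```

A check like this only catches bias at one stage; the point of tracing the whole lifecycle is that biased training data or a biased algorithm can slip through even when a single downstream test looks acceptable.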
That’s one of the many reasons why IBM has developed Watson OpenScale. It can trace and explain AI decisions across workflows, and it allows you to intelligently detect and correct bias to improve outcomes.
A good, fun example of this is how we applied AI fairly to pick highlights from Wimbledon. If you think about it, the main courts have the biggest audiences and so produce the loudest roars during rallies and wins. But there may still be a fabulous shot or a unique win on one of the higher-numbered courts. Just as in life, where those who shout the loudest are not always the most successful, at Wimbledon you may have an amazing shot greeted by only a ripple of applause. We wanted to make sure all successes were considered.
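The idea behind that fairness adjustment can be sketched in a few lines. This is my own hypothetical illustration, not IBM's actual Wimbledon pipeline: if you normalise crowd-noise scores within each court before ranking clips, a standout moment on a quiet outside court can compete with the constant roar of Centre Court. The courts, scores, and function name are invented for the example.

```python
# Hypothetical sketch of the fairness idea behind highlight selection:
# rank clips by crowd-noise scores normalised per court, so quieter
# outside courts can compete with the roar of the main show courts.
from statistics import mean, stdev

def normalised_scores(clips):
    """clips: list of (court, raw_score) pairs.

    Returns (court, z_score) pairs, z-scored within each court so that
    scores become comparable across courts with different baseline noise.
    """
    by_court = {}
    for court, score in clips:
        by_court.setdefault(court, []).append(score)
    stats = {c: (mean(s), stdev(s)) for c, s in by_court.items()}
    return [(court, (score - stats[court][0]) / stats[court][1])
            for court, score in clips]

# Centre Court is always loud; Court 14 is quiet but has one standout rally.
clips = [("Centre", 90), ("Centre", 95), ("Centre", 92),
         ("14", 40), ("14", 70), ("14", 42)]
top = max(normalised_scores(clips), key=lambda pair: pair[1])
print(top[0])  # "14" — the quiet court's standout rally wins
```

On raw scores, every Centre Court clip would outrank the Court 14 rally; after per-court normalisation, the rally that was exceptional *for its setting* comes out on top.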
I suggested that by considering the ethics we innovate better. By applying fairness to this AI at Wimbledon the result was “a higher-quality selection of sports highlights—and more of them.”
Read that Wimbledon story for yourself: https://www.ibmbigdatahub.com/blog/ai-picks-highlights-wimbledon-fairly-fast
Learn about how KPMG stewards responsible AI with Watson OpenScale: https://mediacenter.ibm.com/media/1_ulgwi98c