What is effective ethical governance?

In mid-August I attended an “Ethics and Automation” panel run by HMG’s Automation Taskforce, and hosted by Katie Rhodes, Senior Policy & Strategy Advisor; this is a rather delayed part 2 of my thoughts from the session.

The second speaker was Dr Brent Mittelstadt of the Oxford Internet Institute and Alan Turing Institute. His belief is that you can only have effective ethical governance when you can answer the following three questions:

  1. What is legally required?
  2. What is ethically desirable?
  3. What is technically feasible?

AI has the potential to draw inferences about protected characteristics of people’s private lives that could then be used for online advertising, for example. We know that is technically feasible. It’s definitely legally and ethically dubious.

As many will know, we have a ‘black box’ problem. AI is often designed in such a way that it cannot be explained, or simply is not. Where decisions have been made (e.g. not to offer a loan, to increase car insurance premiums, etc.) we have to be able to explain what has been done, and in terms that non-technical people can understand.

Brent talked about a 2018 paper in the Harvard Journal of Law and Technology (Wachter, Mittelstadt and Russell, “Counterfactual Explanations Without Opening the Black Box”), which suggests that counterfactual explanations can be useful. For example, if you are a consumer whose loan application has been rejected, the bank should tell you what would need to be different in order for you to get that loan, rather than trying to explain the model’s inner workings.
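To make that concrete, here is a minimal sketch of the idea. Everything in it is hypothetical (a toy model, made-up features and figures, none of it from the paper or the session): it trains a simple loan-approval classifier, then brute-force searches for the smallest change that would flip a rejection into an approval.

```python
# Hypothetical counterfactual-explanation sketch. The model, features
# and numbers are all made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [annual income (£k), existing debt (£k)] -> approved?
X = np.array([[20, 15], [30, 10], [45, 5], [60, 2], [25, 20], [50, 8]])
y = np.array([0, 0, 1, 1, 0, 1])
model = LogisticRegression().fit(X, y)

applicant = np.array([28.0, 12.0])
print("Decision:", model.predict([applicant])[0])  # expected: 0 (rejected)

# Brute-force search for the smallest income increase / debt reduction
# that flips the decision -- that difference *is* the explanation.
best = None
for d_income in np.arange(0, 30, 0.5):
    for d_debt in np.arange(0, 12.5, 0.5):
        candidate = applicant + np.array([d_income, -d_debt])
        if model.predict([candidate])[0] == 1:
            cost = d_income + d_debt  # simple "effort" measure
            if best is None or cost < best[0]:
                best = (cost, d_income, d_debt)

if best:
    _, di, dd = best
    print(f"Counterfactual: approved if income were £{di:.1f}k higher "
          f"and debt £{dd:.1f}k lower.")
```

The change that flips the decision (“you would have been approved if your income were £Xk higher and your debt £Yk lower”) is exactly the kind of explanation a non-technical applicant can act on, without anyone having to open the black box.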

Brent also introduced the need for ethical auditing. As he said:

“Principles alone cannot guarantee ethical AI”.

They are a good starting point, however. Katie took us through Google’s 7 principles, including ‘be socially beneficial’ and ‘be accountable to people’.

As you’d expect, IBM has a set of “Principles for Trust and Transparency” and a longer paper on “Everyday Ethics for Artificial Intelligence”. That paper discusses, and suggests actions in, five areas:

  • Accountability
  • Value Alignment
  • Explainability
  • Fairness
  • User Data Rights

Essentially, ethics is everyone’s responsibility, and we have to embed it in whatever we are creating, right from the very start through to the very end.

Then, moving on from principles alone, Brent shared with us that work published on the Social Science Research Network (SSRN) has been considering how to audit the way we implement, measure and govern AI.
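As a flavour of what the “measure” part of such an audit might look like, here is a tiny, purely illustrative check (not something presented at the session, and the groups and decision log are hypothetical): computing the gap in approval rates between two groups, one of the simplest fairness measures an auditor might report.

```python
# Hypothetical fairness check an ethical audit might run.
from collections import defaultdict

# (group, approved?) pairs from a made-up decision log
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print("Approval rates:", rates)  # A: 0.67, B: 0.33

# Demographic-parity gap: a large gap flags the system for human review.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.2f}")
```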

Brent added a couple of cautionary remarks to close: models need to be trained with local data, and when we are building a solution, do we really need to use AI within it? (That is, when we have an AI hammer, we have to be careful not to see everything as a nail!)

Let’s not forget why we are doing this though. AI has great potential to transform public services and help join up delivery. Bethan mentioned that AI can be put to good use tackling misinformation online. Brent suggested that we can use AI as a critical mirror, to hold our personal biases to account.

For part 1 of this session, see https://samoore.me/2020/08/18/ethics-in-automation-part-1/.