Category Archives: Automation

What is effective ethical governance?

In mid-August I attended an “Ethics and Automation” panel run by HMG’s Automation Taskforce, and hosted by Katie Rhodes, Senior Policy & Strategy Advisor; this is a rather delayed part 2 of my thoughts from the session.

The second speaker was Dr Brent Mittelstadt of the Oxford Internet Institute and Alan Turing Institute. His belief is that you can only have effective ethical governance when you can answer the following three questions:

  1. What is legally required?
  2. What is ethically desirable?
  3. What is technically feasible?

AI has the potential to derive inferences about protected characteristics of a person’s private life that could be used for online advertising, for example. We know that is technically feasible. It’s definitely legally and ethically dubious.

As many will know, we have a ‘black box’ problem. Often AI is designed in such a way that it cannot be or is not explained. Where decisions have been made (e.g. not to offer a loan, to increase car insurance, etc.) we have to be able to explain what has been done, and to people who are not technical too.

Brent talked about a paper in the Harvard Journal of Law and Technology (2018), which suggests that counterfactual explanations can be useful. For example, as a consumer whose loan application is rejected, the bank should instead tell you what would need to be different in order for you to get that loan.
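To make that concrete, here is a minimal sketch of a counterfactual explanation for a toy linear loan-scoring model. The model, weights, features and threshold are all illustrative assumptions of mine, not taken from the paper or any real bank:

```python
def score(applicant, weights):
    """Toy linear credit score: weighted sum of the applicant's features."""
    return sum(weights[k] * v for k, v in applicant.items())

def counterfactual(applicant, weights, threshold, feature):
    """Smallest increase to one feature that would flip a rejection into
    an approval (assumes the feature's weight is positive)."""
    gap = threshold - score(applicant, weights)
    if gap <= 0:
        return None  # already approved; no counterfactual needed
    return gap / weights[feature]

# Hypothetical applicant and model
weights = {"income": 0.5, "years_at_address": 2.0}
applicant = {"income": 40, "years_at_address": 1}  # score = 22, below 30

needed = counterfactual(applicant, weights, threshold=30, feature="income")
# The bank's explanation: "your application would have been approved
# had your income been higher by `needed` units" — no need to expose
# the model's internals at all.
```

The appeal of this style of explanation is exactly that non-technical point: it tells the consumer what to change, without opening the black box.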

Brent also introduced the need for ethical auditing. As he said:

“Principles alone cannot guarantee ethical AI”.

They are a good starting point, however. Katie took us through Google’s 7 principles, including ‘be socially beneficial’ and ‘be accountable to people’.

As you’d expect, IBM has a set of “Principles for Trust and Transparency” and a longer paper on “Everyday Ethics for Artificial Intelligence”. That paper discusses and provides suggested actions in 5 areas:

  • Accountability
  • Value Alignment
  • Explainability
  • Fairness
  • User Data Rights

Essentially, ethics is everyone’s responsibility, and we have to embed it right from the very start, through to the very end, of whatever we are creating.

Then, moving on from principles alone, Brent shared with us that the Social Science Research Network has been considering how to audit the way we implement, measure and govern AI.

Brent added a couple of cautionary remarks to close: models need to be trained with local data, and when we are building a solution, do we really need to use AI within it? (That is, when we have an AI hammer we have to be careful not to just see everything as nails!)

Let’s not forget why we are doing this though. AI has great potential to transform public services and help join up delivery. Bethan mentioned that AI can be put to good use, tackling misinformation online. Brent suggested that we can use AI as a critical mirror, to hold our personal biases to account.

For part 1 of this session, see


Filed under Automation, Government, Public Sector, Smarter Cities

Ethics and innovation go hand in hand

Last week I attended an “Ethics and Automation” panel run by HMG’s Automation Taskforce, and hosted by Katie Rhodes, Senior Policy & Strategy Advisor; this is part 1 of my thoughts from the session.

The first panellist was Bethan Charnley, Head of Strategic Projects at the Centre for Data Ethics and Innovation (CDEI). The role of this organisation is to advise Government on how to maximise the benefits of data.

Bethan raised that often innovation and ethics are posed as contentious, but she sees ethics as an enabler for innovation. (For those of us in tech circles, particularly where we have association with BCS, the Chartered Institute for IT, we are also familiar with people thinking that professionalism can stifle innovation. But look at some of the incredible, stunning, unique buildings that we see erected, such as the one below. Does professionalism in Architecture stifle innovation there? Anyway… ).

No, she clearly stated – and I wholeheartedly agree – that “ethics and innovation go hand in hand”.

I would argue that by considering ethics alongside innovation we are far less likely to have the unintended consequences we have seen in examples of AI that has been trialled, and not just trialled but put into production. By considering the ethics we are not prevented from innovating, we just do it better. Innovation is more relevant, more inclusive, more valuable to society.

Furthermore, Bethan challenged us to realise that we have to consider not just the ethics of doing something but the ethics of NOT doing something. I’ll take that one step further: is it ethical if a Government Department does not provide a service to someone who doesn’t know they are entitled to it, when said Department may already have the data that shows they are? (Just a hypothetical question of course.)

Naturally, we can’t have a conversation about ethics in the field of data science without talking about bias in algorithmic decision making. AI could be a way to remove such bias. But if we’re not careful it’s a way to bake that bias in: training with biased data, building bias into algorithms, testing with biased data, and so on. We need to make sure we get insights into every stage of the AI lifecycle.
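One common, simple check for baked-in bias is the disparate-impact ratio: comparing approval rates between two groups. This sketch is a generic illustration of the idea (the group data is invented, and this is not any particular product’s API):

```python
def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's approval rate to group B's.
    Values well below 1.0 suggest the model favours group B."""
    return approval_rate(group_a) / approval_rate(group_b)

# 1 = approved, 0 = rejected (illustrative data only)
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact(group_a, group_b)
```

Running a check like this at training time, at testing time, and continuously in production is what “insight into every stage of the AI lifecycle” means in practice.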

That’s one of the many reasons why IBM has developed Watson OpenScale. It can trace and explain AI decisions across workflows, and it allows you to intelligently detect and correct bias to improve outcomes.

A good, fun example of this is how we applied AI fairly to pick highlights from Wimbledon. If you think about it, the main courts have the biggest audiences, which make the loudest roars during rallies and wins. But there may still be a fabulous shot or a unique win on one of the higher-numbered courts. Just as in life, where those who shout the loudest are not always the most successful, at Wimbledon you may still have an amazing shot that draws only a ripple of applause. We wanted to make sure all successes were considered.

I suggested that by considering the ethics we innovate better. By applying fairness to this AI at Wimbledon the result was “a higher-quality selection of sports highlights—and more of them.”

Read that Wimbledon story for yourself:

Learn about how KPMG stewards responsible AI with Watson OpenScale:


Filed under Automation, Government, Public Sector

“Blizzard of Demand and a Blizzard of Data”

RPA is dead, long live RPA!

With so much talk about intelligent automation, digital business automation, integrated automation platforms, and other such terms, you’d think that robotic process automation – RPA – doesn’t apply anymore.

But not so. Whilst I believe much automation will indeed come from machine learning, AI – and so on – applied to the work that gets done, organisations are still reaping the benefits of RPA. I recently attended an event run by the Government Automation Taskforce, and whilst they too are contemplating the value of intelligent automation and are in the early stages of adopting it, many of the success stories there – such as this one from HMRC – show RPA has more potential to bring value across the breadth of Her Majesty’s Government.

The title of this blog is a quote from Chief Constable Andy Marsh of Avon and Somerset Police. They have a grand vision of being an outstanding police force, but with “the blizzard of demand and blizzard of data” – 10 million new pieces of data coming into the force every day – they knew they needed to do more in order to turn this into smart decisions. With so many data flows and processes, there had to be potential for automation.

They began this process of applying RPA in 2019, after running a Proof of Concept with us at IBM. As Nick Lilley, Director of IT at Avon and Somerset Police, said, this was about “extending and augmenting” the police force, freeing up capacity to work on more activity where humans can truly add value.

Of course, the key to getting the best value from RPA is not to automate bad, poor or unnecessary processes. This is an opportunity to apply ‘Lean’* or ‘Lean Six Sigma’ to truly understand processes, improve on them, and collect relevant metrics to support continuous improvement.

One of the processes they decided to tackle was uniform ordering. With a backlog of 700 orders that would have taken a human worker 2 months to process, the digital worker they designed cleared that backlog in just 2 weeks.
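As a back-of-the-envelope check on those figures (assuming 2 months is roughly 8 working weeks, which is my approximation, not a figure from the force):

```python
backlog = 700
human_weeks = 8  # ~2 months, assumed
bot_weeks = 2

human_rate = backlog / human_weeks  # orders cleared per week by a human
bot_rate = backlog / bot_weeks      # orders cleared per week by the bot

speedup = bot_rate / human_rate     # roughly a 4x throughput improvement
```

That factor is before counting the human hours freed up for work where people truly add value.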

The public wants officers on the street and RPA is helping Avon and Somerset achieve exactly that. This video tells you all about it.

And this is not the only example of recent RPA success. When I attended #ThinkGov2020, I learned about what has been done at the Veterans Benefits Administration from Dr Paul Lawrence, Undersecretary for Benefits. With regards to their intake, it took a long time for claims to move from fax/email to an examiner’s hands, and they desperately needed intelligent workflow.

By applying RPA they were able to turn a 10-day process into 1 afternoon’s work. Furthermore, the folks doing that manual work had great experience of and insight into the business, and they were reskilled into higher-paid jobs.

The VBA needed to be agile to implement new benefits, and RPA has been an enabler for this. The organisation did have to deal with a few myths, such as the belief that a wet signature was necessary for approvals, when in fact it turned out it wasn’t.

As Dr Lawrence said, these days we can get a pizza and see it tracked by the hour – why can’t we do the same with benefit applications?

(As always, if you’d like to know more about how automation can help the public sector deliver service more effectively, or even to discuss what we mean by RPA or intelligent automation then get in touch.)

*I searched on ‘lean’ to find an appropriate link to add to this blog. Turns out every day is a school day: according to Wikipedia, “Lean, also known as purple drank and several other names, is a recreational drug cocktail, prepared by combining prescription-grade cough syrup with a soft drink and hard candy.” I definitely did not mean that!


Filed under Automation, Government, Public Sector