18 months later and nothing’s changed

I’ve been neglectful of my blog whilst I focused much of my time on a subset of my clients to help drive change more quickly.

And yet it feels like nothing has changed.

What have I spent most of the last few weeks talking about? Ethical AI, trustworthy AI, trust in the technology industry. Exactly the same topics I last blogged about back in 2020.

Don’t get me wrong, I am passionate about this, so it’s not exactly a hardship to talk about. We absolutely must be responsible in how we implement digital technology-oriented solutions, and not just AI ones at that. I had the privilege of discussing the topic with esteemed colleagues from a variety of backgrounds at a House of Lords-based event at the start of the month, and then just this past week I hosted a discussion with the immediate past President and the current President of BCS, The Chartered Institute for IT – John Higgins CBE and Mayank Prakash respectively – where much of our time was spent on how digital technology is being used, its impact on society, and what the Institute is doing – and must do – about it. I’m going to quote one of my mentors here:

“Building more trustworthy and ethical AI systems is not only a research question, it’s a business, legal and societal imperative.”

Andrea Martin, IBM Distinguished Engineer & Leader IBM Watson Center Munich

It’s handy that I work for IBM, because “we” stipulate that at least five things matter when it comes to trustworthy AI, which I think are a useful way to explain at least some of what’s needed.

  1. Transparency
  2. Explainability
  3. Fairness
  4. Robustness
  5. Privacy

Diving into those a little:

Transparency – Transparent AI systems share information on what data is collected, how it will be used and stored, and who has access to it. They make their purposes clear to users. The UK Government’s Centre for Data Ethics and Innovation, in conjunction with the Central Digital and Data Office, is leading (globally, I think) on this within the public sector, having launched an algorithmic transparency standard in November 2021. The OECD Principles on AI state that there should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them, which expands into explainability too…

Explainability – People have a right to know if they are interacting with a system using AI, and how and why that AI system has come to a certain decision or recommendation. And we need to explain it in non-technical terms that can be understood by people with no expertise in the subject.

Fairness – Fairness is about ensuring that we treat individuals, or groups of individuals, equitably. In theory, AI could actually assist humans in making fairer choices, countering human biases, and promoting inclusivity. The risk we’re all talking about – still – is that it could go the other way as we build in our own unconscious biases through training data, design choices, and so on.

Robustness – Here we need to ensure that our AI solutions are robust enough to withstand intentional and unintentional interference such as evasion, poisoning, extraction, and inference attacks.

Privacy – If we are to honour the privacy of individuals we must fully disclose what data is collected, how it will be used and stored, and who has access to it. I remember the period of my career when my focus was on how social technologies could make such a positive difference (they still can, despite some of the negative experiences), as the data meant we had greater potential to personalise. I still stand by that, but it doesn’t absolve us of our obligation as solution providers to ensure privacy is paramount.

This feels a bit like an ad now, but ah well (other technology companies are available). I’m proud to work for a company that can help with ALL of the above, that has led the way in research to ensure we can be more ethical in our use of AI. For every point above, IBM Research has developed a tool to help.

So, the way I see it, if technology solution providers – whether big corporates like IBM, public sector organisations such as local authorities, or even third-sector organisations – want to be ethical with their use of AI, then the solutions are already there to support it. What’s your excuse?

Those tools to help? They are:

Transparency – the AI FactSheets 360 website, which presents a first-of-its-kind methodology for assembling documentation – or “fact sheets” – about an AI model’s important features, such as its purpose, performance, datasets, characteristics, and more.
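
To give a feel for the idea, here’s a minimal sketch of capturing fact-sheet-style documentation alongside a model. The field names are my own illustrative shorthand, not the official AI FactSheets 360 schema:

```python
# Illustrative only: a minimal "fact sheet" captured as structured data.
# Field names are shorthand, not the official AI FactSheets 360 schema.
import json

factsheet = {
    "model_name": "loan-approval-classifier",  # hypothetical model
    "purpose": "Rank loan applications for human review",
    "intended_users": "Credit officers",
    "training_data": "Internal applications 2018-2021, anonymised",
    "performance": {"metric": "AUC", "value": 0.87, "test_set": "2021 holdout"},
    "known_limitations": "Not validated for self-employed applicants",
    "contact": "ml-governance@example.com",  # hypothetical address
}

# Persist it alongside the model so the documentation travels with it.
with open("factsheet.json", "w") as f:
    json.dump(factsheet, f, indent=2)
```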

Explainability – AI Explainability 360, an open source toolkit that can help support the interpretability and explainability of machine learning models.
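
AI Explainability 360 supports a whole range of algorithms, so rather than pick one, here’s a generic illustration of the underlying idea (not the toolkit’s own API): for a simple linear model, per-feature contributions to a single decision can be read straight off the coefficients. All names and data here are illustrative:

```python
# Generic illustration of per-decision explanations (not the AIX360 API):
# for a linear model, each feature's contribution to one decision is
# coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "age", "existing_debt"]  # illustrative features
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # synthetic labels
model = LogisticRegression().fit(X, y)

# Rank the features by how strongly they pushed this one decision.
applicant = X[0]
contributions = model.coef_[0] * applicant
ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
for name, c in ranked:
    print(f"{name}: {c:+.2f}")
```

Output like that is only a starting point, of course – it still has to be turned into the plain-language explanation described above.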

Fairness – AI Fairness 360, which can help examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
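
To make that concrete, here’s a minimal sketch of the sort of check AI Fairness 360 (pip install aif360) enables; the dataset, column names, and group definitions are purely illustrative:

```python
# A minimal bias check with AI Fairness 360; the toy data, column names
# and group definitions below are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute, 'approved' the outcome.
df = pd.DataFrame({
    "sex":      [0, 0, 0, 1, 1, 1, 1, 0],
    "income":   [30, 45, 50, 60, 20, 55, 40, 35],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact is the ratio of favourable outcome rates between the
# groups; ~1.0 is equitable, and the common "80% rule" flags values < 0.8.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```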

Robustness – The Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
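
Here’s a minimal sketch of ART (pip install adversarial-robustness-toolbox) crafting an evasion attack against an illustrative scikit-learn model, to measure how much accuracy drops on adversarial inputs:

```python
# A minimal evasion-attack check with ART; the model and data are
# toy illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Wrap it for ART and generate adversarial examples with the
# Fast Gradient Method.
classifier = SklearnClassifier(model=model, clip_values=(-3.0, 3.0))
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print(f"Accuracy on clean data:       {model.score(X, y):.2f}")
print(f"Accuracy on adversarial data: {model.score(X_adv, y):.2f}")
```

A big gap between those two numbers is the signal that the model needs hardening – and ART includes defences to help with that too.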

Privacy – The AI Privacy 360 Toolbox includes several tools to support the assessment of privacy risks of AI-based solutions, and to help them adhere to any relevant privacy requirements.   
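
On the privacy side, here’s a minimal sketch using diffprivlib, IBM’s open source differential privacy library, which sits alongside the AI Privacy 360 tooling (pip install diffprivlib); the data and the epsilon value are illustrative:

```python
# Training a model under differential privacy with diffprivlib; the data
# and the epsilon value are illustrative.
import numpy as np
from diffprivlib.models import LogisticRegression

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# epsilon bounds the privacy loss; data_norm bounds each row's L2 norm
# (supplying it up front avoids leaking it from the data itself).
model = LogisticRegression(epsilon=1.0, data_norm=float(np.sqrt(3))).fit(X, y)
print("Accuracy under differential privacy:", model.score(X, y))
```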

AI Explainability 360, AI Fairness 360 and AI Privacy 360 have all been donated by IBM Research to the Linux Foundation AI and Data.

The problem is wider: we need to do more on skills and on the digital divide. We need to ensure our teams are diverse, that they better represent the society our solutions serve. There may even be a role for precision regulation. But there is absolutely NO excuse for not evaluating and improving the solutions on which we are already working and which we are already providing.


One thought on “18 months later and nothing’s changed”

  1. Thank you for your thoughts, Sharon. Very helpful.

    As someone who coaches for interviews, I’m very aware of the ever-increasing use of AI for video interview decision-making, often without any human involvement. It’s well known that it’s possible for these systems to have inbuilt biases, sometimes driven by the biases of those who define the requirements, but also because the differing behaviours of diverse candidates are not understood by or programmed into the AI.

    Just saying that AI interviewing is less biased than human interviewers should no longer be used as an excuse for the poor specification of systems which determine people’s futures. If some of these AI interview companies would be more transparent, and honestly explain the pros and cons of their approaches, they’d engender a lot more trust. So, I hope you can raise standards in the AI world and shame some companies into closer compliance with what you’re trying to achieve.
