
We talk to Dr Rachel Dugdale of Complexical, a small technology firm founded in 2022 that provides bespoke, ethical consultancy on technical topics such as artificial intelligence (AI) and data strategy, with a commitment to environmental sustainability, social equity and sustainable business practices. Rachel shares her insights on what charity trustees need to know about AI and governance.

Everyone is talking about artificial intelligence (AI) at the moment. From a governance perspective, what do you see as the key risks and opportunities?

Well, one of the risks is that everyone is talking about it! That can easily be a distraction from more important issues. Consider the current hype around large language models (LLMs) that power chatbots like ChatGPT – while there’s a lot of exciting work in this space from a technical perspective, it’s a very new technology with some major problems, such as deeply embedded biases (reflecting broader social bias) and inaccuracy. So, I would say one of the major risks is a kind of organisational FOMO (fear of missing out), where people get drawn in by the hype and invest in a tool they don’t need or that isn’t fit for purpose.

Bias, and the potential for automating different outcomes for different communities, is another major risk for any organisation looking to incorporate AI into its processes. Algorithmic bias isn’t only a chatbot problem. Charities are often dealing with vulnerable or underserved populations and so need to take this especially seriously to avoid replicating inequalities.

The opportunities, on the other hand, range from increased efficiency in back office services, to better financial and impact modelling, to accelerating research, to improvements in the delivery of frontline services. To be clear, I don’t think about this in terms of AI ‘taking our jobs’ but more about the ways we can use technology to help people be more efficient, to automate repetitive tasks or to generate new insights. Charities are always overstretched and trying to get the most out of limited resources, and, used wisely, AI has the potential to bring huge benefits.

How can trustees take their first steps with learning about AI?

For an overview, I’d start with a general book like Hello World by Hannah Fry, which is very readable but gives great insights into how algorithms work and how they’re already being used in the real world. She gives concrete use cases from a whole range of sectors. The book predates LLMs, but that’s deliberate: I think focusing on the underlying algorithms is a better starting point than reading about today’s trends.

There are also some excellent newsletters that summarise breaking news stories. Some are quite technical but I’d recommend AI Ethics Weekly as one that’s focused on impact (how algorithmic decision making affects people and society) rather than technical details.

And, of course, if you want to explore chatbots for yourself, you can get a free account on ChatGPT. Just make sure you don’t put any sensitive personal or organisational information into it, as there have already been some data leaks.

What effect do you think AI could have on the sector?

For some charities, I can see future value in chatbot-based triage and helpline services. If we could save frontline staff from having to stay up all night to take calls, and only wake a human for those cases that need urgent help and can’t wait until morning, we could improve staff quality of life, since shift work is a proven health risk factor. I’d say the technology isn’t quite there yet, as we’d need to know the tools were reliable enough, but it could be a great opportunity. Chatbot interfaces to a knowledge base, such as Citizens Advice information, are another area where recent advances in language technology could be really valuable for sifting through large amounts of information, as long as they’re applied with appropriate safeguards.

I’d love for more people to understand that although chatbots are dominating the news at the moment, they’re not the only applications of AI. Medical research charities can benefit from drug discovery models that predict potential molecules and their effects. Environmental charities already use the predictions from climate models to draw attention to future worst-case scenarios and could extend this by developing their own models to predict the impact of various potential interventions. The term ‘AI’ covers a collection of technologies and applications and every problem has a different ‘best’ solution.

I also think we have to beware of companies selling unproven technologies – this has the potential to waste a lot of money. Many trustees don’t have a technical background, which makes it hard to know whether you can believe what you’re being told by salespeople whose goal is to close a sale. Of course, no one’s suggesting that every board should have a trustee with a computer science background. But if you think your charity has a good use case for AI and you’re serious about taking it further, I’d recommend getting in a part-time, independent board advisor who has an appropriate expert background and isn’t going to try to sell you anything.

Can you tell us more about Complexical and the work you’re doing in AI?

Complexical is a small, impact-driven technology consultancy and research firm based in Bristol.

One of the services we offer is to help organisations understand what AI could do, within their existing business model, to improve the outcomes they care about. And, of course, how to do that as safely and ethically as possible. We can also help with prototyping solutions.

Although I’ve personally worked in AI for almost 20 years, a lot of what we do isn’t actually AI: it’s helping people get their data in order, understand what they’ve got in their digital estate and stay compliant with GDPR. These are all prerequisites for using AI effectively, but they can also be enough in their own right. It might be controversial, but I think one of the hallmarks of being really comfortable in the AI space is knowing when you don’t need it and when a cheaper, simpler approach will do just as well.

What advice would you give to charity leaders about how to prepare for future developments in AI?

I think a lot of ‘AI questions’ are really business questions. You know your organisation best. You know what bottlenecks are constraining delivery and those areas where you’d benefit from greater efficiency. Going from those familiar business questions to ‘AI questions’ is largely a matter of understanding the data you hold and what it can tell you. So, I would always start with an audit of your internal data. If you don’t have good data in a usable format, then even the most advanced AI system is going to struggle to generate useful insights.
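
To make that concrete, a first pass over your data doesn’t need to be sophisticated. Here’s a minimal sketch in Python of the kind of audit I mean, assuming your records are exported as CSV files in a folder called data/ – the folder name and the keyword list for flagging possible personal data are illustrative assumptions, not a recommendation:

```python
# Minimal first-pass data audit sketch: for each CSV export, report row and
# column counts, blank values per column, and column names that *might*
# indicate personal data. The "data/" folder and keyword list are assumptions.
import csv
from pathlib import Path

PERSONAL_KEYWORDS = {"name", "email", "phone", "address", "dob", "postcode"}

for path in sorted(Path("data").glob("*.csv")):
    with path.open(newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        rows = list(reader)

    # Rough completeness check: count empty cells in each column.
    blanks = [
        sum(1 for row in rows if i >= len(row) or not row[i].strip())
        for i in range(len(header))
    ]
    # Flag columns whose names suggest personal data needing extra care.
    flagged = [h for h in header if any(k in h.lower() for k in PERSONAL_KEYWORDS)]

    print(f"{path.name}: {len(rows)} rows, {len(header)} columns")
    for name, blank_count in zip(header, blanks):
        print(f"  {name}: {blank_count} blank values")
    if flagged:
        print(f"  possible personal data: {', '.join(flagged)}")
```

Even a simple report like this tells you where your data is incomplete and where privacy questions arise, before any AI tool enters the picture.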

Then you need to think about user privacy, particularly when it comes to vulnerable service users or working in sensitive sectors. You have a responsibility to ask whether individuals would want their data to be used, especially if it involves data leaving your own systems, and to seek explicit consent where you can. Of course, you can be more adventurous with internal business information, but you still need to consider potential bias.

It’s also worth being aware that a lot of what’s currently cutting-edge functionality will soon be coming for free (or for a small subscription) as part of standard business packages like Microsoft Office. For most charities, putting a lot of effort into being an early adopter isn’t going to be the right choice when the technology is going to become steadily more mainstream.

Finally, if you’re feeling out of your depth, consider whether you need a bit of expert help. A couple of hours of consultant time could potentially save you from making expensive or embarrassing mistakes. It also shows you’re taking your legal responsibilities (such as acting with reasonable care and skill) seriously.