
Over the last year we have seen an explosion in charities’ use of AI, rising from 61% in 2024 to 76% in the 2025 Charity Digital Skills Report. In this year’s report I was excited to see that almost half (48%) of charities now have an AI policy, a figure that has tripled since 2024, when only 16% did. That rises to 68% of large charities, compared to 37% of small charities.

This is good news. This Trustees Week, it should be music to charity boards’ ears that more charities are unlocking the benefits of AI and getting the right governance in place, and a policy is a helpful first step towards that.

Why isn’t 48% of charities having AI policies necessarily good news?

However, are these numbers as rosy as they first appear?

As part of my day job as a digital and AI consultant, I’ve seen many of these draft policies, and drafted (and redrafted) them myself. I’ve seen some brilliant policies – and some which are full of red flags. That is why the 48% gives me pause for thought. In truth, if you don’t have someone with AI expertise on your team or board, you won’t know how good the policy is until it is tested – which could be when something has gone wrong, or when you realise your board doesn’t have the information to make a key decision about AI.

What we don’t know from the report is the quality of these policies, or what governance sits around them. Perhaps we will gather data on this in the 2026 Charity Digital Skills Report. Ahead of that, I wanted to share some reflections on charity AI policies that trustees might find helpful during Trustees Week. Like me, maybe you’ve been asked to take a look at your charity’s shiny new AI policy.

What makes an effective charity AI policy?

As a charity trustee and AI consultant, here is what I love to see in AI policies:

A well-defined scope. I have seen policies which limit their scope to generative AI. Given the pace at which AI is developing, I think this should be broadened to all forms of AI. I would define AI at the start of the policy but keep the door open for further innovation, e.g. include the wording ‘including but not limited to’ when introducing your definition of AI and referring to different types of AI.

Clear principles linked to your charity’s values. In many ways this is the most important part of any charity AI policy. These are the foundational rules and guidelines which will enable your charity to adopt AI ethically, equitably and responsibly. The principles you use should be linked to your charity’s values, and ideally the behaviours you want to see from staff and volunteers when they use AI in their roles.

For example, if one of the principles in your AI policy is about being transparent, what does that really mean? Is it about telling your donors and beneficiaries how you are using AI? Telling staff about your plans for using AI? Planning how to communicate both of these things? What do you want staff and volunteers to do differently to make this happen? It will really help your team if the policy is as clear and specific as possible about this.

Good guidance on data privacy. To be fair, this does appear in many of the charity AI policies I have reviewed. However, one of the tripwires of AI tools is that they are just so easy to use – and I have heard of staff unwittingly putting sensitive data into them. Providing examples of what staff should and should not do could be the difference between your charity becoming a Daily Mail story about a data breach or not.

Opens the door for training and support. One thing that has surprised me when working with charities who are advancing with AI is how many questions staff often have once an AI policy is launched. Sometimes these aren’t about the policy – they are broader questions about what staff have permission to do, and what is expected of them.

This really shows how we are all figuring this out together. It also tells us that there will be opportunities to engage staff off the back of a new AI policy being launched. This could take the form of training, all-staff meeting announcements, show-and-tells in team meetings, or lunch-and-learns. Your charity needs to think about how it can offer these opportunities in the different places where you bring staff together, piggybacking on the energy that comes with disruption and making it positive.

This is where an AI policy really is unlike any other policy. I have never seen staff get this excited or energised about a new expenses policy.

Flies the flag for inclusion. This is something I don’t see getting talked about nearly enough in charity AI policies. How will your charity make decisions about AI to ensure that marginalised communities are not disadvantaged? This could range from beneficiaries who are digitally excluded, to specific groups who have not been factored into the design of third party tools, to supporting staff who may be nervous about technology, let alone AI.

Spells out what your AI governance looks like. Do you have an AI working group? Will progress with AI be reported up to the board? Do you need to update your risk register? You need to map out what your AI governance looks like and make sure it is integrated with your organisational governance.

Finally, don’t forget to draft your AI policy with sector best practice in mind, such as The Charity Digital Code of Practice, which includes a major focus on AI guidance for small and large charities.

What should charity trustees do about AI policies right now?

A charity AI policy is not a one-and-done compliance exercise. Of all the policies you have, it is likely to be the one you review most often. In the years to come we will see a huge amount of innovation and change across the sector, and your AI policy is a critical mechanism for ensuring your charity is prepared.