AI checklist for charity trustees and leaders
1. Introduction
Before you use the checklist
If your charity is new to AI, we have written a blog to accompany this checklist which explains what AI is, how other charities are using it and how you might get started with it.
Purpose of the checklist
There has been a significant increase in charities’ interest in emerging tech over the last year, with 78% of charities stating that artificial intelligence (AI) is relevant to their work, but 73% not feeling prepared to respond to the opportunities and challenges it poses. Meanwhile, other research has revealed that AI could save UK employees ‘390 hours of working time per year’ and could potentially replace 300 million jobs and a quarter of work tasks in the US and Europe.
Whilst no one knows the exact impact that artificial intelligence will have, we are all being touched by this technology every day, from verifying online banking transactions to receiving product recommendations from a retailer. And as AI becomes further integrated into the technology tools we all use as part of our working lives, trustees and leaders will need to understand why and how to deploy these technologies in a way that aligns with their charitable purpose.
As artificial intelligence and other emerging technologies grow in sophistication, leaders will need to understand how they are changing the lives of their beneficiaries, and what this could mean for the decisions about strategy, scrutiny and support that need to be made around the board table, whatever the size of your organisation.
The aim of this checklist is to help you:
- Create a shared understanding of AI amongst trustees and leaders
- Make the right decisions about AI
- Review progress so far
About artificial intelligence
By artificial intelligence we mean the capability of a computer system to undertake human-like tasks, such as problem solving and learning. You may have come across different types of AI such as generative AI (which includes tools such as ChatGPT), a relatively new development where algorithms create new content; and predictive AI, which can predict outcomes based on historical data patterns and existing information.
If your charity, or your trustees, are just getting started with AI we have written a blog to accompany this checklist with more detail about what AI is, how charities are using it, and how to get started with it.
How to use the checklist
From our work with charities we know that everyone is at different stages with these technologies. You may be starting out, or have been using them for some time.
If you are starting out, we encourage you to begin with the first section below on developing your understanding. Once you have explored this section and feel confident with it, you might want to move onto the next section on skills.
We know that AI is a big topic so you may wish to pick one topic at a time, and you may even ask your executive team to review your progress against each topic and report back to trustees. Do not worry if you cannot answer all the questions straight away – the idea of each point is to prompt discussion.
We hope that our checklist will help charities start the conversation about AI around the board table, regardless of size, resources or stage of digital maturity. We encourage you to focus on the points which are most relevant to you.
In the checklist we have referred to staff, but the points below may also be relevant to volunteers.
Thank you to Nick Scott for his support in drafting the checklist and to Kaz McGrath at PLAI – Purpose-Led AI for her contributions. Also a huge thank you to the many people who reviewed and provided feedback on early drafts.
2. What charities have said about the checklist
This resource will undoubtedly serve as a powerful and transformative tool for any charitable organisation embarking on the exciting and innovative journey into the realm of artificial intelligence.
Antonio Cappelletti, Digital Lead, Christian Aid
There is so much to still figure out on this path, and your tool is a really positive place to start thinking about how AI fits into an individual charity across its structure, resources, aims and risk appetite.
Natasha Iles, Head of Development & Communications, Wikimedia UK
I found the AI checklist to be full of useful questions that all Trustees should be considering. The skills, ethics and governance areas are so important for strategic leaders, and often overlooked.
Stephen Thorlby‑Coy, Director of IT and Digital Services, Hospice UK
3. Developing our understanding
- Are staff or volunteers already using AI tools such as ChatGPT? How are they using them?
- How openly are these tools being used in the organisation? Are staff confident that learning is happening in the open, so that the risk of misuse is minimised?
- What do we know about the hopes and fears of staff around the use of AI tools?
- Do all the leadership team understand AI and how it might impact on their areas of responsibility?
- What can we learn from how other charities are using AI that is relevant to us?
- Have we already identified specific areas where AI can improve our operations, programs, or service delivery? What are they and why are they important?
4. Skills
- Are we offering space and time for staff to experiment safely with AI tools? Do we have processes to capture and share learning?
- Are staff developing the skills required for AI, such as data literacy; familiarity with relevant programming languages, tools and platforms; prompt engineering (the ability to craft prompts which get effective results from tools like ChatGPT); horizon scanning; and ethical principles?
- Are staff also developing the soft skills required for AI e.g. problem solving, collaboration, the ability to find new ways of doing things?
- Do we know which staff have already developed their skills at using AI – or specific AI tools?
- Are we incorporating hard and soft skills relating to AI when recruiting new staff or redeveloping job descriptions?
- Are we developing knowledge sharing opportunities between staff and also with suppliers and peer organisations?
- Have we created a continuous learning mindset and how is this embedded into our culture and ways of working?
- Are staff developing the right skills, and are they adequately resourced, to edit, quality assure and fact check outputs created by AI tools?
5. Ambition and purpose
- Have we considered how artificial intelligence (AI) could enhance our mission, goals and impact?
- Have we scenario planned for different ways in which AI could play out in our future? What are the potential impacts of AI on our stakeholders, business models, strategies and mission?
- How might AI change our audience’s behaviour, what they want from our charity, and how they interact with us?
- How are peer organisations using AI tools? How might we differentiate ourselves from them?
- How do we foster collaborations across the sector (could some of the opportunities, or risks, be easier to manage in partnership with others)? Are we creating space to discuss why and how to use these tools and encourage critical thinking about them, both within and outside our charity?
- How might we leverage AI to expand our reach, increase efficiency, or enhance the impact of our charitable activities?
- How might we break down silos between teams and/or data and use AI to increase the overall effectiveness of our strategy across our areas of work?
- How might we adopt AI tools in a way that is consistent with our values?
- How could AI help our leadership be more effective at an individual and group level?
6. People and productivity
- Are staff exploring which repetitive, manual tasks they could use AI for? For example, summarising meeting minutes, drafting digital marketing content or automating processes.
- Which tasks could AI tools undertake that are currently undertaken by staff or volunteers?
- Which are the staff most affected by AI? Do we need to be planning to evolve their roles and responsibilities as a result?
- How do we reorganise work and processes that are made redundant by AI?
- Are staff on board with working with AI?
- How are we incentivising and supporting staff in using AI? How might they do this while maintaining best practice?
- How might the adoption of AI impact our culture and the principles, processes, behaviours and structures we need (e.g. greater agility, or a change in roles)?
- Are we communicating about AI in a way that reassures staff, whilst being transparent about how and why we are engaging with AI?
7. Services
- How is AI reshaping what our users need e.g. for students if you’re an education charity?
- Are we exploring how AI solutions (for example chatbots) could be responsibly and safely used in service delivery to increase scale and handle simple or repetitive tasks?
- How can AI add value to our services (e.g. advising people out of hours or helping us to reach more people)? Are we putting the right safeguards in place so we can scale safely (e.g. small pilot projects)?
- Are we devoting the time and resources needed to test how AI might be used in our services so that it can be deployed safely?
- Are we tracking how AI is helping organisations in our field? For example, if you’re a medical research charity how are research organisations using automated drug discovery?
8. Marketing, communications, fundraising
- How might supporters find out about our charity using tools such as ChatGPT, or search engines with AI built in such as Bing? How might their expectations change?
- How well do our current content channels – in particular our website – support robots already (for example Google search bots)?
- How can AI support us across the many different stages of a supporter journey – knowing who to contact and what drives their engagement, when to contact them, how to contact them and what content is best suited for them to increase impact?
- How might AI help with fundraising bid writing? For example, could resources such as Charity Excellence Framework’s AI bid writing service help?
- Where could we use AI generated content – text, images – and in what format? What usage would represent a reputational risk and what would not? How would we label our usage?
- How could we use AI to increase the personalisation of content and messaging for our supporters?
- Are we at increased risk of more convincing misinformation or disinformation attacks due to AI? Do we have processes in place to monitor and respond to these?
9. Accountability
- Who is accountable for developing, deploying or utilising AI? If we work with suppliers, what are their values and ethics and how do they compare with ours?
- Are we diversifying our investments in new technologies due to rapid developments in the market?
- Have we considered, and been transparent about, how our ambitions for AI align with our aims for other areas of emerging tech such as robotic process automation, Web3 or the use of cryptocurrencies?
- How much do we know about the methodologies, data, models and results of any providers we contract to develop and implement AI? How transparent are they?
- Have we considered the environmental impact of AI and have we asked suppliers how they are addressing this?
10. Data
- Do we have the right processes in place to keep beneficiaries’ and supporters’ data safe when we are using AI tools?
- Have we communicated these parameters to staff /volunteers, including which data we do not want them to upload to AI tools?
- Do we need to review our data strategy and processes for data collection, storage, analysis, management and use as a result of using AI, ensuring that we comply with legal, cybersecurity and ethical obligations? Have we reviewed our processes and policies?
- What opportunities does AI offer for us to improve our use of data? Are we taking advantage of them? (For example, AI can be used to work around low digital maturity or poor-quality data – you can use it to clean and augment messy data for insights, or to organise systems.)
11. Governance
- Have we given staff/volunteers parameters for how to use these tools, specifying what they need to be aware of and how to manage risks?
- Do we have people in our team with the required knowledge, skill, expertise and experience to design responsible AI governance? Do we need to consider bringing in support?
- Do we need to update our policies, for example by reviewing our data, information security, data protection and IT policies, developing a digital ethics policy or creating an AI policy?
- Are trustees developing their skills and understanding, for example of the opportunities and risks for the charity?
- Do we have the right level of digital skills on our board to make informed decisions about AI and other areas of emerging technology, so that the right level of strategy, scrutiny and support is provided?
- Are we tracking potential regulatory frameworks for AI and how this might impact us?
12. Inclusion
- Which decisions are being handed over to machines / AI? Have we defined any risks involved in delegation, and planned for how to mitigate these risks?
- Are we building in guardrails to combat bias in AI tools and what do these look like?
- Are we conducting due diligence when we procure AI tools (e.g. for bias, data security and ethical fit)?
- Is AI being employed in any high risk areas, for example intermediating human interaction, such as collecting client information to assess mental health conditions or providing counselling services?
- How can we make sure digitally excluded users can still access our services and content?
- Are we involving different perspectives and types of lived experience in both developing and testing how we use AI, creating space for challenge?
- If appropriate, are we collaborating with other charities to ensure that AI is transparent, fair and accountable, for example by asking questions of suppliers or, if appropriate, campaigning alongside other organisations?
- How can we grow our understanding of the bias within AI systems and how, at scale, this impacts our work and the communities we work alongside? (See Google’s What-If Tool, which allows people to see how decisions in AI impact people: https://pair-code.github.io/what-if-tool/)
13. Download the PDF
If you’d like to view or download a PDF version of the checklist, please click the button below.