“What are we going to do with all the time we will have?”
This question was asked by the chair of a charity where I was running an AI workshop. It’s a brilliant question that not enough people are asking.
Across many of the charities we work with, we are seeing how AI helps people save time and money. Used responsibly, on the right things, it can free up staff capacity, whether that means lightening the load of admin tasks, acting as a thinking partner or doing the heavy lifting on strategic tasks.
So what are people doing with the extra time they have? I’ve spoken to fundraisers who are now able to go out and meet more donors and CEOs who are able to build better relationships with stakeholders because they are delegating some of their work to AI. It’s a positive development because sector staff are notoriously stretched thin at work, with little headspace to plan and strategise.
Much of the conversation about the benefits of AI at the moment centres on efficiency. However, I wonder if we need to look at a broader set of metrics when measuring its success.
The promise of more time
When I was a child, one of my favourite books was about the technologies that would change the future. The book promised that people would only have to work three days a week, as robots and other forms of technology would perform most of their tasks at work and at home.
I feel like I’ve read different versions of this argument over the years, from economist John Maynard Keynes predicting in 1930 that we would all be working a fifteen-hour week to the promise of the paperless office. Most recently, the case has been made for Universal Basic Income, funded by a levy on the companies whose products would ultimately leave humans with little work to do.
What’s really happening
What I’m seeing in practice though is that as AI takes some of the workload off employees, they take on more work to fill the time. This can be good news, such as the fundraiser who can go out and meet more donors. However, if this is taken to an extreme, it could lead to burnout. That fundraiser will still need to take time for quality assurance on the tasks they give to AI. Someone still has to do the thinking and reviewing.
I’ve noticed something else too. As Dan Sutch of CAST says, the way AI is changing work isn’t always visible; there is a lot of grassroots innovation in AI happening across the sector, which leaders aren’t always aware of. We know from our own Charity Digital Skills Report that 76% of charities are using AI, but more than 1 in 4 charities (28%) say that their boards have poor digital skills, while more than 1 in 5 CEOs (21%) have poor digital skills.
This is worth paying attention to. If AI is genuinely reshaping how people do their jobs, I think there may be a case to review roles and responsibilities, not with the aim of cutting posts, but to make sure people are getting the right support and there’s clarity on what’s expected of them.
Let’s come back to the fundraiser we talked about earlier. Imagine they’ve become a really confident user of AI. They’re proactive, innovative, and getting great results. I wonder if this could be better recognised in their role. Could we free up some of their time to coach the rest of the fundraising team and share what they’ve learned? It feels like there’s an opportunity here that we might be missing.
Beyond efficiency: how we measure success
When I talk to charities about how they measure success with AI pilots, they often start with metrics like time saved and salary costs, for example ‘we saved 10 hours of staff time at an average cost of £30 an hour.’ This makes sense as a place to begin. It gives senior leadership teams and boards the quantifiable benefits they’re looking for, and it can help offset some of their anxieties about the costs of AI tools and the investment needed to scale access across the organisation.
I wonder if we need a more holistic set of metrics alongside these. What about the value and impact created? I’m thinking about the fundraiser who, because of the clever ways they are using AI, now has time to visit more high net worth donors and secures a major gift. Or the CEO who has the headspace to build a coalition with peers and secure a policy change through campaigning. I think we need to track these things too, otherwise the conversation stays stuck on efficiency.
I know this isn’t easy. AI pilots may only run for short periods, and these kinds of opportunities only emerge over the longer term. But for me, they’re still part of the picture when it comes to measuring the impact of AI.
Wellbeing and employee satisfaction
Back at the AI workshop, the chair was discussing her views on AI. Refreshingly, she said that she did not want staff to feel under pressure to fill time freed up by AI with more work. She knew how hard staff worked and trusted them to organise their time.
This is part of the human-centred approach we all need to take to AI adoption. Much as I am an advocate for AI, one of my worries is that it will make work feel more commoditised and transactional. The ultimate test is whether we can keep the needs of staff and the communities we serve at the centre of how we make decisions.
So I wonder if we should be measuring the impact on employee satisfaction and wellbeing. How is AI making their working lives better and more rewarding? And if it isn’t, how can we change that?
This has all got me thinking that if your organisation has a wellbeing policy, you might want to review it as you develop your AI policy.
Why this matters
AI is a change programme, and we need to measure its success accordingly. It requires a holistic set of metrics to track its impact. There is nothing wrong with tracking the time and money saved – we absolutely should do this. And we should also think about the other changes AI may be driving, for better or worse, and what we might need to track as a result.
What this comes down to is having an organisational understanding of the opportunities and risks which come with AI, and leaders owning the agenda on this, informed by insights from staff using AI on the ground. Until this happens, we are all going to be looking at AI through different lenses, not seeing the whole picture on what is changing now – and the bigger changes coming at us all in the future.
Where does this leave us?
So, back to that question from the charity chair: what are we going to do with the time that AI creates?
I’ve been thinking about it ever since she asked, and I still don’t have a simple answer, not least because the reality of what I see in the sector is hugely overstretched staff, who are doing incredible work on limited resources. However, I do think we need to be more intentional about asking the question in the first place. If we let AI adoption happen to us rather than shape it ourselves, I worry we’ll end up in a world where people simply work harder, not smarter or happier.
The charities getting this right will be the ones treating AI as a change programme, not just a productivity tool. It’s about having honest conversations with staff about what’s changing and what support people need.
I wonder if that’s really what all of this comes down to. What happens to people when their work changes? It’s not a question I hear asked enough. Yet it feels like the one that matters most.