Headshot image of Dr Francine Ryan, Senior Lecturer in Law & Director of the Open Justice Centre at The Open University.

We talk to Dr Francine Ryan, Senior Lecturer in Law and Director of the Open Justice Centre at The Open University, about the project she is leading on responsible AI – including the free online courses they’ve developed – and how charities can get involved to develop their understanding of Generative AI (GenAI).

1. Lawyers and members of the public – who are representing themselves to save costs – could pay a high penalty for relying in court on misinformation generated by GenAI tools. Can you tell us more about the consequences of this? 

Although tools like ChatGPT are not designed for legal advice, we know that many people are using them to find legal information and conduct legal research. In June 2025, the High Court considered two cases, Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), where there had been alleged misuse of GenAI. In the judgment, the court said that lawyers have a professional duty and responsibility for their submissions, which includes ensuring they have checked and verified outputs from AI tools.

The Judicial Office Holders’ publication, Artificial Intelligence (AI): Guidance for Judicial Office Holders (31 October 2025), recognises that AI might be the ‘only source of legal advice’ for some people representing themselves (litigants in person) and they might not have the skills to check and verify the accuracy of the outputs. The guidance states that ‘if it appears an AI chatbot may have been used to prepare submissions or other documents’, judges should ask litigants if AI has been used and check what ‘accuracy checks (if any)’ were done and ‘inform the litigant that they are responsible for what they put to the court/tribunal’. Misleading the court could result in cost penalties and have serious consequences for a party’s case.

2. Can you tell us about the AI, Law and Legal Training project? 

It was a skills project, funded by UKRI Responsible AI, that brought together a multidisciplinary team from The Open University, University of Lincoln and Citizens Advice to co-produce research-informed resources to enhance knowledge and awareness of, and confidence in, the use of GenAI for understanding legal processes and accessing legal information. As part of the project, we held three workshops with the legal and advice sector to inform the resources we created. The findings from the workshops are shared in our interim report. The project aims to educate and empower the public, advice sector, charities, small and medium-sized law firms, students and academics to better understand the opportunities and limitations of GenAI.

3. How can charities get involved? 

We have created eight free, open-access and engaging online courses that provide ethical and responsible knowledge of GenAI and the skills to use it. Participants receive a digital badge and certificate upon completing a short quiz at the end of each course. To find out more and explore the courses, go to: OLCreate: AI Law and Legal Training. We hope that charities will get involved by encouraging their staff, volunteers and trustees to complete the courses. The courses have been designed for beginners, and we have suggested pathways for different users. We are planning an evaluation of the courses next year, and we would love charities to be involved as part of that evaluation.

4. You’ve said that ‘There is a significant risk of societal harm if we do not build capacity on how to use GenAI ethically and responsibly’. What does this risk look like to you, and how should we mitigate it? 

We explore some of the societal risks in Course 5: Ethical and responsible use of Generative AI. We need to ensure that people understand that GenAI can create realistic but fake images, and that deepfakes could be used to impersonate public figures and to influence or manipulate us. We often don’t know what data a GenAI system has been trained on. If AI systems are trained on biased data, without ethical oversight, there is a significant risk that those biases become embedded within our systems and affect decision-making.

As AI becomes more widespread, there is a growing risk of de-skilling and dependency on automated tools. This over-reliance could lead to complacency, with people failing to check, or even recognise, the inaccuracy of outputs. Ultimately, this could undermine the safety of AI systems.

Education and capacity building are key ways in which we can mitigate these risks. It is important that everyone has the knowledge and skills to use AI, and to critically evaluate it, so they can understand its potential, recognise its risks, and contribute to, and shape, the conversations around how AI technologies are used within all aspects of our lives. This is why free, trusted resources and training on AI are so important. For anyone in the charity sector, CAST provides free digital resources. Anyone leading a charity should complete the ZAD AI Leadership Essentials training.

5. What AI trends do you see ahead for the law and advice workers in the next two years? 

We know that AI tools are already being used within both the legal and advice sectors – in Course 4: Use cases for Generative AI we explore some examples. I think we are going to see the emergence of new models of legal services, where the AI system is central to the delivery of legal advice. In May 2025, the Solicitors Regulation Authority approved the first AI-driven law firm, Garfield Law, which uses an AI-powered litigation assistant to support small debt claims. In July 2025, the Ministry of Justice launched its AI Action Plan for Justice, which aims to embed AI across the justice system.

We know that many people cannot afford to pay for legal advice, so AI-assisted platforms could help people navigate legal processes and resolve their legal problems without the need for lawyers. The Law Society has suggested that the ‘government should create a free AI powered tool to help people understand their legal issues’. AI technologies have the potential to automate routine tasks, which could free up advice workers to support more people, especially those facing complex issues. However, it is essential that we think very carefully about AI adoption to ensure it is done ethically and responsibly. People must be at the centre of all our decisions.

If you are interested in finding out more about the AI Law and Legal Training project and the courses it has produced, feel free to reach out to the team at open-justice@open.ac.uk. They would love to hear your feedback on the courses and how they can continue to support the sector with education and training.