In a world increasingly shaped by algorithms, the question is not whether we use AI, but how we use it.
AI for good
Posted:
22 Oct 2025
The AI for Good Global Summit is the United Nations’ (UN) leading platform for advancing the use of artificial intelligence to address global challenges. Held annually in Geneva, Switzerland, the summit is organised by the International Telecommunication Union (the UN’s specialised agency for information and communication technologies) in partnership with over 40 UN agencies, and this year it attracted over 11,000 participants from 169 countries. The research lab that I lead, the Cambridge Affective Intelligence and Robotics Lab (AFAR Lab), and I were invited to attend the summit to give a talk and exhibit our robotic systems, which aim to provide solutions related to UN Sustainable Development Goal 3 (SDG3): ensuring good health and wellbeing for all.
My talk, titled One Size Does Not Fit All: AI and Social Robotics for Assessing Child Mental Wellbeing, presented our interdisciplinary efforts to develop socially intelligent robots for mental wellbeing assessment. In collaboration with the Department of Psychiatry at Cambridge, we’ve been studying how robot-led interactions can provide more accurate, accessible assessments for children aged 8 to 13. By combining structured interactions, validated psychological questionnaires, and AI models, we found that our robotised assessments could help identify wellbeing concerns more effectively than traditional self- or parent-reports. Automated analysis of nonverbal behaviours showed that children with higher wellbeing expressed themselves more openly, with notable gender differences observed. These findings challenge one-size-fits-all assessment methods and support the need for personalised tools that account for individual differences.
The summit came at a particularly special time for us, as it marked the culmination of our EPSRC-funded Adaptive Robotic Emotional Intelligence for Wellbeing project. Since 2019, this project has explored how AI-driven robotics can support mental wellbeing for children and adults through mindfulness coaching and positive psychology interventions, not only in the lab but also in cafés and workplaces. Our VITA system, a longitudinal robotic wellbeing coach, showed significant improvements in participant wellbeing over a one-month pilot. At the summit, Dr Micol Spitale and Dr Minja Axelsson presented VITA to an international audience, attracting a great deal of interest and even a national TV interview. Meanwhile, our SORA4Wellbeing system offered an engaging, validated way to assess children’s mental health both in person and remotely. This line of research has been covered by over 1,700 global media outlets and has received several awards, including Runner-up for the Collaboration Award at the 2023 University of Cambridge Vice-Chancellor’s Awards and the Best Paper Award in Responsible Affective Computing at the 2023 IEEE International Conference on Affective Computing and Intelligent Interaction.
I also attended a thought-provoking talk by Dr Sasha Luccioni, AI & Climate Lead at Hugging Face, on balancing the promise of AI with its ecological cost. She emphasised that focusing only on direct emissions offers a misleadingly narrow view of AI’s environmental impact. Instead, we need to consider Jevons’ Paradox – a phenomenon where increased efficiency doesn’t necessarily reduce overall consumption. In fact, making AI models faster, cheaper, or more accessible can increase demand and usage, which can lead to a larger overall footprint. Reflecting on my experiences at the summit, I felt a renewed sense of purpose.
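The dynamic behind Jevons’ Paradox can be sketched with toy numbers (all figures hypothetical, purely for illustration): if an efficiency gain halves the energy per query but cheaper access triples usage, the total footprint still grows.

```python
# Illustrative toy model of Jevons' Paradox (all numbers hypothetical).
# Efficiency gains cut the energy cost per AI query, but cheaper queries
# attract more usage, so total consumption can still rise.

def total_energy(energy_per_query: float, num_queries: int) -> float:
    """Total energy consumed = per-query cost x volume of queries."""
    return energy_per_query * num_queries

# Before: 10 Wh per query, 1 million queries per day.
before = total_energy(10.0, 1_000_000)

# After a 2x efficiency gain, suppose demand triples:
after = total_energy(5.0, 3_000_000)

# Each query is twice as efficient, yet the overall footprint is 50% larger.
print(f"before: {before:,.0f} Wh, after: {after:,.0f} Wh")
print(f"change: {(after / before - 1):+.0%}")
```

Whether the footprint grows depends entirely on how strongly demand responds to the lower cost, which is exactly why per-query efficiency alone is a misleading metric.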
The AFAR Lab’s most recent work, What People Share with a Robot When Feeling Lonely and Stressed and How It Helps Over Time, will be presented at the 34th IEEE International Conference on Robot and Human Interactive Communication at the end of August 2025. What is exciting about this work is that it is the outcome of research undertaken by AFAR postdoctoral researcher Dr Guy Laban, which included Trinity Hall participants and was conducted in the offices at Central Site.
In this study, students engaged in repeated conversations with QTrobot, a small humanoid robot powered by a large language model (GPT-3.5) and designed to support emotional reflection. Over five sessions, students disclosed personal events and feelings to the robot, which guided them to reflect on their emotional experiences and constructively reinterpret challenges. Results showed that, over the course of their participation, students felt less lonely and stressed, and increasingly opened up to the robot during conversations, using richer emotional language and displaying more expressive facial behaviour. Those feeling more distressed, lonely, or stressed tended to talk about friendships and connection, suggesting unmet social needs, whereas students who felt less distressed spoke more about personal growth, creativity, and academic ambition. These findings demonstrate how social robots have the potential to surface students’ emotional needs through everyday conversation.
We recently started a new project called MICRO (Measuring children’s wellbeing and mental health with social robots) that received €1.5M in funding, bringing together a multi-disciplinary team of researchers based at universities across Europe. The project will explore the use of social robots to measure children’s wellbeing and mental health in schools, focusing primarily on vulnerable groups, such as children with developmental language disorders and refugee children who might benefit from preventative interventions.
Our mission, therefore, has been made ever clearer. We are building AI and robotic systems that serve people, rather than pursuing progress for its own sake.
Feature image: ©ITU/AI for Good