
Keynote Speakers


Stuart Russell

University of California at Berkeley

Title: Provably safe AI systems

Abstract: The presently dominant approach to building AI systems, based on imitation learning from text, is intrinsically unsafe. In this talk I will outline an approach that could, in principle, yield provably safe AI systems, although there is still much work to do.


Bio: Stuart Russell, OBE, is a Distinguished Professor of Computer Science at Berkeley and an Honorary Fellow of Wadham College, Oxford. He is a leading researcher in artificial intelligence, a member of the National Academy of Engineering, and the author (with Peter Norvig) of the standard text in the field. He has been active in arms control for nuclear and autonomous weapons. His latest book, Human Compatible, addresses the long-term impact of AI on humanity.


Murray Shanahan

Google DeepMind and Imperial College London

Title: What Sort of Thing Is a Large Language Model?

Abstract: As large language models (LLMs) increasingly feature in our daily lives, as a society we are struggling to understand what sorts of things they are and how to think and talk about them. Are they productivity tools, partners in co-creation, digital companions, or exotic alien minds? How can we do justice to the complex behaviour we encounter when we interact with them without falling into the trap of anthropomorphism? In this talk I will present a catalogue of examples of noteworthy LLM behaviour, and discuss how, and whether, to apply to LLMs familiar but philosophically difficult concepts such as reasoning, belief, and consciousness.


Bio: Murray Shanahan is a principal research scientist at Google DeepMind and Professor of Cognitive Robotics at Imperial College London. His publications span artificial intelligence, robotics, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He is active in public engagement, and was scientific advisor on the film Ex Machina. His books include “Embodiment and the Inner Life” (2010) and “The Technological Singularity” (2015).


Andrew Cropper

University of Oxford

Title: Automating Popper's logic of scientific discovery

Abstract: Karl Popper argues that science advances by proposing bold hypotheses and rigorously testing them against observations. In this talk, I will outline an inductive logic programming approach that automates this process. This method uses constraint solvers to generate hypotheses and supports learning recursive theories and handling noisy numerical data. I will discuss its applications in game playing, program synthesis, and visual reasoning, finally highlighting its potential for advancing automated scientific discovery.
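The hypothesize-and-test cycle described above can be caricatured in a few lines of code. This is a toy sketch, not the actual constraint-solver-based ILP system: hypotheses here are hypothetical hand-written predicates over integers, and a refuted hypothesis is simply discarded rather than turned into pruning constraints.

```python
# Toy "bold conjecture, then refutation" loop: propose hypotheses and
# test each against positive and negative observations.

def learn(positives, negatives, hypotheses):
    """Return the name of the first hypothesis consistent with all examples."""
    for name, h in hypotheses:
        covers_all = all(h(x) for x in positives)      # entails every positive
        covers_none = not any(h(x) for x in negatives)  # entails no negative
        if covers_all and covers_none:
            return name
    return None  # no consistent hypothesis in the search space

# Candidate hypotheses, ordered from most general to most specific.
candidates = [
    ("any", lambda x: True),
    ("even", lambda x: x % 2 == 0),
    ("multiple_of_4", lambda x: x % 4 == 0),
]

print(learn([4, 8, 12], [2, 6], candidates))  # -> multiple_of_4
```

In the real setting the hypothesis space is a space of logic programs and refutations become solver constraints that rule out whole families of programs at once, rather than one candidate at a time.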


Bio: Andrew Cropper is a research fellow at the University of Oxford. He works on combining logical reasoning and learning, i.e. inductive logic programming. He received his PhD in computer science from Imperial College London.


Wang-Zhou Dai

Nanjing University

Title: From End-to-End to Step-by-Step: Integrating Learning and Reasoning through Abduction

Abstract: Despite substantial advancements achieved by end-to-end learning architectures, these methods often struggle with tasks requiring explicit symbolic reasoning, robust generalization, and interpretability. Integrating statistical learning and symbolic reasoning remains a fundamental yet challenging goal in contemporary AI research. In this talk, we explore abductive learning as a principled framework to bridge neural models and formal logic, emphasizing the transition from purely end-to-end architectures toward structured, step-by-step reasoning. We will discuss how abductive reasoning—a logic-based mechanism to generate explanatory hypotheses—can guide machine learning models to autonomously discover symbols and causal relations from raw sensory inputs. Moreover, we will examine recent progress in abductive reinforcement learning, which recursively decomposes complex tasks into interpretable sub-tasks and discovers symbols and abstraction in open worlds. By moving beyond black-box approaches, this framework aims to improve model robustness, reduce data reliance, and enhance explainability, laying the foundation for the next generation of reliable and generalizable AI systems.
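The abductive step described above can be illustrated with a deliberately small sketch (this is an illustration of the idea, not Dai et al.'s implementation): a perception model outputs symbols with confidences, and abduction revises the least confident symbol so that the output satisfies a knowledge-base rule, here the hypothetical rule a + b = c.

```python
# Toy abductive revision: if the perceived symbols violate the logical
# rule, revise the least confident symbol to restore consistency.

def abduce(perceived, confidences, rule):
    """Return the perceived symbols, revising the least confident one
    if the knowledge-base rule is violated."""
    if rule(perceived):
        return perceived
    i = min(range(len(perceived)), key=lambda k: confidences[k])
    for v in range(10):  # search a small symbol space for a consistent value
        candidate = perceived[:i] + [v] + perceived[i + 1:]
        if rule(candidate):
            return candidate
    return perceived  # no single-symbol revision restores consistency

rule = lambda s: s[0] + s[1] == s[2]              # knowledge base: a + b = c
print(abduce([3, 4, 8], [0.9, 0.6, 0.95], rule))  # revises 4 -> 5: [3, 5, 8]
```

In abductive learning the revised symbols are then fed back as training targets for the perception model, so learning and reasoning improve each other over repeated rounds.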


Bio: Wang-Zhou Dai is an Associate Professor and Associate Dean at the School of Intelligence Science and Technology, Nanjing University. He received his Ph.D. in machine learning and data mining from Nanjing University. His research interests primarily include machine learning, data mining, and symbolic learning, with a focus on the integration of statistical/deep learning and symbolic reasoning. His recent work on abductive learning was recognized with the Outstanding Paper Award at AAAI 2025. Dr. Dai served as Program Chair of the 4th International Joint Conference on Learning and Reasoning (IJCLR 2024), and regularly serves as a reviewer or senior program committee member for major AI conferences (e.g., ICML, NeurIPS, AAAI, IJCAI, KDD, ICLR, IJCLR, NeSy) and top journals (e.g., TPAMI, TKDE, TNNLS, MLJ).