Group picture outside of Rånäs Castle.

The 2026 Wallenberg Scientific Forum (WASF), held at Rånäs Slott in Sweden, focused on neurosymbolic AI, bringing together leading researchers from all over the world to examine how learning and reasoning can be integrated in principled and reliable ways. With experts from machine learning, knowledge representation, planning, and cognitive science, the forum created a space for interdisciplinary exchange centered on theory, foundations, and long-term impact.

While much of today’s AI research centers on new model architectures, benchmarks, and empirical performance, WASF 2026 shifted attention to more fundamental questions. As Luc De Raedt, Wallenberg Guest Professor in Computer Science and Artificial Intelligence at Örebro University, Professor of Computer Science at KU Leuven, and one of the organizers of the forum, explains, neurosymbolic AI aims to “integrate learning and reasoning by combining neural networks with symbolic representations and solvers,” but the field still lacks unifying theories and formal models that clearly define what different approaches can and cannot achieve.

“We believe the community needs stronger theoretical and conceptual underpinnings,” says Luc. “Unifying frameworks, formal models, and theories are essential if we want to understand the limits and possibilities of current techniques.”

WASF 2026 was organized by Hector Geffner, Professor and Chair of Machine Learning and Reasoning, RWTH Aachen University (and former Wallenberg Guest Professor at Linköping University); Luc De Raedt, Wallenberg Guest Professor in AI, Center for Applied Autonomous Sensor Systems, Örebro University, and Professor of Computer Science, KU Leuven; and Pablo Barcelo, Director of the Institute for Mathematical and Computational Engineering, Universidad Católica de Chile.

An interdisciplinary field

Neurosymbolic AI sits at the intersection of multiple research traditions.

“On the neural side, neurosymbolic AI draws on almost the full breadth of modern machine learning,” Luc notes. “But on the symbolic side, we are also seeing growing interest in neural methods, for example in database theory, graph theory, and knowledge representation.”

Researchers today are studying the expressiveness of transformers and graph neural networks, integrating symbolic structures into neural models, and applying neurosymbolic ideas to continuous domains through physics‑informed neural networks. Similar trends are visible in planning and reinforcement learning, where perception is combined with symbolic decision‑making.

Large language models represent another important area of convergence. Here, formal methods are increasingly explored to introduce correctness, robustness, and reliability guarantees.

“By examining learning and reasoning from all of these perspectives, we hope to gain a more coherent and comprehensive understanding of neurosymbolic AI, and ultimately of AI as a whole,” Luc says.

The most urgent question

Neurosymbolic AI is often described as a “third wave” of artificial intelligence. From Luc’s perspective, this wave raises foundational questions that can no longer be ignored.

“Large language models already invoke tools and solvers implicitly,” Luc explains, “but this integration is largely ad hoc and lacks formal guarantees of correctness or reliability. This is precisely where symbolic AI excels.”

Symbolic methods offer structured knowledge representations and verifiable reasoning, while neural approaches excel at learning from large‑scale data. Yet each paradigm has clear limitations when used in isolation. The promise of neurosymbolic AI lies in combining these strengths in a principled way.

“The most urgent questions concern how to formalize this integration,” says Luc. “How can we combine learning and reasoning in ways that are modular, interpretable, and guaranteed to behave as intended?”

Guy van den Broeck held a talk about symbolic reasoning in the age of Large Language Models.

Shaping future research directions

Looking ahead, the organizers hope that WASF 2026 will have a lasting impact on both WASP and the broader AI research community. One key message is the importance of foundational research for long‑term progress, alongside the need for unifying frameworks that can act as interfaces between different research traditions.

“NeuroSymbolic AI, or more broadly the integration of distinct research streams, needs a common framework that enables collaboration across fields,” Luc emphasizes.

The forum is already contributing to this momentum. Discussions at WASF have helped inspire new collaborations, research proposals and initiatives.

Leslie Valiant held a keynote about Educability.
Leslie Kaelbling held a talk about Rational robotics: Learning and using neuro-symbolic models.

Interview with Leslie Valiant: Exploring educability and neurosymbolic AI

Leslie Valiant, T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics, School of Engineering and Applied Sciences, Harvard University and Turing Award–winning researcher.

During the Wallenberg Scientific Forum 2026, we sat down with one of the keynote participants, Leslie Valiant, a Turing Award–winning researcher, to discuss his recent book and the growing importance of neurosymbolic AI.

His latest book The Importance of Being Educable: A New Theory of Human Uniqueness, published in 2024, examines a central question: What cognitive capabilities make humans unique, and how can these be understood through computational models? He argues that while many abilities, such as vision, movement, and emotion, are shared with other animals, humans possess a distinct “disembodied” cognitive capacity that evolved rapidly and sets us apart. He refers to this as educability: our ability to absorb, interpret, and use information from examples, reasoning, and instruction.

These three components — learning by example, reasoning, and learning through instruction — form the foundation of his work. Today’s systems excel at learning from examples, but their reasoning abilities remain limited and often rely on ad‑hoc methods or external tools. This gap, he notes, is why the field is turning toward neurosymbolic AI, which seeks to integrate learning and reasoning more coherently.

Trust, reliability and interpretability

He views the forum’s focus on the foundations of neurosymbolic AI as timely. As AI systems become more capable, questions of trust, reliability, and interpretability grow increasingly urgent. Many systems appear to reason, but often they are simply reproducing patterns from training data or calling external software. Making systems whose behavior is more principled and therefore better understood is essential for both societal utility and public trust.

The future of AI

The conversation also touched on broader societal implications. He emphasized that even experts disagree on how everyday users should treat AI systems, which he sees as a challenge. Over the next five years, he hopes to see progress toward more reliable AI, clearer explanations of system behavior, and better understanding of when and why models fail. At the same time, he acknowledges that predicting risks is difficult: as with past technologies, society often discovers dangers only after they emerge.
Looking ahead, he expects AI to become deeply embedded in daily life, especially in areas where mistakes are tolerable, such as customer service. More sensitive domains, like healthcare or safety‑critical decision‑making, will require far more caution. He also notes that societal preferences will shape adoption: people may choose automated interactions simply because they seem more convenient, or they may choose otherwise.

Despite the uncertainties, he remains optimistic about the scientific opportunities. The forum brings together researchers from diverse backgrounds, many of whom he is meeting for the first time. Their different perspectives, he says, are exactly what the field needs to make progress on the fundamental questions of learning, reasoning, and human‑machine cognition.

Amy Loutfi, WASP Program Director.
Martin Grohe held a talk about The Power of Recurrent Graph Neural Networks.
Denis Kleyko talked about Vector Symbolic Architectures: Bridge Between Neurons and Symbols.

About Wallenberg Scientific Forum (WASF)

WASF is an invitation-only, collaborative forum supported by WASP, designed to bring together leading researchers and practitioners to address foundational challenges in AI. This year’s forum had around 45 participants from all over the world.
This edition of WASF was organized by:

  • Luc De Raedt, Wallenberg Guest Professor in Computer Science and Artificial Intelligence, Örebro University, and Professor of Computer Science, KU Leuven
  • Pablo Barcelo, Director of the Institute for Mathematical and Computational Engineering, Universidad Católica de Chile
  • Hector Geffner, Professor and Chair of Machine Learning and Reasoning, RWTH Aachen University, and former Wallenberg Guest Professor at Linköping University.

Published: April 22nd, 2026

