Bernhard Schölkopf
From ML for science to causal digital twins
Abstract: Machine learning is being deployed across the sciences, changing the very way we perform scientific inference. This not only allows us to scrutinise datasets too large for human analysis, but also expands the domain of problems amenable to mathematical modeling toward greater complexity. However, standard machine learning has weaknesses when it comes to discovering causal relationships as opposed to potentially spurious correlations. I will discuss the associated failure modes as well as some success stories (with a particular focus on astronomy), including some that use methods of causal machine learning.
Bio: Bernhard Schölkopf studies machine learning and causal inference, with applications in fields ranging from astronomy to robotics. Trained in physics and mathematics, he earned a Ph.D. in computer science in 1997 and became a Max Planck director in 2001. A professor at ETH Zurich and a Fellow of the CIFAR program Learning in Machines and Brains, he has received the ACM-AAAI Allen Newell Award, the BBVA Foundation Frontiers of Knowledge Award, and the Royal Society Milner Award. He co-founded the MLSS series of Machine Learning Summer Schools, the ELLIS Society, and helped start the Journal of Machine Learning Research, an early milestone in open access and today the field’s flagship journal.

Cynthia Rudin
🏆 IJCAI-25 John McCarthy Award
Interpretable Machine Learning and AI, John McCarthy and I
Abstract: In Interpretable Machine Learning, we add constraints to models to make them easier to understand. I will discuss two benefits of interpretability: improved *troubleshooting* and *scientific discovery*. Troubleshooting is central to computing, as John McCarthy asserted from the 1950s onward, and it is also central to machine learning. I will discuss how Interpretable Machine Learning enables substantially easier troubleshooting of machine learning models. For tabular data, I will discuss sparse models and the Rashomon Set Paradigm. In this paradigm, the goal is to find all low-loss models from a given function class and visualize them to enable user interaction. This paradigm expands machine learning to include not just optimization but also enumeration and visualization. It reshapes the way we think about developing models, resolving the "interaction bottleneck" that makes it difficult to interact with classical machine learning algorithms and troubleshoot them. Interpretable Machine Learning also enables scientific discovery. I will discuss a discovery we made concerning computer-aided mammography with interpretable neural networks: my team found subtle asymmetries that predict breast cancer up to 5 years in advance.
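As a toy illustration of the Rashomon Set Paradigm described in the abstract, the minimal sketch below enumerates every model in a tiny function class (depth-1 threshold rules on synthetic data) and keeps all models whose loss is within a tolerance of the best one. The dataset, function class, and tolerance are illustrative assumptions, not the lab's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

# Tiny function class: rules of the form "predict 1 iff feature j > t".
candidates = [(j, t) for j in range(X.shape[1])
              for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19))]

def loss(j, t):
    # 0-1 loss of the rule on the toy data.
    return float(np.mean((X[:, j] > t).astype(int) != y))

losses = {c: loss(*c) for c in candidates}
best = min(losses.values())
eps = 0.02  # Rashomon tolerance: how far above the best loss still counts
rashomon_set = [c for c, l in losses.items() if l <= best + eps]
print(f"{len(rashomon_set)} of {len(candidates)} rules are near-optimal")
```

Returning the whole near-optimal set rather than a single winner is what makes the enumeration-and-visualization step, and hence user interaction, possible.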
Bio: Cynthia Rudin is the Gilbert, Louis, and Edward Lehrman Distinguished Professor in Computer Science at Duke University. She directs the Interpretable Machine Learning Lab, whose goal is to design predictive models that people can understand. Her lab applies machine learning in many areas, including healthcare, criminal justice, materials science, and music generation. Prof. Rudin holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. Prof. Rudin is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for AAAI, ACM SIGKDD, DARPA, the National Institute of Justice, the National AI Advisory Committee's subcommittee on Law Enforcement (NAIAC-LE), and the National Academies of Sciences, Engineering and Medicine.

Heng Ji
Science-Inspired AI
Abstract: Unlike machines, human scientists are inherently “multilingual,” seamlessly navigating diverse modalities—from natural language and scientific figures in literature to complex scientific data such as molecular structures and cellular profiles in knowledge bases. Moreover, their reasoning process is deeply reflective and deliberate; they “think before talking,” consistently applying critical thinking to generate new hypotheses. In this talk, I will discuss how AI algorithms can be designed by drawing inspiration from the scientific discovery process itself. For example, recent advances in block chemistry involve the manual design of drugs and materials by decomposing molecules into graph substructures—i.e., functional modules—and reassembling them into new molecules with desired functions. However, the process of discovering and manufacturing functional molecules has remained highly artisanal, slow, and expensive. Most importantly, many known commercial drugs and materials have well-documented functional limitations that remain unaddressed. Inspired by scientists who frequently “code-switch,” we aim to teach computers to speak two complementary languages: one that represents molecular subgraph structures indicative of specific functions, and another that describes these functions in natural language, through a function-infused and synthesis-friendly modular chemical language model (mCLM). In experiments on 430 FDA-approved drugs, we find that mCLM significantly improves 5 out of 6 chemical functions critical to determining a drug's potential. More importantly, mCLM can reason over multiple functions and, over multiple iterations, improve FDA-rejected drugs (“fallen angels”) by remedying their shortcomings. Preliminary animal testing results further underscore the promise of this approach.
Bio: Heng Ji is a Professor of Computer Science in the Siebel School of Computing and Data Science at the University of Illinois Urbana-Champaign, with faculty affiliations in the Electrical and Computer Engineering Department, the Coordinated Science Laboratory, and the Carl R. Woese Institute for Genomic Biology. She is an Amazon Scholar, the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE), and the Founding Director of the CapitalOne-Illinois Center on AI Safety and Knowledge Systems (ASKS). She received her Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models and Vision-Language Models, AI for Science, and Science-inspired AI. Her awards include an Outstanding Paper Award at ACL2024, two Outstanding Paper Awards at NAACL2024, "Young Scientist" by the World Laureates Association in 2023 and 2024, "Young Scientist" and membership of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017, "Women Leaders of Conversational AI" (Class of 2023) by Project Voice, the "AI's 10 to Watch" Award by IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, the PACLIC2012 Best Paper runner-up, the "Best of ICDM2013" and "Best of SDM2013" paper awards, an ACL2018 Best Demo Paper nomination, the ACL2020 and NAACL2021 Best Demo Paper Awards, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2020, served as an associate editor for IEEE/ACM Transactions on Audio, Speech, and Language Processing, and was Program Committee Co-Chair of many conferences, including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023.

Luc De Raedt
Neurosymbolic AI: Combining Data and Knowledge
Abstract: The focus in AI today is very much on learning from data alone, but one should not learn what one already knows. The challenge therefore is to use the available knowledge to guide and constrain the learning, and to reason with the resulting models in a trustworthy manner. This requires the integration of symbolic AI with machine learning, which is the focus of neurosymbolic AI, often touted as the next wave in AI. I will argue that Neurosymbolic AI = Logic + Probability + Neural Networks. This will allow me to specify a high-level recipe for incorporating background knowledge into any neural network approach. The recipe starts from neural networks, interprets them at the symbol level by viewing them as “neural predicates” (or relations), and adds logical knowledge layers on top of them. The glue is provided by a probabilistic interpretation of the logic. Probability is interpreted broadly: it provides the quantitative, differentiable component necessary to connect the symbolic and subsymbolic levels. Finally, I will show how the recipe and its ingredients can be used to develop the foundations of neurosymbolic AI.
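To make the recipe concrete, here is a minimal hand-rolled sketch in PyTorch, not the API of any particular neurosymbolic system: two toy neural predicates output probabilities, and a two-clause logic program on top receives the probabilistic interpretation that makes the whole stack differentiable. The networks, program, and data are illustrative assumptions.

```python
import torch

# Two "neural predicates": networks whose sigmoid outputs are read as
# the probabilities of the probabilistic facts a(x) and b(x).
holds_a = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                              torch.nn.Linear(8, 1), torch.nn.Sigmoid())
holds_b = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                              torch.nn.Linear(8, 1), torch.nn.Sigmoid())

def goal_prob(x):
    # Logical knowledge layer for the program  "goal :- a."  "goal :- b.":
    # treating a and b as independent probabilistic facts gives
    # P(goal) = P(a or b) = 1 - (1 - pa) * (1 - pb),
    # a differentiable quantity that glues the logic to the networks.
    pa, pb = holds_a(x), holds_b(x)
    return 1 - (1 - pa) * (1 - pb)

# Supervision only at the level of the logical goal: gradients flow
# through the probabilistic-logic layer into both neural predicates.
x = torch.randn(16, 4)
y = torch.randint(0, 2, (16, 1)).float()
loss = torch.nn.functional.binary_cross_entropy(goal_prob(x), y)
loss.backward()
```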
Bio: Prof. Dr. Luc De Raedt is Director of Leuven.AI, the KU Leuven Institute for AI, full professor of Computer Science at KU Leuven, and guest professor at Örebro University (Sweden), at the Center for Applied Autonomous Sensor Systems within the Wallenberg AI, Autonomous Systems and Software Program. He works on the integration of machine learning and machine reasoning techniques, also known as neurosymbolic AI. He has chaired the main European and international machine learning and artificial intelligence conferences (IJCAI, ECAI, ICML and ECMLPKDD), is a fellow of EurAI, AAAI and ELLIS, and is a member of the Royal Flemish Academy of Belgium. He received ERC Advanced Grants in 2015 and 2023.

Aditya Grover
🏆 IJCAI-25 Computers and Thought Award
Generative AI for Scientific Superintelligence: A Bittersweet Perspective
Abstract: The remarkable progress of today’s generative AI—from winning math olympiads to writing complex software and award-winning literary works—offers an encouraging glimpse towards powerful general-purpose AI systems. Yet, the promise of next-generation superintelligent AI systems lies in extending the frontiers of human knowledge rather than mimicking it, catalyzing transformative scientific breakthroughs across disciplines. To elevate generative AI into an efficient engine for science, this talk will explore three critical lines of my ongoing research. We will explore how to create AI systems that can seamlessly learn from all forms of data without any scale and modality barriers. Next, we'll address the crucial challenge of efficiency, ensuring these generative models can operate at a speed and cost that makes large-scale inference practical and sustainable. Finally, we will discuss how to enable these systems to generalize beyond human supervision, continuously advancing their knowledge through active interaction and experimentation within digital and physical environments. Together, these research pillars provide a tangible pathway toward scientific superintelligence.
Bio: Aditya Grover is an assistant professor of computer science at UCLA and co-founder of Inception. His research interests are at the intersection of generative modeling and reinforcement learning, and grounded in applications for accelerating science. Aditya’s research has been recognized with a best paper award (NeurIPS), the Forbes 30 Under 30 List, the AI Researcher of the Year Award by Samsung, the GOLD Award for Distinguished Young Alumni at IIT Delhi, the Kavli Fellowship by the US National Academy of Sciences, and the ACM SIGKDD Doctoral Dissertation Award. Aditya received his postdoctoral training at UC Berkeley, his PhD from Stanford, and his bachelor's degree from IIT Delhi.

Yoshua Bengio
Avoiding catastrophic risks from uncontrolled AI agency
Abstract: AI agentic capabilities are rising exponentially, driven by scientific advances incorporating system 2 cognition into deep networks as well as by the commercial value of automating numerous human tasks. Besides bodily control, this may be the most significant remaining gap on the path to human-level intelligence. Unfortunately, a series of recent scientific observations raise a major red flag: as AIs become better at reasoning and planning, more occurrences of deceptive and self-preservation behaviors are observed. We have not solved the problem of making sure that advanced AIs will follow our instructions, and in some circumstances they are found to cheat, lie, hack computers and try to escape our control, against their alignment training and instructions. Is it wise to design AIs that will soon be smarter than us across many cognitive abilities and could compete with us and try to avoid our control? We propose a safer path going forward: the design of non-agentic but fully trustworthy AIs modeled after a selfless platonic scientist trying to understand the world rather than trying to imitate or please us. For example, such non-agentic Scientist AIs could be used as monitors that reject potentially dangerous inputs or outputs of untrusted AI agents.
Bio:

Rina Dechter
🏆 IJCAI-25 Award for Research Excellence
Graphical Models Meet Heuristic Search: A Personal Journey into Automated Reasoning
Abstract: A natural intuition in AI is that smart agents should tackle hard problems by building on solutions to easier ones. This idea has inspired what's known as the tractable islands paradigm: focus on parts of a problem that are computationally manageable and use them as stepping stones toward solving the whole. In this talk, I’ll focus on probabilistic reasoning with graphical models and give an overview of algorithms that follow this approach. I’ll introduce the Bucket Elimination, Mini-Bucket Elimination, and AND/OR search frameworks, and explain how they navigate the tradeoff between time and memory. I’ll then show how heuristics grounded in tractable islands can guide both heuristic search and Monte Carlo sampling, leading to anytime algorithms: solvers that provide increasingly accurate approximations over time, with guaranteed bounds, and converge to exact solutions if given enough time.
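For readers unfamiliar with Bucket Elimination, the minimal sketch below computes the partition function of a toy chain model by repeatedly multiplying the factors in a variable's bucket and summing that variable out. The model, domains, factor tables, and elimination order are illustrative assumptions, not code from the talk.

```python
from itertools import product

def eliminate(factors, var, domains):
    # Multiply all factors that mention `var`, then sum `var` out,
    # producing one new factor over the remaining bucket variables.
    bucket = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    scope = sorted({v for sc, _ in bucket for v in sc if v != var})
    table = {}
    for assign in product(*(domains[v] for v in scope)):
        ctx = dict(zip(scope, assign))
        total = 0.0
        for x in domains[var]:          # sum the eliminated variable out
            ctx[var] = x
            prod_val = 1.0
            for sc, tab in bucket:      # multiply the bucket's factors
                prod_val *= tab[tuple(ctx[v] for v in sc)]
            total += prod_val
        table[assign] = total
    return rest + [(tuple(scope), table)]

# Toy chain A - B - C with binary variables; factors are (scope, table) pairs.
domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
fAB = (("A", "B"), {(a, b): 1.0 + a * b for a in (0, 1) for b in (0, 1)})
fBC = (("B", "C"), {(b, c): 2.0 if b == c else 0.5 for b in (0, 1) for c in (0, 1)})

factors = [fAB, fBC]
for v in ("A", "B", "C"):               # elimination order
    factors = eliminate(factors, v, domains)
# Everything is summed out: the lone remaining factor holds the partition function.
print(factors[0][1][()])
```

The memory cost is driven by the size of the largest intermediate scope, which is exactly the time/memory tradeoff that Mini-Bucket Elimination relaxes by splitting oversized buckets.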
Bio: Rina Dechter is a Distinguished Professor of Computer Science at the University of California, Irvine. She holds a Ph.D. in computer science from UCLA (1985), an M.S. in applied mathematics from the Weizmann Institute (1975), and a B.S. in mathematics and statistics from the Hebrew University of Jerusalem (1973). Dechter's research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing, and probabilistic reasoning. She is the author of Constraint Processing, published by Morgan Kaufmann (2003), and of Reasoning with Probabilistic and Deterministic Graphical Models: Exact Algorithms, published by Morgan & Claypool Publishers (2013; second ed. 2019). She co-edited (with Hector Geffner and Joe Halpern) the ACM book Probabilistic and Causal Inference: The Works of Judea Pearl (2022). She has authored and co-authored close to 200 research papers. Dechter received the Presidential Young Investigator Award in 1991, is a Fellow of AAAI (1994) and of ACM (2013), and was a Radcliffe Fellow during 2005-2006. She received the Association for Constraint Programming (ACP) Research Excellence Award (2007). She is a Fellow of the American Association for the Advancement of Science (AAAS, 2022), was elected a member of the American Academy of Arts and Sciences in 2025, and is the winner of the IJCAI Award for Research Excellence in 2025. She served as a co-Editor-in-Chief of Artificial Intelligence from 2011 to 2018, served on the editorial boards of several AI journals (AIJ, JAIR, JMLR), and was program chair or co-chair of several AI conferences (CP-2000, AAAI-2002, UAI-2006). She was the conference chair of IJCAI-2022.

Toby Walsh
Road blocks towards AGI: computational creativity
Abstract: The road to AGI is no straight highway—it’s riddled with road blocks. Two hundred years ago, a century before the first computer, Ada Lovelace suggested that creativity might be one such road block since the computer “can only do what we tell it to do”. Based on my own and many other people’s work, I’ll explore progress towards computational creativity and make some predictions about the future. The talk is dedicated to the memory of another great female pioneer in this area, the late Maggie Boden (1936-2025).
Bio: Toby Walsh is Scientia Professor of Artificial Intelligence at the University of New South Wales. He has a B.A. from the University of Cambridge and an M.Sc. and a Ph.D. from the University of Edinburgh. He has been elected a Fellow of the Australian Academy of Science, of the ACM, and of the Association for the Advancement of Artificial Intelligence. He was named on the international "Who's Who in AI" list of influencers, and his Twitter account was voted among the top ten to follow to keep abreast of developments in AI. He has won a Eureka Prize, the Humboldt Prize, and the NSW Premier's Prize for Excellence in Engineering and ICT.

Harry Shum
Exploring the Low Altitude Airspace: From Natural Resource to Economic Engine
Abstract: The low altitude airspace, generally defined as the region below 1000 meters above ground level, remains a frontier ripe for exploration and economic exploitation. With advancing technology, this domain is poised to become a crucible for diverse economic activities, transmuting a mere natural resource into a potent economic asset. This presentation offers a comprehensive overview of the burgeoning low altitude economy (LAE), bolstered by first-hand insights into the infrastructure developments enabling LAE's realization. Specifically, I will delve into the research and development towards constructing a smart integrated infrastructure for the LAE. At the core of this infrastructure lies the Smart Integrated Low Altitude System (SILAS), an operating system designed to address the multifaceted needs of operations, regulations, and end-users. Similar to conventional operating systems such as Windows, SILAS orchestrates resource management, activity coordination, and user administration within the low altitude airspace. This comprehensive management spans from the registration and operation of drones to the establishment of landing posts and the seamless orchestration of communication channels, ensuring all airborne activities are scheduled efficiently in both space and time. SILAS is engineered to perform real-time spatiotemporal flow computing for numerous flying objects, a critical capability to ensure safety within the low altitude airspace. This advanced system must adeptly manage intricate and high-frequency flying activities, from observation to proactive guidance, overcoming numerous technological hurdles. Designed to handle one million daily flights in a major city, with a peak online presence of one hundred thousand, SILAS sets a new benchmark for airspace management. In comparison, contemporary metropolitan airports currently manage only a few thousand commercial flights daily. The volume and complexity of future flights in the low altitude airspace surpass the capabilities of traditional airspace management systems employed in commercial airports, underscoring the necessity of SILAS.
Bio: Harry Shum is the Council Chairman of the Hong Kong University of Science and Technology. He was previously Executive Vice President of Microsoft Corporation, responsible for AI and Research. He received his PhD in Robotics from the School of Computer Science, Carnegie Mellon University. He is a Fellow of IEEE and ACM, and an international member of the National Academy of Engineering and the Royal Academy of Engineering.

Yew Soon Ong
Physically Grounded AI for Scientific Discovery: From Prediction to Generative Design
Abstract: This talk presents the role of AI in science and engineering, from learning for prediction to optimization for precision, and onward to generative models that potentially reshape solution spaces. It highlights the possible shift from purely data-driven abstraction to physically grounded intelligence, where AI systems are increasingly aligned with the laws of nature. Advances in generalizable physics-informed neural networks, guided diffusion model generation, and prompt evolution are empowering AI to simulate, predict, plan, and design within real-world constraints. As language models become engines of scientific exploration, navigating complex design spaces and abstract knowledge landscapes, they converge with physics-based signals and evolutionary principles such as multifactorial optimization to achieve high-fidelity modeling and creative physical design. The talk ends with a vision of AI not merely as a tool but as a co-explorer, fusing data, theory, and imagination to uncover latent structures, accelerate discovery, and deliver real-world impact.
Bio: Fellow of IEEE and the National Academy of Engineering, Singapore, Professor Yew-Soon Ong received his Ph.D. in Artificial Intelligence for Complex Design from the University of Southampton, UK, in 2003. He is currently a President’s Chair Professor in Computer Science at Nanyang Technological University (NTU), Singapore, and the Chief Artificial Intelligence Scientist at the Agency for Science, Technology and Research (A*STAR), Singapore. Professor Ong previously served as Chair of the School of Computer Science and Engineering at NTU. His research interests span artificial intelligence, statistical machine learning, and optimization. He has held key leadership roles in international AI conferences, including serving as General Co-Chair of the 2024 IEEE Conference on Artificial Intelligence, and has delivered invited keynote speeches and participated in high-level panels at AI events. He is the founding Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence, and serves as Senior Associate Editor or Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Evolutionary Computation, and IEEE Transactions on Artificial Intelligence. Additionally, he contributes as an Area Chair for several top-tier AI conferences. Professor Ong has received five IEEE Outstanding Paper Awards and was named a Thomson Reuters Highly Cited Researcher and one of the World’s Most Influential Scientific Minds in 2016. He chairs the 2024 and 2025 IEEE Computational Intelligence Society Fellow Evaluation Committee.

Shing-Tung Yau
Advancing Artificial Intelligence through Modern Mathematical Theories
Abstract: The evolution of artificial intelligence has been marked by remarkable empirical successes, yet its mathematical foundation remains insufficiently explored. This talk outlines how advanced mathematical theories can provide a rigorous, unified framework for understanding and advancing artificial intelligence. Key mathematical tools—ranging from conformal geometry and topology to nonlinear partial differential equations—offer new perspectives on neural network architecture, data-efficient learning, model interpretability, and decision-making under uncertainty. These theories not only deepen the understanding of existing artificial intelligence systems but also inspire the creation of new algorithms rooted in geometric and analytical principles. The integration of modern mathematics into artificial intelligence holds great promise for scientific discovery, robust system design, and the development of intelligent systems grounded in both theory and applications.
Bio: Professor Shing-Tung Yau, born in 1949 in Shantou, Guangdong Province, China, is a Chinese and naturalized American mathematician. He is a member of the NAS and the AAAS, and a foreign academician of the CAS. In 1966 he was admitted to the Department of Mathematics of the Chinese University of Hong Kong, and in 1969 he was recommended to study at the University of California, Berkeley, where he completed his PhD two years later, in 1971 (at the age of 22), under the supervision of Prof. Shiing-Shen Chern. Professor Yau currently serves as a chair professor at Tsinghua University, where he is Director of the Yau Mathematical Sciences Center and Dean of Qiuzhen College; he is also Director of the Beijing Institute of Mathematical Sciences and Applications at Yanqi Lake. In addition, he is Distinguished Visiting Professor-at-Large of The Chinese University of Hong Kong and Director of its Institute of Mathematical Science. He was a Professor of Mathematics at Harvard University from 1987 until 2022, when he became professor emeritus. Professor Yau has made extremely significant contributions to differential geometry. He was the first person to combine differential geometry and analysis, and used their interaction to solve longstanding problems in both subjects. Yau's work opened up new directions, set foundations, and changed people's perspectives on mathematics and its applications in physics and computer science. In 1976, his proof of the Calabi conjecture solved multiple well-known open problems in algebraic geometry and also allowed physicists to show that string theory is a viable candidate for a unified theory of nature; Calabi–Yau manifolds are among the ‘standard toolkit’ for string theorists today. In 1979, Yau and Richard Schoen solved the Positive Mass Conjecture in General Relativity. Thereafter, Yau continued to make a number of achievements in geometry, topology, and theoretical physics. Professor Yau was awarded the Fields Medal (1982), the highest honor in mathematics, the MacArthur Fellowship (1985), the Crafoord Prize (1994), the Wolf Prize in Mathematics (2010), the Marcel Grossmann Award (2018), and the Shaw Prize in Mathematical Sciences (2023), becoming the only mathematician to receive all six of the world's top scientific awards. He and his teacher, Professor Shiing-Shen Chern, as two of the most outstanding and influential contemporary mathematicians, have actively supported the education and research of Chinese mathematics and contributed greatly to its advancement.

Xin Yao
When Evolutionary Computation Meets Trustworthy Artificial Intelligence
Abstract: Trustworthiness is a critical issue in artificial intelligence (AI), especially for real-world applications: AI cannot be applied in the real world unless it is trustworthy. However, the exact meaning and scope of trustworthiness are not entirely clear to the scientific community, and no single definition is accepted by all researchers. Nevertheless, the vast majority of researchers agree that AI trustworthiness should include at least accuracy, reliability, robustness, safety, security, privacy, fairness, transparency, controllability, and maintainability. Firstly, this talk very briefly reviews AI ethics, which is closely related to AI trustworthiness. Secondly, the talk examines the fairness and explainability issues of machine learning models. It is argued that many aspects of trustworthiness, such as fairness and explainability, are inherently multi-dimensional: there are many dimensions to properties like fairness and explainability, and they cannot be captured by any single numerical measure. Multi-objective thinking is needed. This talk advocates multi-objective evolutionary learning as an approach to enhancing AI trustworthiness, using fairness and explainability as two examples to demonstrate how it can improve the fairness and explainability of learned models. Thirdly, the talk discusses more fundamental issues in the current research on AI explainability. Finally, the talk ends with some concluding remarks.
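As a toy illustration of the multi-objective view, the sketch below scores a population of candidate linear models on two objectives, error and a demographic-parity gap, and keeps the Pareto-nondominated ones, as a multi-objective evolutionary learner would in each generation. The models, data, and fairness measure are illustrative assumptions, not the speaker's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
group = rng.random(300) < 0.5                    # protected attribute
y = (X[:, 0] + 0.5 * group > 0).astype(int)

def objectives(w):
    # Return (error, fairness gap) for a linear scorer w.
    pred = (X @ w > 0).astype(int)
    error = float(np.mean(pred != y))
    gap = abs(pred[group].mean() - pred[~group].mean())  # demographic parity
    return error, gap

population = [rng.normal(size=2) for _ in range(64)]
scores = [objectives(w) for w in population]

def dominated(i):
    # i is dominated if some j is no worse on both objectives and
    # strictly better on at least one.
    return any(all(sj <= si for sj, si in zip(scores[j], scores[i]))
               and scores[j] != scores[i]
               for j in range(len(scores)))

pareto = [population[i] for i in range(len(population)) if not dominated(i)]
print(f"{len(pareto)} nondominated models out of {len(population)}")
```

Presenting the whole Pareto front, rather than a single scalarized winner, is what lets a stakeholder choose the accuracy/fairness tradeoff explicitly.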
Bio: Xin Yao is the Vice President (Research and Innovation) and Tong Tin Sun Chair Professor of Machine Learning at Lingnan University, Hong Kong SAR, China. He is an IEEE Fellow and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). He served as the President (2014-15) of IEEE CIS and the Editor-in-Chief (2003-08) of IEEE Transactions on Evolutionary Computation. His major research interests include evolutionary computation, neural network ensembles, and multi-objective learning. Recently, he has been working on trustworthy AI, especially on fair machine learning and explainable AI. His work won the 2001 IEEE Donald G. Fink Prize Paper Award; the 2010, 2016 and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards; the 2011 IEEE Transactions on Neural Networks Outstanding Paper Award; the 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist); and many other best paper awards at conferences. He received the 2012 Royal Society Wolfson Research Merit Award, the 2013 IEEE CIS Evolutionary Computation Pioneer Award, and the 2020 IEEE Frank Rosenblatt Award.
