Keynote Speakers

Exciting Keynotes at KI2022

Manuela Veloso

Managing Director and Head of J.P. Morgan Chase AI Research

Title: AI in Robotics and AI in Finance: Challenges, Contributions, and Discussion

Abstract: My talk will follow up on my many years of research in AI and robotics and my more recent years of research in AI in finance. I will present challenges and solutions in the two areas, spanning data processing, reasoning (including planning and learning), and execution. I will conclude with a discussion of the path towards lasting, seamless human-AI interaction.

Bio: Manuela Veloso is Head of J.P. Morgan Chase AI Research and Herbert A. Simon University Professor Emerita at Carnegie Mellon University, where she was previously faculty in the Computer Science Department and Head of the Machine Learning Department. Her research is in Artificial Intelligence (AI), with a focus on autonomous robots and, more recently, on AI in finance. She is a past president of the Association for the Advancement of Artificial Intelligence (AAAI) and a co-founder and past president of the RoboCup Federation. Veloso is a Fellow of AAAI, AAAS, ACM, and IEEE, and a member of the National Academy of Engineering.

Eyke Hüllermeier

Institut für Informatik, Ludwig-Maximilians-Universität München

Title: Representation and Quantification of Uncertainty in Machine Learning

Abstract: Due to the steadily growing relevance of machine learning for practical applications, many of which come with safety requirements, the notion of uncertainty has received increasing attention in machine learning research in the recent past. This talk will address questions regarding the representation and adequate handling of (predictive) uncertainty in (supervised) machine learning. A specific focus will be put on the distinction between two important types of uncertainty, often referred to as aleatoric and epistemic, and on how to quantify them in terms of suitable numerical measures. Roughly speaking, aleatoric uncertainty is due to randomness inherent in the data-generating process, while epistemic uncertainty is caused by the learner’s ignorance of the true underlying model. Going beyond purely conceptual considerations, the use of ensemble learning methods will be discussed as a concrete approach to uncertainty quantification in machine learning.
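One common ensemble-based formulation of this decomposition can be sketched as follows (an illustrative example only, not necessarily the exact measures the talk will cover): the total predictive uncertainty is the entropy of the averaged ensemble prediction, the aleatoric part is the average entropy of the individual members' predictions, and the epistemic part is their difference.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats; probabilities are clipped to avoid log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

def uncertainty_decomposition(member_probs):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    member_probs: array of shape (M, C) with the class probabilities
    predicted by each of M ensemble members for a single input.
    """
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)               # entropy of the ensemble mean
    aleatoric = entropy(member_probs).mean()  # mean per-member entropy
    epistemic = total - aleatoric             # non-negative "disagreement" term
    return total, aleatoric, epistemic

# Members agree -> the epistemic component vanishes.
agree = np.array([[0.7, 0.3], [0.7, 0.3], [0.7, 0.3]])
# Members disagree -> a large epistemic component.
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.5, 0.5]])
```

Intuitively, ensemble members that all predict the same (non-degenerate) distribution signal purely aleatoric uncertainty, whereas confident but conflicting members signal epistemic uncertainty that more data or a better model could reduce.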

Bio: Eyke Hüllermeier is a full professor at the Institute of Informatics at LMU Munich, Germany, where he heads the Chair of Artificial Intelligence and Machine Learning. He studied mathematics and business computing, received his PhD in computer science from Paderborn University in 1997, and obtained a Habilitation degree in 2002. Prior to joining LMU, he held professorships at several other German universities and spent two years as a Marie Curie fellow at IRIT in Toulouse, France. Currently, he is also a Chief Scientist at the Fraunhofer Institute for Mechatronic Systems Design. His research interests are centered around the methods and theoretical foundations of artificial intelligence, with a specific focus on machine learning and reasoning under uncertainty. In addition, he is interested in the application of AI methods in other disciplines, ranging from the natural sciences and engineering to the humanities and social sciences. He has published more than 400 articles on related topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards.

Ahmad-Reza Sadeghi

Head of System Security Lab, Technische Universität Darmstadt & Plattform Lernende Systeme

Title: Federated Learning: Promises, Opportunities and Security Challenges

Abstract: Federated Learning (FL) is a collaborative machine learning approach that allows the participants to jointly train a model without having to mutually share their private, potentially sensitive local datasets. As an enabling technology, FL can benefit a variety of sensitive distributed applications in practice.

However, despite its benefits, FL has been shown to be susceptible to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model-aggregation process so that the resulting model produces targeted predictions for specific, adversary-chosen inputs.
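The aggregation step such attacks target can be illustrated with a toy sketch of unweighted federated averaging (a hypothetical minimal example, not the speaker's actual systems): a single poisoned update, scaled by the adversary, can pull the aggregated model far away from what the honest clients computed.

```python
import numpy as np

def fed_avg(updates, weights=None):
    """Plain federated averaging: the server averages the client updates."""
    updates = np.stack(updates)
    if weights is None:
        weights = np.full(len(updates), 1.0 / len(updates))
    return np.tensordot(weights, updates, axes=1)

# Honest clients submit similar, benign model updates...
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
# ...while an attacker scales a poisoned update to dominate the average.
poisoned = np.array([-10.0, -10.0])

clean = fed_avg(honest)               # stays close to the honest consensus
attacked = fed_avg(honest + [poisoned])  # dragged toward the attacker's target
```

Because the server never sees the raw data, it must detect and down-weight such outlier updates from the updates alone, which is what makes robust aggregation in FL a hard problem.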

In this talk, we present our research and experience, including with industrial partners, in using FL to enhance the security of large-scale systems and applications, as well as in building FL systems that are resilient to backdoor attacks.

Bio: Ahmad-Reza Sadeghi is a professor of Computer Science and the head of the System Security Lab at Technical University of Darmstadt, Germany. He has been leading several Collaborative Research Labs with Intel since 2012, and with Huawei since 2019. He studied both Mechanical and Electrical Engineering and holds a Ph.D. in Computer Science from Saarland University, Germany. Prior to academia, he worked in R&D at IT enterprises, including Ericsson Telecommunications. He has contributed continuously to the security and privacy research field. He was Editor-in-Chief of IEEE Security and Privacy Magazine and currently serves on the editorial boards of ACM TODAES, ACM TIOT, and ACM DTRAP. For his influential research on trusted and trustworthy computing he received the renowned German "Karl Heinz Beckurts" award, which honors excellent scientific achievements with high impact on industrial innovation in Germany. In 2018, he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community, and for pioneering contributions to content protection, mobile security, and hardware-assisted security. In 2021, he was honored with the Intel Academic Leadership Award at the USENIX Security conference for his influential research on cybersecurity, in particular on hardware-assisted security. Since 2021, he has been a member of the working group IT Security, Privacy, Law and Ethics at Plattform Lernende Systeme, the German expert platform for Artificial Intelligence.

Sven Körner

thingsTHINKING GmbH, Karlsruhe

Title: The First Rule of AI: Hard Things are Easy, Easy Things are Hard

Abstract: Artificial intelligence is not only relevant for large high-tech corporations; it can be a game changer for companies of all sizes. Nevertheless, smaller companies in particular do not use artificial intelligence in their value chains, and effective use tends to be rare, especially in the midmarket. Why is that? Often there is a lack of know-how about how, and in which processes, the technology can be used at all. In my talk, I will discuss how academia and industry can grow together, need each other, and should cooperate in making AI the pervasive technology that it already is.

Bio: Sven is an award-winning cognitive/semantic computing researcher and, according to the media, a top-20 global expert in AI technologies with a focus on natural language processing and semantics. He is also a cloud expert and an early adopter of new technologies. In his studies, he collaborates with universities and research facilities worldwide, from Canada to Australia. Sven is also an outgoing technology evangelist with speaker slots at the world’s largest conferences (Dreamforce, CeBIT, IEEE, ACM) and likes to get hands-on with customers and peers alike. In his corporate career, Sven worked with the leading worldwide cloud vendors, such as Salesforce, Amazon, Microsoft, and Google. He was the technical contact and evangelist for global business partners such as Accenture, Deloitte, and Capgemini, as well as for large customers and communities. Sven is a co-founder and the thought leader of thingsTHINKING, where he and his team apply his research results to real-world problems. They built a machine that has common sense and can therefore take over “human tasks” in different domains.

Bruce Edmonds

Centre for Policy Modelling, Manchester Metropolitan University

Title: Prospects for Using Context to Integrate Reasoning and Learning

Abstract: Whilst the AI and ML communities are no longer completely separate (as they were for three decades), principled ways of integrating them are still not common. I suggest that a kind of context-dependent cognition, inspired by human cognitive abilities, could play this role. This approach is sketched, after briefly making clear what I mean by context. This move would also make reasoning and belief revision more feasible, and provide principled strategies for dealing with cases of over- or under-determined conclusions.

Bio: TBD
