List of Workshops:
- W1: 8th Workshop on Formal and Cognitive Reasoning (FCR-2022)
- W2: Deduktionstreffen (DT-2022)
- W3: Robust AI for High-Stakes Applications (RAIHS-2022)
- W4: AI utilization to increase resilience in society and economy (RSE-2022)
- W5: Explainable and Interpretable Machine Learning (XI-ML-2022)
- W6: Text Mining and Generation (TMG-2022)
- W7: 2nd Workshop on Humanities-Centred AI (CHAI-2022)
- W8: AI and Cyber-Physical Process Systems Workshop 2022 (AI-CPPS)
- W9: 36th Workshop on (Constraint) Logic Programming (WLP-2022)
- W10: Workshop on AI & Digital Twins for Smart Cities (WAISC-2022)
- W11: Generating synthetic image data for AI (GSID-AI-2022)
- W12: 33rd Workshop Planning, Scheduling, Design and Configuration (PuK-2022)
Abstract: Information for real-life AI applications is usually pervaded by uncertainty and subject to change, and thus demands non-classical reasoning approaches. At the same time, psychological findings indicate that human reasoning cannot be completely described by classical logical systems. Possible explanations include incomplete knowledge, incorrect beliefs, and inconsistencies. A wide range of reasoning mechanisms has to be considered, such as analogical or defeasible reasoning, possibly in combination with machine learning methods. The field of knowledge representation and reasoning offers a rich palette of methods for uncertain reasoning, both to describe human reasoning and to model AI approaches.
Sébastien Konieczny (CNRS - CRIL)
Improvements of iterated belief revision (tentative title)
Abstract: Automated Reasoning is a core field of artificial intelligence research with a focus on the development of models, procedures and software for the computer-assisted automation of logical reasoning. Automated deduction specializes in the problem of correctly and efficiently deriving deductive conclusions from given assumptions, usually formulated in a formal logical language. Applications include, but are not limited to, automated theorem proving, software verification and synthesis, term rewrite systems, unification theory, planning and logic programming.
The annual Deduktionstreffen (German; roughly "deduction meeting") is the prime activity of the Special Interest Group on Deduction Systems (Fachgruppe Deduktionssysteme, https://fg-dedsys.gi.de/) of the AI Chapter of the German Society of Informatics (Fachbereich KI der Gesellschaft für Informatik). The Deduktionstreffen is a meeting with an informal and friendly atmosphere, where everyone (not only the German community) interested in automated reasoning, deduction systems and related topics can report on their work in an accessible setting.
A special focus of the workshop is on young researchers and students, who are particularly encouraged to present their ongoing research projects to a wider audience, and to receive constructive feedback from more experienced participants. Another goal of the meeting is to stimulate networking effects and to foster collaborative research projects.
Abstract: Robustness refers to the capability of coping with unforeseen phenomena or situations. Gearing AI towards robustness has always been an aim for open-world AI, and it becomes a pressing requirement as AI makes its way into the control of high-stakes applications. AI is already applied in real-world settings such as autonomous mobile systems (self-driving cars, autonomous drones, service robots, etc.), automated surgical assistants, electrical grid management systems, and control of critical infrastructure, to name a few. However, for such an integration to constitute a beneficial socio-technical system, safety and reliability are key, and robustness is essential to avert potential catastrophic events due to unconsidered phenomena or situations. Robustness is addressed in many sub-fields of AI using various working definitions and measures. This workshop aims to bring together researchers from all sub-fields of AI working on robust methods, ranging from machine learning to logical reasoning, and especially welcomes contributions on the interplay between these sub-fields.
Abstract: Current events have clearly exposed the fragility of society and the economy. Once again, the importance of resilient structures and countermeasures has been demonstrated. Artificial Intelligence (AI) can be an important tool to enable and improve resilience, for example with respect to the healthcare system or supply chains of critical products. Within this workshop we focus on approaches that aim to increase resilience in society and the economy with the help of AI technologies.
Abstract: The XI-ML workshop on explainable and interpretable machine learning tackles the general theme of explainable AI (XAI): algorithmic transparency, interpretability, accountability and, ultimately, explainability of algorithmic models and decisions, specifically from the modeling and learning perspective, i.e., targeting interpretable methods and models that are able to explain themselves and their output, respectively. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, trends, and challenges in this area.
Abstract: Digital text data is available in large amounts and at different granularities. Typical sources include social media posts, books, news articles, web pages, and company reports. A major challenge posed by this text data is that it is unstructured and must first be processed to make further analysis possible. At the same time, there are also many situations in which only structured data is available that is to be verbally explained—for instance, by Explainable AI. These contrasting scenarios lead to two complementary application areas: text mining and text generation. The aim of text mining is to analyze the content of unstructured text and extract (useful) structured information. In contrast, text generation attempts to (automatically) create text from structured information or knowledge that is, for example, stored in large language models. The goal of the TMG workshop is to bring these two perspectives together by eliciting research paper submissions that aim to bridge the gap between knowledge extraction and text generation. Since recent approaches to text mining and text generation are predominantly based on artificial intelligence (AI) methodologies, KI 2022 is a relevant venue to bring together AI researchers working on these two tasks. We welcome any submissions that deal with transforming the representation of data using techniques of natural language processing (NLP): (applied) research papers, theoretical papers, user studies, or prospective papers.
Abstract: In tasks such as inferring ancient cultural traditions from written artefacts, AI offers many opportunities to assist humanities scholars in their work. Editorial projects and computer-aided evaluations, such as text and data mining or linguistic analyses, require the collecting, storing, and linking of data in order to quickly identify core information of the written artefacts under investigation. Time-consuming procedures like the creation of dictionaries or the use of bibliographies can be facilitated, abridged and designed more efficiently through the automatic linking of data, which makes it possible to create extensive data sets and to generate additional information. In this way, AI supports scholars with time-saving methods for their research, leaving more room for core tasks and questions. To ensure that the use of AI methods in the humanities does not remain merely abstract and theoretical, the applicability of algorithms in the respective research needs to be specifically examined and intentionally developed with a clear focus on humanities research.
Abstract: The workshop focuses on the topic of AI and cyber-physical process systems. It is organized by the research training group (Forschungskolleg) “AI-based Self-Adaptive Cyber-Physical Process Systems”, whose main goal is to extend classical cyber-physical systems in such a way that they enable an adaptive integration of complex processes. Due to the large heterogeneity of IoT devices, the diversity of processes and the environmental factors acting on them, as well as the dynamic usage and interaction contexts, high complexity and dynamics arise. The systems must therefore be designed to be self-learning and adaptive, so that the data generated during their use can be continuously exploited to improve the systems themselves (processes, topologies, resource requirements, etc.). This is enabled by AI and machine learning methods, but also considers the interaction between actors ("human-in-the-loop"), systems and processes.
Typical application scenarios of such AI-based cyber-physical process systems (AI-CPPS) can be found in knowledge- and planning-intensive work processes from areas such as logistics, robotics, resilient supply chains, production, service or agriculture. Heterogeneous environments, such as in robotics or intelligent mobility, require precisely fitting processes that optimize themselves based on the available resources.
The aim of the workshop is to discuss the current topics of the research training group and to establish an interdisciplinary exchange with the AI community.
Abstract: Declarative approaches – especially in combination with other AI technologies and disruptive non-AI technologies – have an increasing relevance for digitalization projects in many sectors. The Workshop on (Constraint) Logic Programming provides a forum for exchanging ideas on declarative logic programming, non-monotonic reasoning, and knowledge representation. Thus it facilitates interactions between research in theoretical foundations and in the design and implementation of logic-based programming systems. In addition, the WLP serves as the scientific forum and the annual meeting of the Society of Logic Programming (GLP e.V.), and brings together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence, and operations research.
Abstract: With the rising importance of the concept of smart cities as a collective term for the use of information technology in urban contexts, digital twins for cities are a promising topic whose potential has not been fully explored yet. Research and concepts related to both are influenced by different spheres of artificial intelligence: acquisition and evaluation of large quantities of data, semantic technologies, agent-based modeling/distributed simulation, but also the human-computer interaction challenges that arise from working with digital twins. As such, both topics are deeply nested in the context of AI. This involves many sophisticated and interdisciplinary research questions that go far beyond application scenarios in the Industry 4.0 environment, where the concept of digital twins emerged in the area of manufacturing processes. For instance, there are open questions of representation per se (what constitutes a smart city in the first place?), a variety of fields of application (mobility, health, education, energy supply, etc.), their architecture and implementation as distributed artificial intelligence systems, as well as aspects relating to the integration and protection of data and other related ethical questions. Further, making such systems accessible to users through visualization and interaction is another topic of interest. In this workshop, we aim to examine, from these different angles and technologies, how the smart city concept can benefit from AI and digital twins.
Abstract: AI offers attractive solutions to challenging image processing tasks by replacing tailor-made algorithmic solutions with the training of a suitable convolutional neural network. Such solutions can have higher generalization capabilities but must be carefully designed to avoid bias. However, their reliability depends on the quality and availability of a large amount of annotated, representative image data. The ground truth is often annotated manually, which takes time and has inconsistent precision. Machine vision tackles a variety of problems where the reliability of the solution is of high importance but the training data is not readily available, due to rare subject occurrence (e.g., safety-critical defects) or unreliable data annotation (e.g., segmentation of complex structures such as one-pixel-thick cracks in 3D images). Synthetic image generation offers an elegant solution to the dataset problem. Using computer graphics in combination with mathematical modelling, a wide variety of images and scenes can be generated—from material microstructure patterns and surface textures to large-scale traffic scenes—all with pixel-precise, automatic ground-truth generation. This workshop brings together researchers from various fields (machine vision, mathematical modelling, computer graphics, physics, …) to explore the questions that arise with simulated data: When is it realistic enough? How is its quality measured? Which characteristics of the synthetic data are decisive for successful training?
Abstract: Planning, Scheduling, Design and Configuration continue to attract interest, not only in the Artificial Intelligence community but also in various application areas. Given the current surge of machine learning across disciplines, we will focus the workshop on “Reinforcement Learning in Planning, Scheduling and Design”. All other topics of our special interest group remain of interest as well, e.g., practical applications of configuration, planning or scheduling systems; AI planning, scheduling and design in education; architectures for planning, scheduling or configuration systems; knowledge representation and problem-solving techniques for planning, scheduling and design.
Abstract: Let's merge domain-specific models with generic learning. With Julia it is easy to combine these different worlds: differential equations on the one hand and (deep) neural networks on the other. The combination is known as Universal Differential Equations, published in the paper “Universal Differential Equations for Scientific Machine Learning” by Rackauckas et al. in November 2021. This new modelling flexibility of scientific machine learning brings immense benefits to all kinds of computational sciences.
We will explain the theory behind Universal Differential Equations (including the subclass of Neural Differential Equations) and why Julia is particularly suited to them, and see the power of such models in practice.
Everything will be interactive and beginner-friendly, with the opportunity to recreate such state-of-the-art models live.
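To illustrate the core idea of a Universal Differential Equation—a known mechanistic term plus a learned neural-network residual on the right-hand side of an ODE—here is a minimal conceptual sketch. It is written in Python rather than Julia, uses untrained random weights in place of a fitted network, and the decay model, network sizes and explicit Euler integrator are illustrative assumptions, not part of the tutorial material:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random-weight MLP standing in for the learned part of the model.
# In a real UDE these weights would be trained against observed data.
W1 = rng.normal(scale=0.1, size=(8, 1))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(1, 8))
b2 = np.zeros(1)

def nn(u):
    """Neural-network correction term NN(u)."""
    h = np.tanh(W1 @ u + b1)
    return W2 @ h + b2

def ude_rhs(u):
    """UDE right-hand side: known physics + learned residual.
    The 'known' part here is simple exponential decay, du/dt = -0.5 * u."""
    return -0.5 * u + nn(u)

def euler_solve(u0, dt=0.01, steps=200):
    """Integrate the UDE with the explicit Euler method."""
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(steps):
        u = u + dt * ude_rhs(u)
        traj.append(u.copy())
    return np.array(traj)

trajectory = euler_solve([1.0])
print(trajectory[-1])  # state after integrating from t=0 to t=2
```

In the Julia ecosystem covered by the tutorial, the hand-rolled MLP and Euler loop would be replaced by a proper network and an adaptive ODE solver, and the network weights would be trained by differentiating through the solver.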