Explainable Logic-Based Knowledge Representation (XLoKR 2020)

Workshop at KR 2020 (17th International Conference on Principles of Knowledge Representation and Reasoning, September 12-18, 2020, Rhodes, Greece)

Organizing Committee



Motivation and Topic

Embedded or cyber-physical systems that interact autonomously with the real world, or with users they are supposed to support, must continuously make decisions based on sensor data, user input, knowledge acquired at runtime, and knowledge provided at design time. To make the behavior of such systems comprehensible, they need to be able to explain their decisions to the user or, after something has gone wrong, to an accident investigator.

While systems that use Machine Learning (ML) to interpret sensor data are very fast and usually quite accurate, their decisions are notoriously hard to explain, though huge efforts are currently being made to overcome this problem. In contrast, decisions made by reasoning about symbolically represented knowledge are in principle easy to explain. For example, if the knowledge is represented in (some fragment of) first-order logic and a decision is made based on the result of a first-order reasoning process, then one can in principle use a formal proof in an appropriate calculus to explain a positive reasoning result, and a counter-model to explain a negative one. In practice, however, things are not so easy in the symbolic KR setting either. For example, proofs and counter-models may be very large, and it may thus be hard to comprehend why they demonstrate a positive or negative reasoning result, in particular for users who are not experts in logic. Thus, to leverage explainability as an advantage of symbolic KR over ML-based approaches, one needs to ensure that explanations can really be given in a way that is comprehensible to different classes of users (from knowledge engineers to laypersons).

The problem of explaining why a consequence does or does not follow from a given set of axioms has been considered in full first-order theorem proving for at least 40 years, though usually with mathematicians as the intended users. In knowledge representation and reasoning, efforts in this direction are more recent and have usually been restricted to sub-areas of KR such as AI planning and description logics. The purpose of this workshop is to bring together researchers from different sub-areas of KR and automated deduction who are working on explainability in their respective fields, with the goal of exchanging experiences and approaches. A non-exhaustive list of areas to be covered by the workshop is the following:

Program committee

Important dates

Paper submission: 22 June 2020 (extended from 08 June 2020)
Paper notification: 21 July 2020 (extended from 13 July 2020)
Workshop dates: 13 September (full day) and 14 September (morning) 2020
Early registration: 7 August 2020

For information regarding the COVID-19 situation and how it affects KR 2020 and the associated workshops, please consult the KR 2020 website for COVID-19-related information about the venue.

Paper submission

Researchers interested in participating in the workshop should submit extended abstracts of 2-5 pages on topics related to explanation in logic-based KR. Papers should be formatted in Springer LNCS style and must be submitted via EasyChair (go to the KR 2020 submission site, select “enter as an author”, and choose the track “WS on Explainable Logic-Based Knowledge Representation”). The workshop will have informal proceedings; therefore, in addition to new work, papers covering results that have recently been published or will be published at other venues are also welcome.

Invited Speaker

Speaker: Tim Miller

Title: Explainable artificial intelligence: beware the inmates running the asylum (or How I learnt to stop worrying and love the social and behavioural sciences)

Abstract: In his seminal book The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves, rather than for their target audience; a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a multi-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and focus evaluation on people instead of just technology. I paint a picture of what I think the future of explainable AI will look like if we went down this path, and give some concrete examples from our recent research in explainable reinforcement learning.

Bio: Tim is an associate professor of computer science in the School of Computing and Information Systems at The University of Melbourne and Co-Director of the Centre for AI and Digital Ethics. His primary area of expertise is artificial intelligence, with a particular emphasis on human-AI interaction and collaboration and Explainable Artificial Intelligence (XAI). His work is at the intersection of artificial intelligence, interaction design, and cognitive science/psychology.

(Slides)

XLoKR Program

The program of XLoKR can be found here.

Recordings of the talks can be found on YouTube.

Accepted papers



sponsored by CPEC