International Workshop on Explainable and Interpretable Machine Learning (XI-ML)
We are pleased to announce the third International Workshop on Explainable and Interpretable Machine Learning (XI-ML)
co-located with ECAI 2023, 30 September 2023, Krakow, Poland.
Submission deadline: July 29, 2023 (23:59 AoE Time).
Submit!
Objectives
Against the backdrop of the current scientific discourse on explainable AI (XAI), algorithmic transparency, interpretability, accountability, and explainability of algorithmic models and decisions, this workshop on explainable and interpretable machine learning tackles these themes from the modeling and learning perspective; it targets interpretable methods and models that can explain themselves and their output, respectively. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, trends, and challenges in this area.
Overall, we are interested in receiving papers on the following topics, which include but are not limited to:
- Rule learning for explainable and interpretable machine learning
- Assessment of interpretable and explainable models
- Cognitive approaches and human concept learning
- Human-centered learning and explanation
- Causality of machine learning models
- Local pattern mining for explanation
- Interpretation of black box models
- Explanation of black box models
- Model-agnostic explanation
- Case-based explanation
- Evaluation metrics
- Empirical research on explainability
- Regulations and legal aspects of XAI
- Applications of all of the above
Program
- 9:00 - 10:30 Session 1: Tomas Kliegr
- 9:00 - 9:10 Introduction
- 9:10 - 9:30 Julia Herbinger, Susanne Dandl, Fiona Katharina Ewald, Sofia Maria Loibl and Giuseppe Casalicchio: Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation
- 9:30 - 9:50 Tanmay Chakraborty, Christian Wirth and Christin Seifert: Post-hoc Rule Based Explanations for Black Box Bayesian Optimization
- 9:50 - 10:10 Dan Hudson and Martin Atzmueller: Subgroup Discovery with SD4Py
- 10:10 - 10:30 Avi Rosenfeld: Optimizing Decision Trees for Enhanced Human Comprehension
- 11:00 - 12:30 Session 2: Kirill Bykov
- 11:00 - 11:20 Emanuel Slany, Stephan Scheele and Ute Schmid: Bayesian CAIPI: A Probabilistic Approach to Explanatory and Interactive Machine Learning
- 11:20 - 11:40 Munkhtulga Battogtokh, Michael Luck, Cosmin Davidescu and Rita Borgo: Simple Framework for Interpretable Fine-grained Text Classification
- 11:40 - 12:00 Francisco N. F. Q. Simoes, Thijs van Ommen and Mehdi Dastani: Causal Entropy and Information Gain for Measuring Causal Control
- 12:00 - 12:20 Nghia Duong-Trung, Duc-Manh Nguyen and Danh Le-Phuoc: Temporal Saliency Detection Towards Explainable Transformer-based Timeseries Forecasting
- 13:30 - 15:00 Session 3: Dan Hudson
- 13:30 - 13:50 Stefanie Krause and Frieder Stolzenburg: Commonsense Reasoning and Explainable Artificial Intelligence Using Large Language Models
- 13:50 - 14:10 Foivos Charalampakos and Iordanis Koutsopoulos: Exploring Multi-Task Learning for Explainability
- 14:10 - 14:30 Ondřej Vadinský and Petr Zeman: Towards Evaluating Policy Optimisation Agents using Algorithmic Intelligence Quotient Test
- 14:30 - 14:50 Federico Sabbatini and Roberta Calegari: Achieving Complete Coverage with Hypercube-Based Symbolic Knowledge-Extraction Techniques
- 15:30 - 17:00 Session 4: Ute Schmid
- 15:30 - 15:50 Kirill Bykov, Klaus-Robert Müller and Marina Höhne: Mark My Words: Dangers of Watermarked Images in ImageNet
- 15:50 - 16:10 Fatemeh Nazary, Yashar Deldjoo and Tommaso Di Noia: ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based Healthcare Decision Support using ChatGPT
- 16:10 - 16:30 Eric Loff, Sören Schleibaum, Jörg P. Müller and Benjamin Säfken: Explaining Taxi Demand Prediction Models based on Feature Importance
- 16:30 - 16:50 Meike Nauta, Johannes H. Hegeman, Jeroen Geerdink, Jörg Schlötterer, Maurice van Keulen and Christin Seifert: Interpreting and Correcting Medical Image Classification with PIP-Net
- 16:50 - 17:00 Discussion
Submission
We solicit full papers (up to 15 pages, excluding references) as well as short papers (up to 7 pages, excluding references). The Springer LNCS LaTeX template should be used for all workshop submissions.
All submitted papers must
- be written in English;
- contain author names, affiliations, and email addresses;
- be in PDF format; make sure that the PDF can be viewed on any platform.
Authors should choose the workshop when submitting through the EasyChair system: https://easychair.org/conferences/?conf=ximl23
We intend to publish the proceedings in the Springer CCIS book series.
Important dates
- Submission Deadline: Jul 29, 2023 (23:59 AoE Time)
- Notification of Acceptance: Aug 11, 2023
- Camera-Ready Versions Due: Aug 18, 2023
- Workshop date: September 30, 2023
Committee
Workshop Chairs
- Martin Atzmueller, Osnabrück University & DFKI, Germany
- Marina Höhne, University of Potsdam & ATB, Germany
- Tomáš Kliegr, Prague University of Economics and Business, Czech Republic
- Ute Schmid, University of Bamberg, Germany
Contact
If you have questions regarding the workshop, please contact
Martin Atzmueller: martin.atzmueller@uni-osnabrueck.de