We are pleased to announce the first International Workshop on Explainable and Interpretable Machine Learning (XI-ML)
co-located with KI 2020, Sept. 21, 2020, Bamberg, Germany.
Submission deadline (extended): July 23, 2020 (23:59 AoE).
Against the backdrop of the current scientific discourse on explainable AI (XAI), algorithmic transparency, interpretability, accountability, and explainability of algorithmic models and decisions, this workshop on explainable and interpretable machine learning tackles these themes from the modeling and learning perspective: it targets interpretable methods and models that are able to explain themselves and their output. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, trends, and challenges in this area.
Overall, we are interested in receiving papers on topics that include, but are not limited to, the following:
Abstract: After many decades of research, we have become fairly skilled at modeling and predicting some aspects of simpler model organisms, such as HIV, E. coli, or S. cerevisiae. However, the COVID-19 pandemic has shown that this is far from true for novel species. In fact, even for model organisms, the data and tools required to build highly collaborative and integrated models are still being developed. Although we cannot make reliable predictions for most biological problems, in many cases we have large collections of data with different modalities that have not been interlinked or fully exploited. These data are often siloed by namespaces, APIs, and formats, and they are also affected by data modeling and analysis choices. Based on a core set of design principles, we have developed a framework for constructing knowledge graphs that harmonizes biological entities and their relationships across many disparate data sources. We have applied this framework to COVID-19 as well as to environmental genomics projects. Using new graph learning methods, we can leverage complex data to compute similarities between different biological entities, something that has been difficult thus far. We are also applying graph embeddings to link prediction tasks, focusing on target identification and drug repurposing for COVID-19. While large knowledge graphs and associated methods are exciting developments, their complexity requires new tools, including tools for data search, introspection, and visualization. Advances in these areas will be critical to achieving explainability for both traditional and new learning methods.
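To make the link-prediction idea in the abstract above concrete, here is a minimal illustrative sketch in Python (NumPy only): it embeds a made-up drug/target/disease graph with a truncated SVD and ranks unobserved drug-disease edges by embedding similarity. This is not the framework described in the talk; the entity names, graph structure, and the simple spectral embedding are assumptions standing in for the actual knowledge-graph construction and graph-learning methods.

```python
# Illustrative sketch only: a toy knowledge graph of drugs, targets, and a disease.
# Nodes are embedded via truncated SVD of the adjacency matrix, and candidate
# drug-disease links are ranked by embedding similarity. All entities and edges
# below are hypothetical and not taken from the speaker's knowledge graph.
import numpy as np

nodes = ["drug_A", "drug_B", "protein_X", "protein_Y", "COVID-19"]
idx = {n: i for i, n in enumerate(nodes)}

# Known relationships: drug-target and target-disease associations (made up).
edges = [
    ("drug_A", "protein_X"),
    ("drug_B", "protein_Y"),
    ("protein_X", "COVID-19"),
    ("protein_Y", "COVID-19"),
]

# Symmetric adjacency matrix of the toy graph.
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

# Low-rank node embeddings from a truncated SVD (a simple spectral embedding).
k = 2
U, S, _ = np.linalg.svd(A)
emb = U[:, :k] * S[:k]

def link_score(u, v):
    """Score a candidate edge by the dot product of its node embeddings."""
    return float(emb[idx[u]] @ emb[idx[v]])

# Rank unobserved drug-disease pairs as drug-repurposing candidates.
for drug in ("drug_A", "drug_B"):
    print(drug, "-> COVID-19:", round(link_score(drug, "COVID-19"), 3))
```

In practice, real knowledge-graph link prediction would use far richer, typed graphs and learned embedding models; the sketch only illustrates the general scoring-by-embedding-similarity pattern mentioned in the abstract.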
Short Bio: P. Joachimiak, PhD, is a staff researcher and software developer in the Environmental Genomics and Systems Biology Division at Lawrence Berkeley National Laboratory (LBNL).
Submissions: We solicit short position papers (2-4 pages) and peer-reviewed research papers (8-16 pages) in LNCS format.
All submitted papers must be entered into the reviewing system: https://easychair.org/conferences/?conf=ximl2020
The proceedings will be published in the CEUR Workshop Proceedings series. We will also consider publishing a selection of extended papers in a special issue of an international journal.
If you have questions regarding the workshop, please contact Martin Atzmueller: m.atzmuller@uvt.nl