Workshop on Explainable AI for Healthcare (WEAH)

The Workshop on Explainable AI for Healthcare (WEAH) is a one-day workshop co-located with FLICS 2026 in Valencia, Spain (9–12 June 2026). The exact workshop day will be announced with the FLICS workshop schedule.

Practical information

Dates: 9–12 June 2026

Location: Valencia, Spain

Content: one-day workshop hosted by Halmstad University, co-located with FLICS 2026

FLICS 2026

Important dates (in sync with FLICS 2026)

Workshop Paper Submission: 28 April 2026

Paper Notification: 5 May 2026

Camera-Ready Submission: 15 May 2026

Scope and objectives

As machine learning applications are adopted into healthcare settings and evaluated by clinicians, the need for interpretable and explainable models arises. The demand for transparent models and decision support is heightened because many predictive models in healthcare are based on time-series data, which makes explainability particularly challenging.

Moreover, these time series are often nonuniformly sampled or consist of sparse events, as in log-based data such as hospital visits recorded in electronic health records (EHR), where the data modality changes depending on the type of visit.

To address these challenges, the Workshop on Explainable AI for Healthcare (WEAH) welcomes contributions in the domain of explainable AI for healthcare applications, including clinical, primary, specialised, and home care.

WEAH aims to bridge theory and practice by discussing novel algorithms and experiments. In particular, we invite work on improving transparency in AI systems used for patient care, diagnosis, prognosis, and health management. Contributions may include novel algorithms, studies with clinicians, reproducible experiments, toolkits and libraries, datasets and challenge tasks, case studies of real-world deployments, and interdisciplinary work.

Topics of interest

Topics of interest include, but are not limited to:

  • Post-hoc, model-agnostic and model-specific explainable AI methods for machine learning-based time-series predictors.
  • Ante-hoc, white-box methods for machine learning time-series predictors.
  • Explainable AI in a federated learning setting.
  • Explainable AI methods for models trained on multimodal EHR data.
  • Privacy‑preserving Explainable AI.
  • Case studies of Explainable AI deployment in specialised and primary care.
  • Explainable AI for Healthcare evaluation metrics.
  • Counterfactual explanations in a healthcare setting.
  • Surrogate models as explanations in a healthcare setting.
  • Explainable AI for Language Models.

Submission format

Formats include full papers (8 pages), short papers (6 pages), and posters. All accepted contributions will be given a designated presentation slot at the workshop.

Submission guidelines

Submit via EasyChair

Organisers and workshop chairs

Farzaneh Etminani

Farzaneh Etminani currently works at CAISR Health within the ISDD department at Halmstad University, Sweden. In parallel, she serves as an AI strategist at the Department for Research and Development in Region Halland, Sweden. Her professional work bridges academia and the public sector, with a strong focus on applying artificial intelligence in real-world healthcare settings. Her research interests include data mining, algorithm development, and artificial intelligence, particularly in relation to healthcare innovation, decision support, and data-driven improvement of clinical and organizational processes.

Amira Soliman

Amira Soliman is an associate professor of artificial intelligence and machine learning at Halmstad University, specialising in data science, graph analytics, and federated learning. Her areas of expertise include healthcare informatics, social network analysis, and distributed systems. She teaches the courses Artificial Intelligence for Healthcare, Smart Healthcare with Applications, Big Data Parallel Programming, and Applied Data Mining, and supervises MSc and PhD theses.

Grzegorz Nalepa

Grzegorz J. Nalepa obtained his PhD in Computer Science in 2004, with a dissertation titled “Meta-Level Approach to Integrated Process of Design and Implementation of Rule-Based Systems.” In 2015, he was appointed Docent at AGH University of Science and Technology. His academic achievements led to his promotion to Professor of Artificial Intelligence in 2019 by the President of the Republic of Poland. In 2020, he was further promoted to Full Professor at Jagiellonian University in Kraków, Poland. Most recently, in 2024, he was appointed Professor of Machine Learning at Halmstad University, funded by the ELLIIT Excellence Centre in Sweden.

Jens Lundström

Jens Lundström is a researcher and teacher in machine learning at Halmstad University. He is interested in applied and theoretical ML research for improving healthcare, patient experience, and quality of life. Jens is also the deputy head of the Intelligent Systems and Digital Design (ISDD) department. His current research focuses on explainable AI, representation learning, and privacy-preserving machine learning.
