New methods for explaining survival models’ predictions
Hello there, Abdallah Alabdallah! You recently finished your PhD in Information Technology at Halmstad University, with a thesis titled “Towards trustworthy survival analysis with machine learning models”. What is the thesis about?

“My thesis focuses on survival analysis using machine learning models, particularly addressing challenges in explainability and performance. It introduces new methods for explaining survival models’ predictions and optimising their performance while maintaining explainability, which is crucial for applications like medical studies and predictive maintenance. I collaborated with researchers from Halmstad University and other institutions, including AGH University of Science and Technology and Jagiellonian University in Kraków, both in Poland.”

Abdallah Alabdallah defended his PhD thesis in January 2025.
What are the results of your research?
“My research led to several findings: effective post-hoc methods for explaining survival models, enhanced performance of survival models, and an inherently explainable survival model that balances transparency and predictive power. An unexpected discovery was that the gradient of the loss function for censored cases significantly improves model concordance, which was pivotal in developing better-performing survival models.”
Some of Abdallah Alabdallah’s findings
- Effective post-hoc methods to explain survival models, including feature attribution and counterfactual explanations
- Enhanced performance of survival models through novel loss functions and algorithms to handle noisy data
- An inherently explainable survival model (CoxSE) that balances transparency and predictive power
How can your research benefit society?
“My research can improve decision-making in critical areas, such as healthcare and industrial safety, by providing reliable and explainable survival predictions that foster trust in AI systems. My work bridges the gap between high-performance machine learning survival models and the need for their explainability, which is increasingly crucial in high-stakes applications.”

Abdallah Alabdallah’s research can improve decision-making in areas such as industrial safety.
And lastly, what are your plans for the future?
“I aim to continue researching trustworthy AI, focusing on practical implementations in healthcare and industry, while exploring further dimensions of trustworthiness, such as robustness and uncertainty.”
Abdallah Alabdallah’s educational background
Abdallah Alabdallah holds a Bachelor’s degree in Software Engineering from Damascus University, Syria, and a Master’s degree in Intelligent Systems from Halmstad University. His interest in the subject of explainability began while working on his Master’s thesis, which focused on explaining AI models designed to process and analyse images. He chose Halmstad University for his PhD studies because of its research in machine learning and its interdisciplinary approach to AI applications, as well as the University’s collaboration with important industry partners.
Abdallah Alabdallah’s PhD studies have improved how AI predicts and explains the expected length of time until an event occurs, like how long a machine will work or how a patient might respond to treatment. He has created tools to make these predictions more accurate and easier to understand, helping people trust AI in critical areas like healthcare and industrial safety.
Text: Emma Swahn
Photo: Magnus Karlsson, Pixabay, iStock
Note. Post-hoc methods are explanation methods that are applied after a model has made its predictions.
More information
Research at the School of Information Technology
Read Abdallah Alabdallah’s thesis: “Towards Trustworthy Survival Analysis with Machine Learning Models”
In Swedish
Nya metoder för att förklara överlevnadsmodellers förutsägelser