Summary of ECML PKDD

The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases took place online in September 2021. Below is a summary of this event, which is the premier European machine learning and data mining conference.

Awards

At ECML PKDD 2021, the “Test of Time Award” went to the paper Influence and Passivity in Social Media by Daniel M. Romero, Wojciech Galuba, Sitaram Asur and Bernardo A. Huberman. The award recognises decade-old papers that have made the most impactful contribution to contemporary research. The paper proposes an algorithm to determine the influence and passivity of users based on their information-forwarding activity and shows that high popularity does not necessarily imply high influence, and vice versa. It has been cited more than 900 times and inspired a shift in focus from influencers alone towards the role of passive social media users.
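The mutual reinforcement between influence and passivity can be sketched as a HITS-style iteration. The code below is a simplified illustration under assumed semantics (`forward[i][j]` taken as the rate at which user j forwards user i's content); the variable names and update rules are illustrative, not the paper's exact IP algorithm.

```python
def influence_passivity(forward, iters=50):
    """HITS-style sketch: forward[i][j] = rate at which user j
    forwards content produced by user i (values in [0, 1])."""
    n = len(forward)
    infl = [1.0 / n] * n
    pasv = [1.0 / n] * n
    for _ in range(iters):
        # Influence accrues from being forwarded by passive users:
        # convincing a hard-to-move user counts for more.
        new_infl = [sum(forward[i][j] * pasv[j] for j in range(n) if j != i)
                    for i in range(n)]
        # Passivity accrues from absorbing influential content
        # without forwarding it on.
        new_pasv = [sum((1.0 - forward[j][i]) * infl[j] for j in range(n) if j != i)
                    for i in range(n)]
        si, sp = sum(new_infl) or 1.0, sum(new_pasv) or 1.0
        infl = [x / si for x in new_infl]
        pasv = [x / sp for x in new_pasv]
    return infl, pasv
```

On a toy network where every user forwards only user 0's content, user 0 ends up with the highest influence score, while the forwarding users accumulate comparatively little influence regardless of how active they are.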

The “Best Student Machine Learning Paper Award” went to Reparameterized Sampling for Generative Adversarial Networks, which proposes a novel sampling method for GANs. It allows general dependent proposals by reparameterizing the Markov chains into the latent space of the generator. Theoretical results demonstrate a closed-form Metropolis-Hastings acceptance ratio, and extensive experiments on synthetic and real datasets demonstrate improvements in sample efficiency. The “Best Student Data Mining Paper Award” went to Conditional Neural Relational Inference for Interacting Systems, which learns to model the dynamics of similar yet distinct groups of interacting objects that follow some common physical laws. A new model allows for conditional generation from any such group given its vectorial description, and is evaluated in the setting of modeling human gait and, in particular, pathological human gait.
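As background on Metropolis-Hastings sampling for GANs: the discriminator's odds D/(1-D) serve as a density-ratio estimate, which makes the acceptance ratio tractable. The sketch below shows only the simpler independent-proposal case (the paper's contribution, dependent proposals in latent space, is not reproduced here); `disc` and `propose` are assumed callables, not any library's API.

```python
import random

def mh_gan_step(x_cur, propose, disc, rng=random):
    """One Metropolis-Hastings step over generator samples.

    disc(x) is the discriminator's probability that x is real; its
    odds D/(1-D) estimate the density ratio p_data/p_gen, giving a
    tractable acceptance ratio for independent proposals.
    """
    x_new = propose()
    odds = lambda x: disc(x) / max(1.0 - disc(x), 1e-12)
    alpha = min(1.0, odds(x_new) / max(odds(x_cur), 1e-12))
    # Accept the proposal with probability alpha, else keep the current sample.
    return x_new if rng.random() < alpha else x_cur
```

Running many such steps filters the generator's raw samples towards the data distribution the discriminator has learned to recognise.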

Keynotes

The Value of Data for Personalization by Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business, presented methods for assessing the economic value of data in specific contexts. In particular, she analyzed the value of different types of data across several empirical applications. Susan touched on topics such as privacy and security related to data, especially big data, but also on the competitive advantage data can bring, while highlighting the other side, namely the barrier to entry for new actors. In different scenarios the value of data, and especially the diminishing returns of collecting more of it, comes out very differently. The usefulness of the data is also clearly linked to the distinction between prediction and causal inference: in the former, the key part is statistical power, while the latter adds further complexity in terms of experimental setting and sufficient variability within the historical records. One key finding across her different research studies was that collecting data from users over a longer period of time was often more valuable than collecting data from a larger number of users.

Pretrain the World by Jie Tang, Professor at Tsinghua University, presented how large-scale models pretrained on web texts have substantially advanced the state of the art in various AI tasks, such as natural language understanding, text generation, image processing, and multimodal modeling. Downstream task performance has also constantly increased in the past few years. Jie described three primary families: autoregressive models (e.g., GPT), autoencoding models (e.g., BERT), and encoder-decoder models. He then introduced China’s first homegrown super-scale intelligent model system, whose goal is to build an ultra-large-scale cognitive-oriented pretraining model focused on essential problems in general artificial intelligence from a cognitive perspective. As an example, he elaborated on GLM (General Language Model), a novel pretraining framework designed to address this challenge.
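The three families differ chiefly in which positions each token may attend to during pretraining. The toy masks below are purely illustrative (not any specific model's implementation): a 1 means position j is visible when encoding position i.

```python
def causal_mask(n):
    # Autoregressive (GPT-style): token i sees only positions <= i,
    # so the model learns to predict the next token left-to-right.
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    # Autoencoding (BERT-style): every token sees every position;
    # training reconstructs masked-out tokens from the full context.
    return [[1] * n for _ in range(n)]

def encoder_decoder_mask(n_src, n_tgt):
    # Encoder-decoder: each decoder position sees all source tokens
    # plus a causal view of its own previously generated tokens.
    src_part = [[1] * n_src for _ in range(n_tgt)]
    tgt_part = causal_mask(n_tgt)
    return [src_part[i] + tgt_part[i] for i in range(n_tgt)]
```

Frameworks that unify these objectives, such as GLM, essentially vary this visibility pattern over different spans of the input rather than committing to a single fixed mask.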

AI fairness in practice by Joaquin Quiñonero Candela, Distinguished Tech Lead at Facebook, shared learnings from his journey from deploying ML at Facebook scale to understanding questions of fairness in AI. He used multiple examples to illustrate that there is no single definition of AI fairness, but several that are in tension, each corresponding to a different moral interpretation of fairness. AI fairness is a process, and it is not primarily an AI issue; it therefore requires a multidisciplinary approach. While AI offers huge opportunities, especially at very large scale, it also poses huge ethical challenges to people and society. These include tradeoffs such as safety vs mass surveillance, freedom of speech vs misinformation and manipulation, personalisation vs bias and discrimination, efficient decision making vs inequity and insensitivity, and more.

Safety and robustness for deep learning with provable guarantees by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford, discussed how computing systems are becoming ever more complex, with decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture described progress with developing automated verification and testing techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations. The techniques exploit Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner, with provable guarantees. Novel algorithms are based on feature-guided search, games, global optimisation and Bayesian methods, and evaluated on state-of-the-art networks. The lecture concluded with an overview of the challenges in this field.
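One simple way to obtain such provable lower and upper bounds is interval bound propagation through the layers. The sketch below is a generic illustration of that idea, not the specific algorithms from the talk: it propagates an input box through one affine layer and a ReLU.

```python
def bound_affine(lb, ub, W, b):
    """Sound output bounds for y = W x + b when each x[k] lies in
    [lb[k], ub[k]]: a positive weight pulls the lower bound from lb,
    a negative one from ub."""
    out_lb, out_ub = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, u in zip(row, lb, ub):
            lo += w * (l if w >= 0 else u)
            hi += w * (u if w >= 0 else l)
        out_lb.append(lo)
        out_ub.append(hi)
    return out_lb, out_ub

def bound_relu(lb, ub):
    # ReLU is monotone, so interval bounds pass through elementwise.
    return [max(l, 0.0) for l in lb], [max(u, 0.0) for u in ub]
```

Chaining these operations over all layers yields an (often loose) over-approximation of the reachable output set: if the bound for the predicted class stays above all others across the whole input box, the perturbation provably cannot change the decision.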

Challenges

The conference featured three different challenges.

The Ariel Space Mission Challenge: The Ariel space mission is a European Space Agency mission to be launched in 2028. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve, and how to put our own Solar System in the galactic context. However, space mission data analysis is not easy, especially when observing a planet passing in front of its star that is often hundreds of lightyears away. At such a distance, one of the main issues is differentiating what is the planet, what is the star, and what is the instrument. In this challenge, participants try to identify and correct for the effects of spots on the star (aptly called star-spots) in the faint signals of the exoplanets' atmospheres, in the presence of signal distortions by the instrument. This is a data challenge that cannot be solved by conventional astrophysics methods, hence a machine learning data challenge is in order!

Discover the mysteries of the Maya Challenge: Remote sensing has greatly accelerated traditional archaeological landscape surveys in the forested regions of the ancient Maya. Typical exploration and discovery attempts, besides focusing on whole ancient cities, also focus on individual buildings and structures. Recently, there have been successful attempts at utilizing machine learning to identify ancient Maya settlements. These attempts, while relevant, focus on narrow areas and rely on high-quality aerial laser scanning data, which covers only a fraction of the region where the ancient Maya once settled. Satellite image data, on the other hand, is abundant and, more importantly, publicly available. Data from optical sensors is heavily affected by cloud cover, so combining it with radar data from the Sentinel-1 satellites provides an additional benefit. Integrating Sentinel data has been shown to improve performance on various land-use and land-cover classification tasks. This is the goal of the challenge: explore the potential of the Sentinel satellite data, in combination with the available lidar data, for integrated image segmentation in order to locate and identify “lost” ancient Maya settlements (aguadas, buildings and platforms) hidden under the thick forest canopy.

Farfetch Fashion Recommendations Challenge: The importance of online sales in the luxury fashion space has been growing at an accelerated pace in the last few years, as consumers of this once traditional industry now expect easy access to a worldwide network of brands and retailers. FARFETCH operates in this space and has the mission to bring together creators, curators, and consumers of fashion, all over the world. To be successful in this landscape, it's necessary to provide a tailored, personalised and authoritative fashion shopping experience. Recommendation systems play an important role in the user journey, allowing customers to discover products that speak to their style, complement their choices, or challenge them with bold new ideas. FARFETCH continuously works to improve its own recommender system with this ambitious goal in mind. In this challenge, you will attempt to solve this problem by building your own recommendations algorithms while working with a real-world dataset.

XAI (eXplainable AI)

Among the papers accepted in the “research track,” around 5% (11 papers) were directly focused on XAI research. Three papers have proposed novel XAI methods [1,2,3]. Six papers used explainable AI methods to solve a particular task [4,6,8,9,10,11]. One paper presented an evaluation method for saliency-based XAI methods [7] and one paper studied how to use explanations to improve model performance, and quantify the correlation between model accuracy and explanation quality [5].

Two workshops and tutorials (6% of the total) directly targeted XAI and interpretable machine learning (XKDD, the 3rd International Workshop and Tutorial on eXplainable Knowledge Discovery in Data Mining [12], and AIMLAI, Advances in Interpretable Machine Learning and Artificial Intelligence [13]). Their main research topics covered not only advanced XAI methods, but also ethical and legal aspects as well as user-centric interpretable approaches.

References

[1] Duval, A., & Malliaros, F. D. (2021). GraphSVX: Shapley Value Explanations for Graph Neural Networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases.

[2] Debbi, H. (2021, September). Causal Explanation of Convolutional Neural Networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 633-649). Springer, Cham.

[3] Looveren, A. V., & Klaise, J. (2021, September). Interpretable counterfactual explanations guided by prototypes. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 650-665). Springer, Cham.

[4] Saadallah, A., Jakobs, M., & Morik, K. (2021, September). Explainable Online Deep Neural Network Selection Using Adaptive Saliency Maps for Time Series Forecasting. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 404-420). Springer, Cham.

[5] Jia, Y., Frank, E., Pfahringer, B., Bifet, A., & Lim, N. (2021, September). Studying and Exploiting the Relationship Between Model Accuracy and Explanation Quality. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 699-714). Springer, Cham.

[6] Komárek, T., Brabec, J., & Somol, P. (2021, September). Explainable Multiple Instance Learning with Instance Selection Randomized Trees. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 715-730). Springer, Cham.

[7] Lu, X., Tolmachev, A., Yamamoto, T., Takeuchi, K., Okajima, S., Takebayashi, T., ... & Kashima, H. (2021, September). Crowdsourcing Evaluation of Saliency-based XAI Methods. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 431-446). Springer, Cham.

[8] Nguyen, A., Krause, F., Hagenmayer, D., & Färber, M. (2021, September). Quantifying Explanations of Neural Networks in E-Commerce Based on LRP. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 251-267). Springer, Cham.

[9] Lin, T. W., Sun, R. Y., Chang, H. L., Wang, C. J., & Tsai, M. F. (2021, September). XRR: Explainable Risk Ranking for Financial Reports. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 253-268). Springer, Cham.

[10] Wich, M., Mosca, E., Gorniak, A., Hingerl, J., & Groh, G. (2021, September). Explainable Abusive Language Classification Leveraging User and Network Data. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 481-496). Springer, Cham.

[11] Prasse, P., Brabec, J., Kohout, J., Kopp, M., Bajer, L., & Scheffer, T. (2021, September). Learning Explainable Representations of Malware Behavior. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 53-68). Springer, Cham.

[12] https://kdd.isti.cnr.it/xkdd2021/

[13] https://project.inria.fr/aimlai/
