Trace similarity is a prerequisite for several process mining tasks, e.g., identifying process variants and anomalies. Many similarity metrics have been presented in the literature, but the similarity metric itself is seldom subjected to controlled evaluation. Instead, metrics are usually demonstrated in conjunction with downstream tasks, e.g., process model discovery, and evaluated qualitatively or with limited comparison. In this paper, we isolate similarity metrics from downstream tasks and compare them with respect to evaluation measures adapted from the metric learning and clustering literature. We present a comparison of 18 similarity metrics across 4 evaluation measures and 12 event logs. Friedman and Nemenyi tests for statistical significance show that certain similarity metrics consistently outperform the others on some evaluation measures, but their mean ranks vary across evaluation measures. One similarity metric, based on a weighted eventually-follows relation, stands out as consistently outperforming the rest, and the simplest n-gram similarity metrics also perform well. Our results demonstrate that the choice of evaluation measure determines which characteristics of a metric are revealed. This study may serve as a baseline for benchmarking future work on trace similarity, and it describes tools for quantitative evaluation that we hope will encourage empirical rigor in future work.
Back, C. O., & Simonsen, J. G. (2023). Comparing Trace Similarity Metrics Across Logs and Evaluation Measures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13901 LNCS, pp. 226–242). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-34560-9_14