Deep Specification Mining with Attention

Abstract

In this paper, we improve the deep-learning-based specification mining method proposed in [16]. We find that when the length of a single trace exceeds 25 or the number of tracked methods exceeds 15, the performance of the original neural network model drops significantly. Accordingly, we propose a new model with an attention mechanism to address the original model's forgetting problem when learning long sequences. First, test cases are used to generate as many program traces as possible, each covering a complete execution path. The trace set is then used to train a language model based on Recurrent Neural Networks (RNNs) with attention. From these traces, a Prefix Tree Acceptor (PTA) is built, and features are extracted using the proposed model. These features are then used by clustering algorithms to merge similar states in the PTA, yielding multiple finite automata. Finally, a heuristic algorithm evaluates the quality of these automata and selects the one with the highest score as the final specification automaton.
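The PTA construction step described above can be sketched as follows. This is not the authors' implementation; it is a minimal illustration in Python, where each state corresponds to a unique prefix of method calls observed in the traces, and the method names in the usage example are hypothetical.

```python
def build_pta(traces):
    """Build a Prefix Tree Acceptor from a set of program traces.

    Each state is a unique prefix of method calls; state 0 is the
    root (empty prefix). Returns a dict mapping
    (state, method) -> next_state.
    """
    transitions = {}
    next_state = 1
    for trace in traces:
        state = 0
        for method in trace:
            key = (state, method)
            if key not in transitions:
                # Unseen prefix: allocate a fresh state.
                transitions[key] = next_state
                next_state += 1
            state = transitions[key]
    return transitions

# Usage: traces over a hypothetical iterator-like API.
traces = [
    ["init", "hasNext", "next", "hasNext"],
    ["init", "hasNext", "next", "next"],
]
pta = build_pta(traces)
```

In the full pipeline, each PTA state would then be assigned a feature vector by the attention-based language model, and similar states would be merged by clustering to obtain candidate finite automata.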

Citation (APA)

Cao, Z., & Zhang, N. (2020). Deep Specification Mining with Attention. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12273 LNCS, pp. 186–197). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58150-3_15