The longest common subsequence problem for small alphabet size between many strings


Abstract

Given two or more strings (for example, DNA or amino acid sequences), the longest common subsequence (LCS) problem is to determine the longest subsequence obtainable by deleting zero or more symbols from each string. Many papers have given algorithms for computing an LCS of two strings, but no efficient algorithm was known for computing an LCS of more than two strings. This paper proposes a method for efficiently computing the LCS of three or more strings over a small alphabet. Specifically, our algorithm computes the LCS of d (≥ 3) strings of length n over an alphabet of size s in O(nsd + Dsd(log^(d-3) n + log^(d-2) s)) time, where D is the number of dominant matches and is much smaller than n^d. Through computational experiments, we demonstrate the effectiveness of our algorithm.
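For reference, the two-string case mentioned in the abstract is solved by the classic dynamic program; the sketch below (not the paper's dominant-match algorithm, just the standard O(mn) baseline it improves on for d ≥ 3) computes one LCS of two strings:

```python
def lcs(a: str, b: str) -> str:
    """Return one longest common subsequence of a and b
    via the classic O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack through the table to recover one LCS.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For d strings this table generalizes to a d-dimensional array of size n^d, which is exactly the cost the paper's dominant-match approach avoids when D is much smaller than n^d.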

Citation (APA)

Hakata, K., & Imai, H. (1992). The longest common subsequence problem for small alphabet size between many strings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 650 LNCS, pp. 469–478). Springer Verlag. https://doi.org/10.1007/3-540-56279-6_99
