Twin extreme learning machine (TELM) improves on the traditional extreme learning machine (ELM) classification algorithm by learning a symmetric pair of nonparallel hyperplanes, one for each class. Although TELM has been widely studied and applied in machine learning, the need to solve two quadratic programming problems (QPPs) during training has greatly limited its development. In this paper, we propose a novel TELM framework called Lagrangian regularized twin extreme learning machine (LRTELM). One significant advantage of LRTELM over TELM is that it implements the structural risk minimization principle by introducing a regularization term. In addition, we replace the usual l1-norm of the slack-variable vector with its squared l2-norm, which makes the objective functions strongly convex. Furthermore, we design a simple and fast iterative algorithm for LRTELM that only requires solving a pair of linear systems at each step, avoiding the two QPPs altogether. Finally, we extend LRTELM to semi-supervised learning by introducing manifold regularization, obtaining a Lagrangian semi-supervised regularized twin extreme learning machine (Lap-LRTELM) that improves performance when only a few labeled samples are available. Experimental results on most of the evaluated datasets show that the proposed LRTELM and Lap-LRTELM are competitive with state-of-the-art algorithms in terms of accuracy and efficiency.
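The abstract's central computational claim is that the regularization term and the squared-l2 slack penalty make each dual subproblem strongly convex, so a Lagrangian-style fixed-point iteration over linear systems can replace a QP solver. The NumPy sketch below illustrates that idea on a toy twin formulation; the sigmoid feature map, the TBSVM-style dual it assumes, the parameters c1 and c2, and the step size gamma are all illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a Lagrangian fixed-point solver for a twin-ELM-style
# dual, assuming a TBSVM-like objective with squared slacks.  All parameter
# choices here are illustrative, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Random ELM hidden layer (sigmoid) with a bias column appended."""
    return np.hstack([1.0 / (1.0 + np.exp(-(X @ W + b))),
                      np.ones((X.shape[0], 1))])

def solve_dual(Q, gamma, tol=1e-8, max_iter=500):
    """Fixed-point iteration for min_{alpha >= 0} 0.5 a^T Q a - e^T a:
        alpha <- Q^{-1} (e + ((Q alpha - e) - gamma * alpha)_+).
    Because the squared slack term makes Q positive definite, only linear
    algebra (one inverse of Q) is needed -- no QP solver.
    """
    e = np.ones(Q.shape[0])
    Q_inv = np.linalg.inv(Q)
    alpha = Q_inv @ e
    for _ in range(max_iter):
        grad = Q @ alpha - e
        alpha_new = Q_inv @ (e + np.maximum(grad - gamma * alpha, 0.0))
        if np.linalg.norm(alpha_new - alpha) < tol:
            break
        alpha = alpha_new
    return alpha

def fit_twin_plane(HA, HB, c1=1.0, c2=1.0):
    """One of the two symmetric subproblems: a plane close to HA, far from HB."""
    d = HA.shape[1]
    P_inv = np.linalg.inv(HA.T @ HA + c2 * np.eye(d))    # regularized Gram matrix
    Q = HB @ P_inv @ HB.T + np.eye(HB.shape[0]) / c1     # strongly convex dual
    alpha = solve_dual(Q, gamma=1.9 / c1)                # gamma < 2/c1, the standard
                                                         # LSVM convergence condition
    return -P_inv @ HB.T @ alpha                         # plane weights [w; b]

# Toy two-class problem.
XA = rng.normal(loc=-1.0, size=(40, 2))                  # class +1
XB = rng.normal(loc=+1.0, size=(40, 2))                  # class -1
W, b = rng.normal(size=(2, 30)), rng.normal(size=30)
HA, HB = elm_features(XA, W, b), elm_features(XB, W, b)

u1 = fit_twin_plane(HA, HB)                              # plane for class +1
u2 = fit_twin_plane(HB, HA)                              # symmetric problem, class -1

Ht = elm_features(np.vstack([XA, XB]), W, b)
# Assign each point to the class whose hyperplane is nearer.
pred = np.where(np.abs(Ht @ u1) / np.linalg.norm(u1[:-1])
                <= np.abs(Ht @ u2) / np.linalg.norm(u2[:-1]), 1, -1)
print("training accuracy:", np.mean(pred == np.array([1] * 40 + [-1] * 40)))
```

The key effect of the squared slack penalty is visible in the dual: the usual box constraint 0 <= alpha <= c1 collapses to alpha >= 0 (with the extra I/c1 term in Q), which is what allows the plus-function fixed point above to stand in for a QP solver.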
Ma, J., & Yu, G. (2022). Lagrangian Regularized Twin Extreme Learning Machine for Supervised and Semi-Supervised Classification. Symmetry, 14(6), 1186. https://doi.org/10.3390/sym14061186