In many real-world applications, the samples/features acquired have a natural spatial or temporal order, and the magnitudes of adjacent samples/features are typically close to each other. Meanwhile, in the high-dimensional scenario, identifying the most relevant samples/features is also desired. In this paper, we consider a regularized model that can simultaneously identify important features and group similar features together. The model is based on a penalty called the Absolute Fused Lasso (AFL). The AFL penalty encourages sparsity in the coefficients as well as in the successive differences of their absolute values, i.e., local constancy of the coefficient components in absolute value. Due to the non-convexity of AFL, it is challenging to develop efficient algorithms to solve the optimization problem. To this end, we employ Difference of Convex functions (DC) programming to optimize the proposed non-convex problem. At each DC iteration, we adopt the proximal algorithm to solve a convex regularized sub-problem. One of the major contributions of this paper is a highly efficient algorithm to compute the proximal operator. Empirical studies on both synthetic and real-world data sets from Genome-Wide Association Studies demonstrate the efficiency and effectiveness of the proposed approach in simultaneously identifying important features and grouping similar features.
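A minimal sketch of the resulting optimization problem, assuming a least-squares loss and introducing the notation $A$ (design matrix), $y$ (response), $x$ (coefficients), and regularization parameters $\lambda_1, \lambda_2$ (none of which are fixed by the abstract above):

\min_{x \in \mathbb{R}^{p}} \; \tfrac{1}{2}\,\|Ax - y\|_2^{2} \;+\; \lambda_1 \sum_{i=1}^{p} |x_i| \;+\; \lambda_2 \sum_{i=1}^{p-1} \big|\, |x_i| - |x_{i+1}| \,\big|

The first penalty term promotes sparsity, while the second pulls adjacent coefficients toward a common absolute value; the second term is non-convex in $x$, which is what the DC programming scheme described above is meant to handle.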
Yang, T., Liu, J., Gong, P., Zhang, R., Shen, X., & Ye, J. (2016). Absolute fused lasso and its application to genome-wide association studies. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1955–1964). Association for Computing Machinery. https://doi.org/10.1145/2939672.2939827