The subspace selection problem seeks a subspace that maximizes an objective function under some constraint. This problem includes several important machine learning problems, such as principal component analysis and the sparse dictionary selection problem. Often, these problems can be solved, exactly or approximately, by greedy algorithms. Here, we are interested in why these problems can be solved greedily, and in which classes of objective functions and constraints admit this property. In this study, we exploit the fact that the set of subspaces forms a lattice and formulate these problems as optimization problems on lattices. We then introduce a new class of functions on lattices, directional DR-submodular functions, to characterize the approximability of these problems. We prove that the objective functions of principal component analysis, the sparse dictionary selection problem, and their generalizations are monotone directional DR-submodular functions. We also prove that the "quantum version" of the cut function is a non-monotone directional DR-submodular function. Using these results, we propose new solvable feature selection problems (generalized principal component analysis and the quantum maximum cut problem), and improve the approximation ratio of the sparse dictionary selection problem in certain instances. We show that, under several types of constraints, the directional DR-submodular function maximization problem can be solved efficiently with provable approximation factors.
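To make the greedy framing concrete, the following Python snippet is a minimal sketch, not the paper's algorithm: it assumes a fixed pool of candidate directions (as in dictionary selection) and greedily grows a subspace to maximize the explained variance of the data, a monotone objective of the kind the paper shows to be directional DR-submodular. The function names (`explained_variance`, `greedy_subspace_selection`) and the candidate pool are hypothetical illustrations.

```python
import numpy as np

def explained_variance(X, basis):
    """Variance of X captured by the span of the given basis vectors."""
    if not basis:
        return 0.0
    B = np.stack(basis, axis=1)      # d x k matrix with basis vectors as columns
    Q, _ = np.linalg.qr(B)           # orthonormalize the span
    proj = X @ Q                     # project samples onto the subspace
    return float(np.sum(proj ** 2))

def greedy_subspace_selection(X, candidates, k):
    """Greedily pick k candidate directions whose span maximizes the
    variance of X it explains (a hedged stand-in for greedy maximization
    over the subspace lattice)."""
    chosen = []
    remaining = list(candidates)
    for _ in range(k):
        gains = [explained_variance(X, chosen + [c]) for c in remaining]
        best = int(np.argmax(gains))
        chosen.append(remaining.pop(best))
    return chosen

# Toy usage: data with one dominant direction; greedy recovers it first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ np.diag([3.0, 1.0, 1.0, 0.5, 0.1])
candidates = [np.eye(5)[i] for i in range(5)]  # axis-aligned candidate pool
print(greedy_subspace_selection(X, candidates, k=2))
```

Under this monotone objective, each greedy step adds the candidate with the largest marginal gain in explained variance, which is the mechanism whose approximation guarantees the paper analyzes via directional DR-submodularity.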
Citation:
Nakashima, S., & Maehara, T. (2019). Subspace selection via DR-submodular maximization on lattices. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019) (pp. 4618–4625). AAAI Press. https://doi.org/10.1609/aaai.v33i01.33014618