REPR: Rule-Enhanced Penalized Regression

  • Eckstein J
  • Kagawa A
  • Goldberg N

Abstract

This article describes a new rule-enhanced penalized regression procedure for the generalized regression problem of predicting scalar responses from observation vectors in the absence of a preferred functional form. It enhances standard L1-penalized regression by adding dynamically generated rules, that is, new 0-1 covariates corresponding to multidimensional “box” sets. In contrast to prior approaches to this class of problems, we draw heavily on standard (but non-polynomial-time) mathematical programming techniques, enhanced by parallel computing. We identify and incorporate new rules using a form of classical column generation and solve the resulting pricing subproblem, which is NP-hard, either exactly by a specialized parallel branch-and-bound method or by a greedy heuristic based on Kadane’s algorithm. The resulting rule-enhanced regression method can be computation-intensive when we solve the subproblems exactly, but our computational tests suggest that it outperforms prior methods at making accurate and stable predictions from relatively small data samples. Through selective use of our greedy heuristic, we can make our method’s run time generally competitive with some established methods, without sacrificing prediction performance. We call our method’s pricing subproblem rectangular maximum agreement.
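
The abstract mentions a greedy heuristic based on Kadane’s algorithm for the rectangular maximum agreement pricing subproblem. Purely as an illustration of that underlying technique, and not as the article’s actual heuristic, the following Python sketch shows the classical Kadane-based search for a maximum-sum axis-aligned box in two dimensions; the function names, the residual-weight grid, and the toy data are all hypothetical.

from typing import List, Tuple


def kadane(values: List[float]) -> Tuple[float, int, int]:
    # Return (best_sum, start, end) for the maximum-sum contiguous segment.
    best_sum, best_start, best_end = values[0], 0, 0
    cur_sum, cur_start = 0.0, 0
    for i, v in enumerate(values):
        if cur_sum <= 0.0:
            cur_sum, cur_start = v, i
        else:
            cur_sum += v
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end


def max_sum_box(weights: List[List[float]]) -> Tuple[float, Tuple[int, int, int, int]]:
    # Find the row/column range (a 2-D box) maximizing the sum of entries.
    # In a column-generation setting, each entry would be a residual-based
    # score for observations falling in that grid cell (an assumption here).
    m, n = len(weights), len(weights[0])
    best = (float("-inf"), (0, 0, 0, 0))
    for top in range(m):
        col_sums = [0.0] * n  # running column sums over rows top..bottom
        for bottom in range(top, m):
            for j in range(n):
                col_sums[j] += weights[bottom][j]
            s, left, right = kadane(col_sums)
            if s > best[0]:
                best = (s, (top, bottom, left, right))
    return best


if __name__ == "__main__":
    # Toy 4x4 grid of signed weights; the best box covers the positive cluster.
    grid = [
        [-1.0, -2.0, 0.5, -1.0],
        [-1.0, 3.0, 2.0, -2.0],
        [0.5, 2.5, 1.5, -1.0],
        [-2.0, -1.0, -0.5, -3.0],
    ]
    print(max_sum_box(grid))  # -> (9.0, (1, 2, 1, 2))

This runs in O(m^2 n) time for an m-by-n grid; the exact pricing subproblem in the article is NP-hard in general, which is why the abstract pairs this kind of fast heuristic with a parallel branch-and-bound method.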

Citation (APA)
Eckstein, J., Kagawa, A., & Goldberg, N. (2019). REPR: Rule-Enhanced Penalized Regression. INFORMS Journal on Optimization, 1(2), 143–163. https://doi.org/10.1287/ijoo.2019.0015
