Learning Implicitly with Noisy Data in Linear Arithmetic


Abstract

Robust learning in expressive languages with real-world data continues to be a challenging task. Many conventional methods appeal to heuristics that offer no assurance of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called "implicit" learning has shown tremendous promise in terms of obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework retains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
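
The core technique admits a compact illustration. Below is a minimal sketch, assuming a "decide and count" reading of implicit PAC-semantics learning over interval-noisy data: a linear query a·x <= b is accepted if it is witnessed on all but an epsilon fraction of the interval-valued examples, and the tightest such b plays the role of a learned linear programming objective constraint. The names (box_max, holds_implicitly, implicit_bound) and the synthetic data are illustrative assumptions, not the authors' implementation.

import numpy as np

def box_max(a, lo, hi):
    # Maximum of the linear form a.x over the interval box [lo, hi]:
    # take the upper endpoint where the coefficient is positive,
    # the lower endpoint otherwise.
    return np.sum(np.where(a > 0, a * hi, a * lo))

def holds_implicitly(a, b, boxes, epsilon):
    # Accept the query a.x <= b iff it is witnessed on at least a
    # (1 - epsilon) fraction of the interval-valued examples.
    witnessed = sum(box_max(a, lo, hi) <= b for lo, hi in boxes)
    return witnessed >= (1.0 - epsilon) * len(boxes)

def implicit_bound(a, boxes, epsilon):
    # Tightest b such that a.x <= b is implicitly valid: the
    # ceil((1 - epsilon) * n)-th smallest per-example maximum.
    maxima = sorted(box_max(a, lo, hi) for lo, hi in boxes)
    k = int(np.ceil((1.0 - epsilon) * len(maxima))) - 1
    return maxima[k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical noisy observations of points satisfying
    # x1 + 2*x2 <= 10, each reported only as a box of width 0.2.
    pts = rng.uniform(0, 4, size=(200, 2))
    pts = pts[pts @ np.array([1.0, 2.0]) <= 10.0]
    boxes = [(p - 0.1, p + 0.1) for p in pts]

    a = np.array([1.0, 2.0])
    print(holds_implicitly(a, 10.5, boxes, epsilon=0.05))  # expected: True
    print(implicit_bound(a, boxes, epsilon=0.05))          # a bound near 10

Because each per-example check only maximizes a linear form over a box, which takes time linear in the number of variables, the whole procedure runs in time polynomial in the number of examples and variables, in the spirit of the complexity guarantees the abstract describes.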

Citation (APA)

Rader, A. P., Mocanu, I. G., Belle, V., & Juba, B. (2021). Learning implicitly with noisy data in linear arithmetic. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1410–1417). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/195
