Widening is a method in which parallel resources are used to find better solutions with greedy algorithms, rather than merely to find the same solutions more quickly. To date, every instance of Widening has relied on some form of communication between the parallel workers to keep them apart in the model space. Here we present, for the first time, a communication-free widened extension of a standard machine learning algorithm. By applying Locality Sensitive Hashing to the Bayesian networks’ Fiedler vectors, we demonstrate the ability to learn classifiers superior both to those of standard implementations and to those generated with the greedy heuristic alone.
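The core idea can be illustrated with random-hyperplane Locality Sensitive Hashing: each worker hashes a candidate model's Fiedler vector to a signature, and keeps only candidates whose signature maps to its own bucket, so workers stay in different regions of the model space without ever communicating. The sketch below is a minimal illustration under assumed details; the vectors, bit width, and bucket assignment are hypothetical, not the paper's exact scheme.

```python
import random

def lsh_signature(vec, hyperplanes):
    # Random-hyperplane LSH: one bit per hyperplane,
    # set by the sign of the dot product with the vector.
    return tuple(1 if sum(v * h for v, h in zip(vec, hp)) >= 0 else 0
                 for hp in hyperplanes)

def worker_bucket(signature, num_workers):
    # Deterministic bucket from the signature; a worker keeps a
    # candidate only if its bucket matches the worker's own id.
    return sum(bit << i for i, bit in enumerate(signature)) % num_workers

random.seed(42)
dim, bits = 4, 8  # assumed vector dimension and signature width
hyperplanes = [[random.gauss(0, 1) for _ in range(dim)]
               for _ in range(bits)]

# Hypothetical Fiedler vectors of two candidate network structures.
f_a = [0.10, 0.50, -0.30, 0.20]
f_b = [-0.50, -0.10, 0.60, -0.40]

sig_a = lsh_signature(f_a, hyperplanes)
sig_b = lsh_signature(f_b, hyperplanes)
print(worker_bucket(sig_a, 4), worker_bucket(sig_b, 4))
```

Because every worker uses the same seeded hyperplanes and the same bucket function, the partition of the model space is consistent across workers with no message passing, which is the communication-free property the abstract describes.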
Sampson, O. R., Borgelt, C., & Berthold, M. R. (2018). Communication-free widened learning of Bayesian network classifiers using hashed Fiedler vectors. In Lecture Notes in Computer Science (Vol. 11191 LNCS, pp. 264–277). Springer Verlag. https://doi.org/10.1007/978-3-030-01768-2_22