A Survey on Accelerating the Classifier Training Using Various Boosting Schemes Within Cascades of Boosted Ensembles

Abstract

In this paper, we study how to address the issues that arise when training classifiers within the cascades of boosted ensembles (CoBE) framework. The framework gained popularity after its success in face detection, and researchers have since extended the original procedure in several directions, such as alternative boosting methods and alternative feature sets. A major challenge facing the framework today is its unreasonably long classifier training runtimes. These runtimes are an obstacle to broader adoption: they slow the development of practical object detection applications, and they make empirical testing of new theories and algorithms extremely demanding. A further limitation of the CoBE framework is its limited ability to train existing classifiers incrementally. At present, the most reliable way to incorporate a new data set into a live classifier is to train a classifier from scratch on the combination of the old and new data sets. This is wasteful: it lacks modularity and discards knowledge acquired in earlier training. To address these issues, this paper studies and compares alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude relative to the standard CoBE framework and introduce greater accountability into the procedure, while preserving the generalization ability of the resulting classifiers. The paper also studies a new process, boosting chain learning, for incrementally training CoBE classifiers, showing how existing classifiers can be extended without retraining them from scratch.
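To make the cascade idea concrete, the following is a minimal, self-contained sketch (not taken from the surveyed paper) of a CoBE-style classifier: each stage is a small AdaBoost ensemble of one-dimensional decision stumps, and a sample is rejected as soon as any stage's weighted score falls below that stage's threshold. All function names and the 1-D toy data are illustrative assumptions.

```python
import math

def train_stumps(X, y, rounds):
    # AdaBoost with 1-D threshold stumps.
    # X: list of floats, y: labels in {+1, -1}.
    n = len(X)
    w = [1.0 / n] * n                 # uniform initial sample weights
    stumps = []                       # each stump: (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with lowest weighted error.
        for t in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # stump weight
        stumps.append((t, pol, alpha))
        # Reweight: increase weight on misclassified samples.
        for i, (xi, yi) in enumerate(zip(X, y)):
            pred = pol if xi >= t else -pol
            w[i] *= math.exp(-alpha * yi * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def stage_score(stumps, x):
    # Weighted vote of the stage's stumps.
    return sum(a * (p if x >= t else -p) for t, p, a in stumps)

def cascade_classify(stages, x):
    # stages: list of (stumps, threshold).
    # Reject early at the first stage whose score is below its threshold;
    # only samples passing every stage are accepted. This early rejection
    # is what makes cascade evaluation fast on easy negatives.
    for stumps, thr in stages:
        if stage_score(stumps, x) < thr:
            return -1
    return +1
```

Training each stage mostly on the negatives that survive earlier stages (omitted here for brevity) is what gives the cascade its discriminative power at low evaluation cost.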
Although boosting chain learning can improve the detection rates of existing classifiers (e.g., AdaBoost, FloatBoost), it is not yet able to reduce their false detection rates.
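The incremental-extension idea can be sketched as follows: a hypothetical cascade is a list of frozen stages, and new data is incorporated by appending a freshly trained stage rather than retraining everything from scratch, so knowledge captured in earlier stages is preserved. The stage representation and function names below are illustrative assumptions, not the paper's API.

```python
def cascade_predict(stages, x):
    # stages: list of (score_fn, threshold).
    # A sample must pass every stage; any stage can reject early.
    for score, thr in stages:
        if score(x) < thr:
            return -1
    return +1

def extend_cascade(stages, new_score, new_threshold):
    # Append a stage trained only on new data; existing stages are
    # left untouched, so earlier training is not discarded.
    return stages + [(new_score, new_threshold)]

# Existing classifier: a single stage accepting x >= 2.
old = [(lambda x: x, 2.0)]

# Suppose new data reveals false positives in [2, 3); a new stage
# requiring x >= 3 tightens the cascade without retraining stage 1.
extended = extend_cascade(old, lambda x: x, 3.0)
```

Here `cascade_predict(old, 2.5)` accepts the sample while `cascade_predict(extended, 2.5)` rejects it, illustrating how an appended stage can suppress newly discovered false positives.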

CITATION STYLE

APA

Venkata Praneel, A. S., Srinivasa Rao, T., & RamaKrishna Murty, M. (2020). A Survey on Accelerating the Classifier Training Using Various Boosting Schemes Within Cascades of Boosted Ensembles. In Smart Innovation, Systems and Technologies (Vol. 169, pp. 809–825). Springer. https://doi.org/10.1007/978-981-15-1616-0_79
