Large datasets: A mixed method to adapt and improve their learning by neural networks used in regression contexts

Abstract

The purpose of this work is to further study the relevance of accelerating Monte-Carlo calculations for gamma-ray external radiotherapy through feed-forward neural networks. We previously presented a parallel incremental algorithm that builds neural networks of reduced size while providing high-quality approximations of the dose deposit [4]. Our parallel algorithm consists of an optimized decomposition of the initial learning dataset (also called the learning domain) into as many subsets as there are available processors. However, although that decomposition produces subsets of similar signal complexity, their sizes may differ considerably, which in turn may lead to differences in their learning times. This paper presents an efficient data-extraction method that allows a well-balanced training without any loss of signal information. As will be shown, the resulting irregular decomposition yields a significant improvement in the learning time of the global network. © 2011 International Federation for Information Processing.
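
The approach summarized above involves two data-preparation steps: decomposing the learning domain into one subset per processor with similar signal complexity, and then extracting a balanced amount of data from each subset so that per-processor learning times stay comparable. The sketch below illustrates those two steps in spirit only; it is not the authors' algorithm. The greedy complexity-balancing heuristic, the per-sample complexity scores, and the uniform random subsampling in `balance` are all assumptions made for illustration (the paper's extraction is specifically designed to avoid any loss of signal information, which plain random subsampling does not guarantee).

```python
# Minimal sketch (NOT the authors' method) of: (1) splitting a learning set
# into one subset per processor with similar total "signal complexity", and
# (2) extracting a balanced sample from each subset so training times match.
import numpy as np

def decompose(complexity, n_proc):
    """Greedy partition of sample indices into n_proc subsets whose total
    complexity is roughly balanced (a standard longest-processing-time
    heuristic; the per-sample `complexity` scores are assumed given)."""
    order = np.argsort(complexity)[::-1]     # heaviest samples first
    loads = np.zeros(n_proc)                 # accumulated complexity per subset
    subsets = [[] for _ in range(n_proc)]
    for i in order:
        p = int(np.argmin(loads))            # least-loaded subset so far
        subsets[p].append(i)
        loads[p] += complexity[i]
    return [np.asarray(s) for s in subsets]

def balance(subsets, seed=0):
    """Subsample every subset down to the size of the smallest one.
    Uniform sampling is a placeholder; the paper's extraction preserves
    the signal information of each subset."""
    rng = np.random.default_rng(seed)
    target = min(len(s) for s in subsets)
    return [rng.choice(s, size=target, replace=False) for s in subsets]

# Hypothetical usage: 10,000 samples, 4 processors, random complexity scores.
complexity = np.random.default_rng(1).random(10_000)
parts = balance(decompose(complexity, n_proc=4))
print([len(p) for p in parts])               # four equally sized subsets
```

Each balanced subset would then be handed to one processor to train its local network, consistent with the one-subset-per-processor decomposition described in the abstract.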

Citation (APA)

Sauget, M., Henriet, J., Salomon, M., & Contassot-Vivier, S. (2011). Large datasets: A mixed method to adapt and improve their learning by neural networks used in regression contexts. In IFIP Advances in Information and Communication Technology (Vol. 363 AICT, pp. 182–191). Springer New York LLC. https://doi.org/10.1007/978-3-642-23957-1_21
