Feed Forward Neural Networks

  • Ketkar N

Abstract

A central problem in machine learning is how to make an algorithm that will perform well not just on the training data, but also on new inputs. Many strategies used in machine learning are explicitly designed to reduce the test error, possibly at the expense of increased training error. These strategies are known collectively as regularization. As we will see, there are a great many forms of regularization available to the deep learning practitioner. In fact, developing more effective regularization strategies has been one of the major research efforts in the field.

Chapter 5 introduced the basic concepts of generalization, underfitting, overfitting, bias, variance and regularization. If you are not already familiar with these notions, please refer to that chapter before continuing with this one. In this chapter, we describe regularization in more detail, focusing on regularization strategies for deep models or models that may be used as building blocks to form deep models. Some sections of this chapter deal with standard concepts in machine learning. If you are already familiar with these concepts, feel free to skip the relevant sections. However, most of this chapter is concerned with the extension of these basic concepts to the particular case of neural networks.

In Sec. 5.2.2, we defined regularization as "any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error." There are many regularization strategies. Some put extra constraints on a machine learning model, such as adding restrictions on the parameter values. Some add extra terms in the objective function that can be thought of as corresponding to a soft constraint on the parameter values. If chosen carefully, these extra constraints and penalties can lead to improved performance.
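To make the penalty-term idea above concrete, here is a minimal sketch, not taken from the chapter itself: an L2 (weight decay) penalty added to a squared-error objective for linear regression. All names here (`l2_regularized_loss`, `lam`, the synthetic data) are illustrative assumptions, not the source's code.

```python
import numpy as np

# Minimal sketch (not from the source chapter): linear regression with an
# L2 weight-decay penalty, i.e. a "soft constraint" on the parameter values.
# Objective: J(w) = (1/n) * ||X w - y||^2 + lam * ||w||^2

def l2_regularized_loss(w, X, y, lam):
    residual = X @ w - y
    data_term = residual @ residual / len(y)   # mean squared training error
    penalty = lam * (w @ w)                    # soft constraint on the weights
    return data_term + penalty

def gradient(w, X, y, lam):
    # d/dw [(1/n)||Xw - y||^2 + lam ||w||^2] = (2/n) X^T (Xw - y) + 2 lam w
    return 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Plain gradient descent on the penalized objective; the penalty shrinks
# the learned weights toward zero relative to the unregularized solution.
w = np.zeros(5)
lam, lr = 0.1, 0.1
for _ in range(500):
    w -= lr * gradient(w, X, y, lam)

print("penalized loss:", round(float(l2_regularized_loss(w, X, y, lam)), 4))
print("learned weights:", np.round(w, 2))
```

The same shrinkage effect can instead be imposed as the hard constraint the abstract mentions, for example by projecting w back onto a norm ball after each update; the penalty form above is simply the soft version of that restriction.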

Citation (APA)

Ketkar, N. (2017). Feed Forward Neural Networks. In Deep Learning with Python (pp. 17–33). Apress. https://doi.org/10.1007/978-1-4842-2766-4_3
