Unsupervised Learning and Clustering Algorithms

  • Rojas, R.

Abstract

5.1 Competitive learning

The perceptron learning algorithm is an example of supervised learning. This kind of approach does not seem very plausible from the biologist's point of view, since a teacher is needed to accept or reject the output and adjust the network weights if necessary. Some researchers have proposed alternative learning methods in which the network parameters are determined as a result of a self-organizing process. In unsupervised learning, corrections to the network weights are not performed by an external agent, because in many cases we do not even know what solution we should expect from the network. The network itself decides what output is best for a given input and reorganizes accordingly.

We will make a distinction between two classes of unsupervised learning: reinforcement and competitive learning. In the first method each input produces a reinforcement of the network weights in such a way as to enhance the reproduction of the desired output. Hebbian learning is an example of a reinforcement rule that can be applied in this case. In competitive learning, the elements of the network compete with each other for the "right" to provide the output associated with an input vector. Only one element is allowed to answer the query, and this element simultaneously inhibits all other competitors. This chapter deals with competitive learning. We will show that we can conceive of this learning method as a generalization of the linear separation methods discussed in the previous two chapters.

5.1.1 Generalization of the perceptron problem

A single perceptron divides input space into two disjoint half-spaces. However, as we already mentioned in Chap. 3, the relative number of linearly separable Boolean functions in relation to the total number of Boolean functions converges to zero as the dimension of the input increases without bound. Therefore we would like to implement some of those not linearly separable functions using not a single perceptron but a collection of computing elements.

[Fig. 5.1. The two sets of vectors P and N]

Figure 5.1 shows a two-dimensional problem involving two sets of vectors, denoted respectively P and N. The set P consists of a more or less compact bundle of vectors. The set N consists of vectors clustered around two different regions of space.

[Fig. 5.2. Three weight vectors for the three previous clusters]

This classification problem is too complex for a single perceptron. A weight vector w cannot satisfy w · p ≥ 0 for all vectors p in P and w · n < 0 for all vectors n in N. In this situation it is possible to find three different vectors w₁, w₂ and w₃ which can act as a kind of "representative" for the vectors in each of the three clusters A, B and C shown in Figure 5.2. Each one of
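
Since the excerpt centers on competitive learning as a winner-take-all process, a short sketch may make the idea concrete. The Python snippet below is not from Rojas's text; it is a minimal illustration under common assumptions: weight and input vectors are normalized, units compete by scalar product, and only the winning unit is pulled toward the input while the rest are left unchanged (the "inhibition" of the competitors). All names, the learning rate, and the toy data are invented for this sketch.

    import numpy as np

    def competitive_step(weights, x, lr=0.1):
        """One winner-take-all step of competitive learning.

        The unit whose weight vector has the largest scalar product
        with x wins; only the winner is updated, moving toward x.
        """
        winner = int(np.argmax(weights @ x))
        weights[winner] += lr * (x - weights[winner])
        # Renormalize so competition by scalar product stays a
        # comparison of directions, not of vector lengths.
        weights[winner] /= np.linalg.norm(weights[winner])
        return winner

    # Toy data loosely in the spirit of clusters A, B and C of Fig. 5.2:
    # three bundles of unit vectors around three different directions.
    rng = np.random.default_rng(0)
    centers = np.array([[1.0, 0.2], [-0.3, 1.0], [-0.8, -0.6]])
    data = np.repeat(centers, 60, axis=0) + 0.1 * rng.standard_normal((180, 2))
    data /= np.linalg.norm(data, axis=1, keepdims=True)
    rng.shuffle(data)

    # Three competing units, randomly initialized on the unit circle.
    weights = rng.standard_normal((3, 2))
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)
    for x in data:
        competitive_step(weights, x)
    # After training, each row of `weights` should point roughly at one
    # cluster, acting as that cluster's "representative" w1, w2 or w3.

Renormalizing the winner keeps the scalar-product competition fair: without it, a unit with a long weight vector would win on magnitude rather than direction. Practical variants also guard against "dead" units that never win a single input, for example with a conscience mechanism that handicaps frequent winners.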

Citation (APA)

Rojas, R. (1996). Unsupervised Learning and Clustering Algorithms. In Neural Networks (pp. 99–121). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-61068-4_5
