A discussion of image processing must begin by considering how images are represented. The most commonly used format is a 2-dimensional array for monochrome images, while colour images are usually represented by three such arrays. The array format is generated by most cameras. While other representations offer advantages in certain, rather restricted situations, they must usually be derived from an image array.

Grey-scale image processing functions can be arranged into several major categories. The simplest, called monadic operators, process one image at a time, performing a fixed intensity mapping from each of its pixels to the pixel at the corresponding position in the output image. Dyadic functions operate on two images; the intensities of pixels at corresponding addresses are combined to calculate a value for the pixel at the same address in the output image. Local operators process a single image: to calculate the intensity at a point (i,j) in the output image, several pixels surrounding point (i,j) in the input image are combined. For example, the average, maximum, or minimum intensity within the 3 × 3 pixel neighbourhood surrounding point (i,j) might be computed. This process is repeated for all (i,j). Numerous filtering functions can be invented by extending the concept of 'neighbourhood'. The function applied to the pixel values within a given neighbourhood can be linear or non-linear; the latter gives rise to a useful family known as morphology. Another group is based on the application of a monadic function that is derived in some way from the intensity histogram, or other statistical features, of the input image. Another, varied family is based on Fourier and other integral transforms of the 2-dimensional intensity function. Image warping and geometric transforms form yet another family of grey-scale image processing functions. Some grey-scale operators have obvious counterparts for binary images.
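The monadic class described above can be sketched in a few lines of Python (not QT). Intensity negation is used here purely as an illustrative mapping, and the 8-bit grey-level range (0–255) and the 2-D-list image representation are assumptions:

```python
# Sketch of a monadic operator: intensity negation.
# Assumes an 8-bit grey-scale image stored as a 2-D list (row-major).
# Every output pixel depends only on the input pixel at the same address.
def negate(image):
    return [[255 - p for p in row] for row in image]

img = [[0, 100], [200, 255]]
print(negate(img))  # -> [[255, 155], [55, 0]]
```

Any fixed intensity mapping (thresholding, gamma adjustment, look-up tables) fits the same one-pixel-in, one-pixel-out pattern.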
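A dyadic operator can be sketched similarly. Absolute difference is chosen here only as one common example of combining two images; addition or pixelwise maximum follow the identical pattern. The equal-size assumption and 2-D-list representation are assumptions for the sketch:

```python
# Sketch of a dyadic operator: pixel-by-pixel absolute difference
# of two equally sized grey-scale images (2-D lists). The output
# pixel at (i, j) combines the input pixels at (i, j) in both images.
def absolute_difference(a, b):
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

print(absolute_difference([[10, 50], [200, 30]],
                          [[60, 50], [100, 90]]))  # -> [[50, 0], [100, 60]]
```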
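The local-operator idea, averaging over a 3 × 3 neighbourhood, can be sketched as follows. Copying border pixels unchanged is an assumption of this sketch; border handling is a design choice, not part of the definition:

```python
# Sketch of a local (neighbourhood) operator: the average intensity
# within the 3x3 neighbourhood of each interior pixel (i, j).
# Border pixels, which lack a full neighbourhood, are copied unchanged.
def local_average_3x3(image):
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; borders stay as-is
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            total = sum(image[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = total // 9  # integer grey level
    return out

print(local_average_3x3([[9, 9, 9],
                         [9, 0, 9],
                         [9, 9, 9]]))  # centre becomes (8*9 + 0) // 9 = 8
```

Replacing `sum(...) // 9` with `max(...)` or `min(...)` gives the maximum and minimum filters mentioned above; enlarging the offset ranges extends the neighbourhood.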
However, the latter also require functions such as object counting, feature detection, and measurement of position, orientation, size and shape. Binary image processing operators do not fall neatly into well-defined categories but can often be described functionally by drawing on everyday concepts such as those of a skeleton, rubber band, compass and ruler, and on our intuitive understanding of shape fitting. The processing of colour images is not covered here as it is discussed in detail elsewhere in this book (>Chap. 16). For the moment, let it suffice to say that grey-scale operators can be applied to the individual RGB, HSI or CMYK components of a colour image. The purpose of this chapter is to outline some of the basic techniques used in the development of industrial machine vision systems. These are discussed in sufficient detail to understand the key ideas outlined elsewhere in this book. In the following discussion we will frequently indicate the equivalent QT operators for the vision techniques described. (QT is an interactive image processing system and is described in >Chap. 21. >Chap. 41 describes the function of each QT operator.) QT commands appear in square brackets. In certain cases, sequences of QT commands are needed to perform an operation; these are similarly listed.
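Object counting, mentioned above for binary images, can be sketched as connected-component labelling. The 4-connectivity rule and the 0/1 pixel encoding are assumptions of this sketch, and the flood fill shown is only one of several standard labelling strategies:

```python
# Sketch of binary object counting via connected-component labelling.
# Pixels are 0 (background) or 1 (object); 4-connectivity is assumed.
# An iterative flood fill marks every pixel belonging to one object,
# so each fill started from an unvisited object pixel counts one object.
def count_objects(image):
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if image[i][j] == 1 and not seen[i][j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] == 1 and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

print(count_objects([[1, 0, 1],
                     [0, 0, 0],
                     [1, 1, 0]]))  # -> 3 (two isolated pixels, one pair)
```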
Batchelor, B. G., & Whelan, P. F. (2012). Basic machine vision techniques. In Machine Vision Handbook (pp. 566–623). Springer London. https://doi.org/10.1007/978-1-84996-169-1_14