Reflectance Modeling in Machine Vision: Applications in Image Analysis and Synthesis

  • Gruna R
  • Irgenfried S

Abstract

Imaging and analyzing objects under different illumination and viewing directions has long been an active research area in both machine vision and computer graphics. While computer graphics aims to synthesize realistic images from abstract scene descriptions, machine vision is concerned with deducing properties of a scene from its interaction with light. For this reason, many algorithms from both disciplines rely on accurate, physically correct modeling of how light reflects off surfaces. In this chapter we show how machine vision can greatly benefit from reflectance modeling in image analysis as well as image synthesis.

In the first part we show how the reflectance measurement and modeling of real-world materials can be used to simulate machine vision systems by synthesizing images with computer graphics. The design process of machine vision systems requires a great deal of domain-specific experience and often proceeds by trial and error. Many aspects have to be taken into account, and often the construction of a prototype system is inevitable. Finding the right camera position(s) and achieving satisfactory illumination of the scene and the inspected objects and surfaces is a difficult process, as is the training of the classification and decision algorithms. Simulating machine vision systems with computer graphics can support and shorten this process and can even lead to better results than a manual setup. Traditionally, computer graphics systems are designed to create images that are presented to human eyes; the goal is to make the viewer judge the images as, for example, believable, logical, or beautiful. For machine vision systems, however, physical correctness is the most important factor. To achieve this goal, several areas of computer graphics have to be examined with one question in mind: what is required to create synthetic images that can be used as ground-truth data for image processing, classification, and decision algorithms? Crucial parameters for machine vision systems are the reflection properties of the object surfaces in the scene. Modeling how light behaves when it hits an object surface has been an important research area in computer graphics from the beginning. Measuring the Bidirectional Reflectance Distribution Function (BRDF) of real-world materials and fitting the data to mathematical models opened the path to very realistic-looking images and also brought more physical correctness to synthetic images. We present a machine vision simulator that creates synthetic images showing the inspected objects under varying perspectives and illumination conditions and compares them with images captured by a real camera. The synthetic and real-world images are compared on an image-pair basis, and the results of different image processing algorithms applied to the images are investigated as well. To achieve a high degree of realism, we use camera calibration to compute the intrinsic and extrinsic parameters of the real camera and use those parameters in the simulator's camera model. We have also developed conversion algorithms between real-world camera parameters, such as focal length or sensor dimensions, and the parameters of computer graphics camera models. Our simulator applies measured BRDF values to CAD models to simulate diffuse and specular behavior with high quality for isotropic and even anisotropic materials.
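As an illustration of the kind of model fitting described above, the following is a minimal sketch, assuming a simple isotropic Lambertian-plus-Phong model and synthetic stand-in measurements; the chapter works with actually measured BRDF data and its model choices may differ, so all names and parameters here are our own.

```python
import numpy as np
from scipy.optimize import least_squares

def phong_brdf(params, cos_r):
    """Isotropic Lambertian + Phong lobe (a common textbook model).
    params = (k_d, k_s, n); cos_r = cosine of the angle between the
    mirror-reflection direction and the viewing direction."""
    k_d, k_s, n = params
    specular = k_s * (n + 2.0) / (2.0 * np.pi) * np.clip(cos_r, 0.0, 1.0) ** n
    return k_d / np.pi + specular

# Synthetic stand-in for gonioreflectometer measurements.
rng = np.random.default_rng(0)
cos_r = rng.uniform(0.0, 1.0, 200)
true_params = (0.6, 0.3, 40.0)
measured = phong_brdf(true_params, cos_r) + rng.normal(0.0, 0.005, cos_r.size)

def residuals(params):
    return phong_brdf(params, cos_r) - measured

# Fit (k_d, k_s, n) to the samples by nonlinear least squares.
fit = least_squares(residuals, x0=(0.5, 0.5, 10.0),
                    bounds=([0.0, 0.0, 1.0], [1.0, 1.0, 1000.0]))
print(fit.x)  # recovered parameters, close to the true values
```

Once fitted, such a compact parametric description can be evaluated for arbitrary light and view directions inside a renderer, which is what makes measured-BRDF simulation practical.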
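The conversion between real-world camera parameters and computer graphics camera parameters mentioned above can likewise be illustrated with a small example: under a pinhole model, physical focal length and sensor height determine the vertical field-of-view angle that most graphics APIs expect. This is a hedged sketch under pinhole assumptions, not the chapter's actual conversion algorithms, and the function name is ours.

```python
import math

def focal_length_to_fov_y(focal_length_mm: float, sensor_height_mm: float) -> float:
    """Convert a physical focal length and sensor height to the vertical
    field-of-view angle (degrees) of a pinhole graphics camera.

    Assumes an ideal pinhole model with the sensor centered on the
    optical axis; real conversions may also account for distortion
    and principal-point offsets."""
    fov_y_rad = 2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm))
    return math.degrees(fov_y_rad)

# Example: a 16 mm lens on a sensor 6.6 mm high -> roughly 23 degrees
print(focal_length_to_fov_y(16.0, 6.6))
```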
Our work shows that modern computer graphics hardware and algorithms, combined with measured properties of materials and light sources, can create synthetic images that are very helpful for the design and optimization of machine vision systems.

In the second part of this chapter we show how reflectance modeling is used in the analysis of illumination series. Illumination series are obtained by imaging objects under varying illumination directions, and they provide the input for various machine vision algorithms such as photometric stereo and algorithms for image fusion. Since illumination series contain much more information about the reflectance properties of the illuminated surface than a single intensity image, they facilitate image processing and analysis. For instance, when parametric models are fitted to an illumination series, the fitted model parameters represent meaningful features of the reflectance data. We show how these features extracted from illumination series can be used for material-based segmentation. However, capturing a large number of images with densely sampled illumination directions is time-consuming and impractical for industrial applications. Images obtained from similar illumination directions are highly correlated, so the image series contains a large amount of redundant information. It is therefore desirable to determine the illumination directions relevant to a given task in order to accelerate image acquisition and to reduce the computational cost of data analysis. By fitting parametric models to the illumination series, the uncertainty in the estimated model parameters can be used to determine advantageous illumination directions. We show how the uncertainty in a parametric reflectance model can be used for efficient acquisition planning of illumination series.
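Photometric stereo, named above as one consumer of illumination series, recovers per-pixel surface orientation from images taken under different known light directions. A minimal Lambertian sketch, assuming at least three images, known unit light directions, and no shadows or interreflections (function name and interface are our own):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo.

    images:     array of shape (k, h, w), one image per light direction
    light_dirs: array of shape (k, 3), unit light direction per image
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w)
    # Each pixel obeys I = L @ g with g = albedo * normal;
    # solve all pixels at once in the least-squares sense.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

Because each pixel's stacked intensities form a linear system in the albedo-scaled normal, one least-squares solve over the whole series suffices; the fitted vector is itself an example of model parameters serving as reflectance features.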
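The uncertainty-driven acquisition planning described in the last sentences can be made concrete in a linear special case. For a Lambertian model the measurement is linear in the albedo-scaled normal, so the covariance of the least-squares parameter estimate from light directions L is proportional to (L^T L)^(-1); greedily adding the candidate direction that minimizes the trace of this covariance is a classical A-optimal design heuristic. The sketch below is our own simplification for illustration, not the chapter's algorithm.

```python
import numpy as np

def greedy_illumination_planning(candidates, n_select):
    """Greedily pick light directions that minimize the trace of the
    parameter covariance (L^T L)^(-1) of a linear Lambertian fit.

    candidates: (m, 3) array of unit candidate light directions
    n_select:   number of directions to pick (>= 3 for a proper fit)
    """
    selected = []
    for _ in range(n_select):
        best_idx, best_score = None, np.inf
        for i in range(len(candidates)):
            if i in selected:
                continue
            L = candidates[selected + [i]]
            # Tiny regularizer keeps the matrix invertible early on.
            M = L.T @ L + 1e-9 * np.eye(3)
            score = np.trace(np.linalg.inv(M))
            if score < best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return candidates[np.array(selected)]

# Example: pick 4 directions from a hemisphere grid of candidates.
th = np.linspace(0.1, np.pi / 2, 6)
ph = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
T, P = np.meshgrid(th, ph)
cands = np.stack([np.sin(T) * np.cos(P),
                  np.sin(T) * np.sin(P),
                  np.cos(T)], axis=-1).reshape(-1, 3)
print(greedy_illumination_planning(cands, 4))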

Cite (APA)

Gruna, R., & Irgenfried, S. (2012). Reflectance Modeling in Machine Vision: Applications in Image Analysis and Synthesis. In Machine Vision - Applications and Systems. InTech. https://doi.org/10.5772/26554
