Object Recognition Using Colour Indexing
One of the reasons humans might need colour constancy is for object recognition. Colour is an obvious distinguishing feature of an object, but without colour descriptors that remain independent of the colour of the incident illumination, colour will be of limited use. We cannot expect to recognize an object based on its collection of colours if those colours vary dramatically with the illumination.
Although colour can be an important feature of an object, it was not until Swain and Ballard's [SWAIN91] Colour Indexing (CI) work that it was shown how useful colour can be in object identification. Their method identifies an object by comparing its colour histogram to the colour histograms of known objects stored in a database. A colour histogram describes an object by the amount of image area each of its colours occupies. An object matches a known model in the database when its distribution of colour areas resembles that of the model. Swain and Ballard's method works well even though it ignores all geometric and shape information. Recognition based only on colour makes the method invariant to object rotation, translation and deformation.
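As a minimal sketch of this idea, the following compares a normalized 3-D colour histogram against a small model database using histogram intersection, the match measure Swain and Ballard proposed. The bin count, toy images and database names here are illustrative choices, not values from the original work.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """3-D colour histogram: fraction of image area in each (R, G, B) bin.
    `image` is an (H, W, 3) array of values in [0, 255]."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()  # normalise so bin areas sum to 1

def histogram_intersection(h_image, h_model):
    """Swain-Ballard match score: sum of bin-wise minima (1.0 = perfect)."""
    return np.minimum(h_image, h_model).sum()

# Toy example: match an unknown image against a two-model database.
rng = np.random.default_rng(0)
red_object  = rng.integers(180, 256, (32, 32, 3)) * np.array([1.0, 0.2, 0.2])
blue_object = rng.integers(180, 256, (32, 32, 3)) * np.array([0.2, 0.2, 1.0])
database = {"red": colour_histogram(red_object),
            "blue": colour_histogram(blue_object)}

unknown = colour_histogram(red_object + rng.normal(0, 5, (32, 32, 3)))
best = max(database, key=lambda name: histogram_intersection(unknown, database[name]))
print(best)  # the red model wins despite the added pixel noise
```

Note that nothing in the matching step refers to pixel positions, which is why the method is invariant to rotation, translation and deformation.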
Swain and Ballard's colour indexing method does have one significant failing: it is sensitive to changes in the colour of the incident illumination. When the illumination changes, all the colours in the image change, and they change enough that the colour histograms of an object and its model in the database no longer match. SFU Computational Vision Lab members Funt and Finlayson [FUNT95] developed a method called Colour Constant Colour Indexing (CCCI) which uses relative colour rather than absolute colour for indexing. In CCCI, relative colour is defined by the ratio of colours at nearby pixels. Relative colours are quite stable under changes in illumination: since nearby pixels are lit by essentially the same illumination, the ratio of their colours factors out most (but not all) of the illumination's effect.

The response of a sensor (whether in a camera or the eye) is described by

\rho_k = \int E(\lambda)\, S(\lambda)\, R_k(\lambda)\, d\lambda, \quad (k = 1, \ldots, 3)   (1)

where \rho_k is the response of the kth sensor class, E(\lambda) is the spectrum of the incident illumination, S(\lambda) is the percent spectral reflectance of the surface, and R_k(\lambda) is the relative sensitivity function of the kth sensor class. For two nearby pixels, A and B, the ratio of responses for the kth sensor type at the two locations is

\frac{\rho_k^A}{\rho_k^B} = \frac{\int E(\lambda)\, S^A(\lambda)\, R_k(\lambda)\, d\lambda}{\int E(\lambda)\, S^B(\lambda)\, R_k(\lambda)\, d\lambda}   (2)

where S^A(\lambda) and S^B(\lambda) represent the surface reflectances at the scene points corresponding to pixels A and B. While it is tempting to simplify equation (2) by cancelling E(\lambda) from numerator and denominator, we cannot legitimately do so in general, since E(\lambda) appears inside the integrals. Experimentally, however, such ratios are found to be quite illumination independent. One way E(\lambda) could be cancelled is if the sensor sensitivity function were extremely narrowband (i.e., a Dirac delta, R_k(\lambda) = \delta(\lambda - \lambda_k)), so that its response is non-zero only at a single wavelength \lambda_k. By the sifting theorem, (2) then becomes

\frac{\rho_k^A}{\rho_k^B} = \frac{E(\lambda_k)\, S^A(\lambda_k)}{E(\lambda_k)\, S^B(\lambda_k)} = \frac{S^A(\lambda_k)}{S^B(\lambda_k)}   (3)

in which case the illuminant's effect cancels out and is eliminated.
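A rough numerical check of this argument can be made by evaluating equation (1) discretely for a narrowband and a broadband sensor under two different illuminants. The Gaussian spectra below are invented for illustration, not measured sensor or illuminant data.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 301)  # nm, 1 nm spacing

def response(E, S, R):
    """Discrete version of equation (1): rho_k = sum over lambda of E*S*R_k.
    (The constant d-lambda factor cancels in any ratio of responses.)"""
    return (E * S * R).sum()

def gaussian(centre, width):
    return np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)

# Two hypothetical illuminants and two surface reflectances (illustrative)
E_bluish  = 1.0 + 0.002 * (700 - wavelengths)  # more power at short wavelengths
E_reddish = 1.0 + 0.002 * (wavelengths - 400)  # more power at long wavelengths
S_A = 0.2 + 0.6 * gaussian(600, 80)            # reflectance at pixel A
S_B = 0.4 + 0.3 * gaussian(500, 60)            # reflectance at pixel B

spread = {}
for name, width in (("broadband", 80.0), ("narrowband", 2.0)):
    R_k = gaussian(550, width)  # sensitivity of the kth sensor class
    r_blue = response(E_bluish,  S_A, R_k) / response(E_bluish,  S_B, R_k)
    r_red  = response(E_reddish, S_A, R_k) / response(E_reddish, S_B, R_k)
    spread[name] = abs(r_blue - r_red)
    print(f"{name}: ratio shifts by {spread[name]:.5f} between illuminants")
```

For the broadband sensor the ratio drifts when the illuminant changes; for the nearly delta-like sensor it stays essentially fixed at S_A(550)/S_B(550), as equation (3) predicts.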
The observation that narrowband sensors yield illumination-independent ratios led Finlayson et al. [FINLAY94a] to consider the general conditions that lead to stable ratios; see the Sensor Sharpening section for a discussion. Assuming that the sensors are sufficiently narrowband for (3) to hold, the ratio at neighbouring pixels will be illumination independent so long as the illumination does not vary spatially between the two pixels. The CCCI algorithm uses these illumination-independent ratios, which represent relative colour, in place of Swain and Ballard's absolute colours: it histograms colour ratios instead of absolute colours and then matches ratio histograms instead of colour histograms. In other words, in CCCI the R,G,B colour triplets are replaced by the ratio triplets RA/RB, GA/GB, BA/BB, which are quite stable with respect to changes in illumination colour. These ratio triplets are histogrammed and compared with the ratio histograms stored in the database of model objects. For computational efficiency, CCCI computes its ratios by applying a finite-difference approximation of the Laplacian to the logarithm of the image, which is equivalent to taking ratios of neighbouring pixels. In object recognition accuracy, CCCI's ratio-histogram results were comparable to those of absolute colour histogram matching. When the illumination was varied, CCCI continued to perform well, whereas regular colour indexing failed completely [FUNT95].
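The key identity behind this trick is that a finite difference of the log image equals the log of the ratio of neighbouring pixels, and a constant per-channel illuminant scaling drops out of that difference. The sketch below demonstrates this with a simple horizontal difference rather than the full Laplacian used by CCCI; the scene and illuminant values are invented for illustration.

```python
import numpy as np

def log_ratio_channels(image):
    """Per-channel log of horizontal neighbour ratios, computed as a
    finite difference of the log image: log p[x+1] - log p[x].
    Assumes strictly positive pixel values."""
    log_img = np.log(image)
    return np.diff(log_img, axis=1)

# Toy scene: the same reflectances viewed under two illuminant colours,
# modelled as a per-channel scaling (the narrowband-sensor assumption).
rng = np.random.default_rng(1)
reflectance = rng.uniform(0.1, 1.0, (16, 16, 3))
illum1 = np.array([1.0, 0.8, 0.6])  # illustrative illuminant scalings
illum2 = np.array([0.5, 0.9, 1.2])

ratios1 = log_ratio_channels(reflectance * illum1)
ratios2 = log_ratio_channels(reflectance * illum2)
print(np.allclose(ratios1, ratios2))  # True: the illuminant cancels
```

Because log(E * S_A) - log(E * S_B) = log(S_A) - log(S_B), the ratio features are identical under both illuminants, which is exactly why histograms of these ratios still match after an illumination change.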
Computational Vision Lab, Computing Science, Simon Fraser University, Burnaby, BC, Canada, V5A 1S6
Fax: (778) 782-3045  Tel: (778) 782-4717  Email: colour@cs.sfu.ca  Office: ASB 10865, SFU