Centre de Visió per Computador - Universitat Autònoma de Barcelona


The Barcelona Calibrated Images Database


Why a device-independent dataset?

Almost all current image databases in Computer Vision are determined by the imaging device that gathered their component pictures. There are, of course, very good calibrated databases of hyperspectral images, but these tend to be difficult and time-consuming to produce and consequently usually consist of just a few examples. Other databases are more specialised (they are defined in some specific colour space) or consist of some special kind of imagery (such as man-made objects under specific illuminants). This lack of a device-independent, all-purpose and open database becomes an important drawback when we need very accurate, statistically representative depictions of the colours present in ordinary images.

Our dataset of calibrated images seeks to close this gap. It contains two main improvements over previous databases: first, it has been gathered using a calibrated camera (whose calibration process is explained on this website) and second, it includes a grey ball of known reflectance in the lower-left corner of each image. By calibrating the camera we aim to define the chromatic properties of the light hitting each sensor unit (pixel) in a device-independent colour space such as CIE 1931 XYZ, and by adding the ball we expect the user to be able to infer the chromatic characteristics of the illuminant at the moment each picture was taken (so the dataset can be used, for example, in colour constancy or surface reflectance studies). The grey ball is attached to the camera by a metallic pole, which allows us to vary the position and apparent size of the ball in the picture. We experimented with these parameters to position the ball so that it samples the light falling over the whole of the scene.
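As a minimal sketch of how the ball can be used, the snippet below estimates the illuminant chromaticity from the pixels covering the grey ball in a calibrated image. It is not part of the dataset's tooling: it assumes you have already loaded an image as an H x W x 3 NumPy array of CIE 1931 XYZ values and that you select the ball region yourself (the example coordinates are placeholders).

    import numpy as np

    def illuminant_chromaticity(xyz_image, ball_rows, ball_cols):
        """Return the CIE (x, y) chromaticity of the light reflected by the ball.

        xyz_image : ndarray, shape (H, W, 3), calibrated CIE 1931 XYZ values
        ball_rows, ball_cols : slices selecting the grey-ball region
        """
        patch = xyz_image[ball_rows, ball_cols, :].reshape(-1, 3)
        X, Y, Z = patch.mean(axis=0)          # mean tristimulus over the ball
        total = X + Y + Z
        return X / total, Y / total           # chromaticity coordinates (x, y)

    # Example call (hypothetical coordinates; the ball sits in the lower-left corner):
    # x, y = illuminant_chromaticity(img_xyz, slice(900, 1000), slice(50, 150))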

Why a ball and not a card? Most applications that require a sample of the illuminant use a flat grey card (such as the standard Kodak grey card or similar). There are two main problems with this: first, the card has to face the illuminant in every circumstance, which is cumbersome and time-consuming; and second, it reinforces the illusion of a perfectly uniform illuminant. We know that in most circumstances, and certainly in natural conditions, part of the illumination comes from the sun and part comes from the sky or through inter-reflections (mostly in shaded areas). Having a ball does not solve this issue, but at least it tells us when differences in the illuminant's chromaticity become significant.

A grey ball to estimate the chromaticity of the illuminant

Ball & PaintFor the grey ball to fulfil its purpose it should have certain characteristics:

a) it has to be painted a relatively dark colour (so that it does not saturate under sunlight)
b) it has to reflect light in a spectrally uniform manner within the camera sensor's spectral sensitivity range (420-670 nm)
c) it should not produce strong specularities, even under sunlight (i.e. its reflection should be mostly "diffuse" or "Lambertian")
d) its colour should last for at least one photographic session
e) it should be relatively easy to replace once its colour has faded (i.e. cheap to make)

Taking all these conditions and constraints into account, we built a grey ball from a plastic sphere 35 mm in diameter and fixed it at the end of a supporting rod (about 475 mm away from the camera). The ball was painted with several thin coats of Revell matt dust grey (RAL 7012), which has an albedo of 0.18 (see the spectral reflectance below). Click here to download the values in Excel format.
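If you download the spreadsheet, a quick sanity check of the paint's flatness and albedo within the camera's sensitivity range might look like the sketch below. The filename and the assumption of two columns (wavelength in nm, reflectance) are placeholders; adjust them to match the actual file.

    import pandas as pd

    refl = pd.read_excel("ball_spectral_reflectance.xls")   # hypothetical filename
    wl, r = refl.iloc[:, 0], refl.iloc[:, 1]                # wavelength, reflectance

    # Restrict to the camera sensor's range (420-670 nm) and check that the paint
    # is roughly spectrally flat and close to the nominal 0.18 albedo.
    in_range = (wl >= 420) & (wl <= 670)
    print("mean reflectance:", r[in_range].mean())
    print("peak-to-peak variation:", r[in_range].max() - r[in_range].min())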

[Figure: spectral reflectance of the grey ball paint]

The Dataset

A table linking to the actual database is shown below. It is also possible to inspect each image by clicking on the figures at the end of this page; these individual images are for inspection purposes only (the .jpg files are not calibrated!). To download the calibrated imagery, download the large .zip files from the table and decompress them. Each archive contains two directories: one with the uncalibrated pictures (called pics) and another with Matlab (.mat) files containing the individual calibrated images.
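A minimal loading sketch is shown below, assuming you have decompressed one of the .zip archives and are working in Python rather than Matlab. The path is a placeholder, and the variable names stored inside each .mat file are not documented here, so the code simply lists whatever the file contains.

    from scipy.io import loadmat

    data = loadmat("calibrated/some_image.mat")   # hypothetical path
    arrays = {k: v for k, v in data.items() if not k.startswith("__")}
    for name, arr in arrays.items():
        print(name, arr.shape, arr.dtype)         # e.g. an H x W x 3 XYZ or LMS image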

Calibrated Images database: click on the dataset you want to download.
WARNING: some individual files are larger than 1 GB.

Urban scenery: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Forest / Motorways: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Snow & Seaside: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Natural objects 01: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Natural objects 02: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Natural objects 03: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
Natural objects 04: CIE 1931 XYZ | Stockman & Sharpe (2000) LMS | Smith & Pokorny (1975) LMS | Uncalibrated JPG
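The table already provides precomputed LMS versions, so the following is purely illustrative: if you start from the CIE 1931 XYZ images, approximate Smith & Pokorny (1975) cone activations can be obtained with a 3x3 linear transform. The matrix below is one commonly cited version (strictly defined with respect to Judd-modified XYZ), so treat the result as an approximation rather than a reproduction of the dataset's own LMS files.

    import numpy as np

    # One commonly cited XYZ -> Smith & Pokorny LMS matrix (approximation)
    XYZ_TO_SP_LMS = np.array([
        [ 0.15514,  0.54312, -0.03286],   # L
        [-0.15514,  0.45684,  0.03286],   # M
        [ 0.0,      0.0,      0.01608],   # S
    ])

    def xyz_to_lms(xyz_image):
        """Map an H x W x 3 XYZ image to approximate Smith & Pokorny LMS."""
        return xyz_image @ XYZ_TO_SP_LMS.T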

Information about all pictures' original headers (in Matlab .mat format) can be downloaded from here.

References (please cite the most appropriate reference when using these images):

Parraga, C. A., Baldrich, R. & Vanrell, M. Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset. in CGIV 2010/MCS'10 - 5th European Conference on Colour in Graphics, Imaging, and Vision - 12th International Symposium on Multispectral Colour Science. (Society for Imaging Science and Technology ).

Parraga, C. A., Vazquez-Corral, J., & Vanrell, M. (2009). A new cone activation-based natural images dataset. Perception, 36(Suppl), 180.

Vazquez-Corral, J., Parraga, C. A., Vanrell, M., & Baldrich, R. (2009). Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset. Journal of Imaging Science and Technology, 53(3), 0311051-0311059.

Databases available here (click on a picture to inspect the corresponding database):

Urban Scenery

Forest & Motorways

Snow & Seaside

Naturalistic 01

Naturalistic 02

Naturalistic 03

Naturalistic 04