The Citadel Paint App: spectral reflectance (and a little machine learning)

Games Workshop (GW) recently released their revamped Citadel paint app and, overall, it appears to be a good little app. However, what really caught my attention was the feature whereby the app analyzes a photo and tries to match the colours in the photo to the Citadel paint range. How they are doing this is quite possibly magic… But my other guess is that they are using a classic technique that is fundamental to geospatial science and remote sensing of the Earth (spectral analysis), and probably are not using, but probably should use, a technique whose applications are growing all the time across all fields of science and life (machine learning).

The Citadel Paint App: all rights reserved to Games Workshop©. I do not own any of the assets depicted here nor have I worked on this app or for Games Workshop in any capacity. This article is for educational purposes only. Here we look at the camera function of the ‘Paint by Colour’ portion of the app (dash circled in red).

Spectral analysis is the use of the spectral reflectance of a surface to find out something interesting about that surface. Spectral reflectance is the energy of the sun (or another light source) that hits an object and is then bounced back to your eye, or in this case, your camera. The amount of energy bounced back differs depending on the object's chemical structure. A camera, your eye, or the eye of a family pet: they are all detecting the spectral reflectance of an object in the visible portion of the electromagnetic spectrum and turning this information into the picture or image you see before you.

The electromagnetic spectrum; here we are concerned with the visible portion of the spectrum and its various wavelengths (image from Wikimedia Commons).

The picture on your phone is expressed as a combination of red, green and blue (RGB), with the RGB information recorded on a scale of values from 0 to 255. A picture of this sort is what is known as a ‘raster’ in the computing world. A raster is essentially a grid of numbers that the computer turns into colours, and therefore an image. Thus, in summary, your camera is a sensor that detects three distinct bands of wavelengths of light and stores that detected information in a raster.
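
To make the idea of a raster concrete, here is a minimal sketch in Python (using NumPy, purely for illustration): a tiny image is just a grid of numbers, three per pixel, that the computer interprets as colours.

```python
import numpy as np

# A toy raster: a 4 x 4 pixel image stored as a grid of red, green and blue
# values, each on a 0-255 scale.
image = np.zeros((4, 4, 3), dtype=np.uint8)

# Paint the top-left pixel a strong red and the bottom-right a mid green.
image[0, 0] = [200, 30, 30]   # [R, G, B]
image[3, 3] = [40, 160, 60]

# The "picture" your phone stores is just this grid of numbers; reading one
# pixel back gives you its RGB triplet.
print(image[0, 0])   # -> [200  30  30]
```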

How does this relate to the Citadel Paint app? Well, different coloured paints, as we perceive them, are chemicals that reflect the energy of light in different ways. The Citadel Paint app seems to be using the spectral signature of this reflectance to try to match the colours the camera is seeing to the recorded spectral signatures of their various paint combinations. A spectral signature is in effect the ‘energy fingerprint’ of a colour; if you have a record of that fingerprint, it is a relatively simple task to match the values you see to the values in your database. In geospatial science, we use this to do things like tell the difference between water and grass. GW is perhaps doing the same basic thing here, but looking for a colour and matching it to the digital colour signature (the combination of RGB values) stored in a database; a toy sketch of such a lookup follows the figure below.

Surface discrimination using visible and near- to mid-infrared wavelengths (image from the SEOS-project.eu).
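
As a rough illustration of how such a lookup could work, here is a toy sketch in Python. The paint names are real, but the RGB values and the Euclidean-distance matching are my own made-up assumptions, not GW's actual database or method.

```python
import numpy as np

# Hypothetical paint "database": colour signatures expressed as RGB triplets.
# These values are invented for illustration, not taken from GW.
paints = {
    "Mephiston Red":   (154, 24, 30),
    "Caliban Green":   (0, 63, 43),
    "Macragge Blue":   (15, 58, 122),
    "Averland Sunset": (253, 182, 37),
}

def closest_paint(rgb):
    """Return the paint whose stored signature is nearest in RGB space."""
    target = np.array(rgb, dtype=float)
    return min(paints, key=lambda name: np.linalg.norm(target - np.array(paints[name])))

print(closest_paint((160, 30, 40)))   # -> Mephiston Red
```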

However, this colour-matching task is not so simple once it meets reality, as is the case with so many things in life! The two biggest reality hurdles I can see to using the spectral signature from a mobile phone camera are: i) how does the app know which portion of the image is the model you want to know the colour of? And ii) the different light conditions that people will be taking pictures of their models in.

The first of these problems is where the machine-learning portion of this article’s title comes in. I will cover machine learning and its application to this sort of problem in more detail in a later article, given the complexity of the topic. So for now, let us just consider the broad outline of the problem: if I take a picture of a model… what part of the raster is actually the colours that I want to match?

Judging by the results of some initial experiments with the app, they have taken an approach based on a sample area around the point where the user taps the screen with their finger. Whilst this works reasonably well, quite often the colours returned are those of a dominant background, or even a blend of the colours near the finger strike (or indeed stylus strike). In the basic colour-target experiment, a number of finger strikes were sometimes needed to get the app to select the target colour correctly and stop blending the pink and the green into a mauve shade (for example). This suggests that the sample area around the detected finger strike is quite large and/or imprecise; a sketch of what such a sampler might look like follows the figure below. Also of interest: despite detecting a colour, the app would sometimes fail to return any matching paints from the range. This last point is possibly because the strong, digitally produced test-card colours do not line up with the colour combinations stored in their database.

Simple colour-match test on an SMPTE colour bar (test card from Wikipedia.com). Each sample point was selected three times by stylus and three times by finger, taking the most common (majority) result. Note that point B failed to match any colours at all, despite managing to detect a colour.
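
My guess at what a tap-based sampler might look like is sketched below: average the pixels in a small window centred on the tap. The window size is an assumption, but the sketch shows why a pink target and a green background can blend into mauve when the window straddles both.

```python
import numpy as np

def sample_colour(image, tap_row, tap_col, half_window=10):
    """Average the RGB values in a square window centred on the tap point.

    A large window is forgiving of a shaky finger, but it will happily blend a
    pink target with a green background into a mauve average, which is the
    behaviour seen in the test-card experiment.
    """
    r0 = max(tap_row - half_window, 0)
    r1 = min(tap_row + half_window + 1, image.shape[0])
    c0 = max(tap_col - half_window, 0)
    c1 = min(tap_col + half_window + 1, image.shape[1])
    window = image[r0:r1, c0:c1].reshape(-1, 3)
    return window.mean(axis=0)
```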

How could this colour-selection problem be improved? The answer may lie in the application of machine learning. GW could teach the app to ‘learn’ what a model looks like compared to the typical backgrounds people photograph their miniatures against. Given the vast number of hobbyists out there, and therefore the big data they would generate, I suspect the app could be taught to do this pretty well. This would allow it both to detect where in a photo the model is and then to segment the model into its different colours for spectral analysis, thereby removing my clumsy fingers from the process altogether.
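
To give a flavour of the ‘segment into different colours’ step, here is a sketch that clusters the pixels flagged as ‘model’ into a handful of dominant colours using k-means (scikit-learn assumed). The mask itself would come from a trained segmentation model, which is the hard part and is simply assumed to exist here.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(image, model_mask, n_colours=5):
    """Cluster the pixels marked as 'model' into dominant colours.

    `model_mask` is a boolean array with the same height and width as `image`,
    the kind of output a trained segmentation network would produce.
    """
    pixels = image[model_mask].astype(float)          # (N, 3) array of RGB values
    kmeans = KMeans(n_clusters=n_colours, n_init=10).fit(pixels)
    # Each cluster centre is an RGB triplet to match against the paint database.
    return kmeans.cluster_centers_
```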

Further testing of the app in terms of specific colour detection shows that, in well-controlled light conditions, it does a good job of matching GW colours to colour-palette suggestions that include said colour within them, although the blue was a bit of a struggle. This generally good performance is as expected, given that they have likely set the app up based on the spectral signatures of their paints in combination. The app also does a fair job of matching random colours to a close shade in the paint range in good lighting conditions. It is worth noting here that, even with the paint pot filling much of the image, more often than not it was the desk colour that was analyzed.

Paint-pot colour test. The frequency with which the desk colour was picked up surprised me, given how dominant the pot is in the picture. It might also be the case that they are taking some mean value across the image to determine what to match.

However, in poorly controlled light conditions, the performance of the colour matching degrades rapidly. The app struggles to cope with the two extremes of lighting, dull and bright. In dull light the returns are all drab shades, even if the targeted colours are bright, whereas in bright conditions the returns are all white or very bright highlight colours. This indicates that they have not controlled for the relative brightness/saturation of a picture and that the app is therefore analyzing only the raw image. A case of what you see is what you get!
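
One simple pre-processing step that might help is a grey-world style correction: rescale each channel so the average colour of the picture is a neutral grey before matching. This is a crude sketch of the idea, not a claim about what the app actually does.

```python
import numpy as np

def grey_world_normalise(image):
    """Rescale each RGB channel so the image's mean colour becomes neutral grey.

    A crude correction for dull or overly bright lighting; real colour-constancy
    methods are more sophisticated, but the point is to avoid matching against
    the raw, uncorrected image.
    """
    img = image.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    grey = channel_means.mean()
    corrected = img * (grey / channel_means)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```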

Overall, given the likely development budget, the range of users’ phones and the general focus of the app, the colour-matching function is good. It uses a simple principle in a fun new way that could be genuinely useful to some people. Forgotten what paints you used to paint that mini with? No longer a problem! However, with machine learning and pre-processing of the input raster, it could go from good to brilliant. Anyone fancy coding up a better solution?

Disclaimer: I have guessed a lot of what the app might be doing here, based on the physical principles I use in my field of science. I could of course be entirely wrong!