However, this colour matching task is not so simple once it meets reality, as is the case with so many things in life! The two biggest reality hurdles I can see to using the spectral signature from a mobile phone camera are: i) how does the app know which portion of the image is the model whose colour you want to match? And ii) the varying light conditions under which people will be photographing their models.
The first of these problems is where the machine-learning portion of this article's title comes in. Machine learning and its application to this sort of problem is a complex topic, so I will cover it in more detail in a later article. For now, let us just consider the broad outline of the problem: if I take a picture of a model… what part of the raster actually contains the colours that I want to match?
Judging by the results of some initial experiments with the app, the developers have taken an approach based upon a sample area around the point where the user hits the screen with their finger. Whilst this works reasonably well, quite often the colours returned will be those of a dominant background, or even a blend of the colours near the finger strike (or indeed stylus strike). In the basic colour target experiment, a number of finger strikes were sometimes needed to get the app to select the target colour correctly and stop blending the pink and the green into a mauve shade (for example). This suggests that the sample area is quite large and/or inaccurate around the point at which the finger strike is detected. Also of interest, despite detecting a colour the app would sometimes fail to return matching paint range colours. This final point is possibly because the strong, digitally produced test-card colours do not line up with the colour combinations stored in the app's database.
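To make these two failure modes concrete, here is a minimal sketch of how such a tap-to-sample feature might behave. This is purely illustrative Python with NumPy, not the app's actual code: the function names (`sample_patch`, `dominant_colour`, `match_paint`), the patch radius, the distance cut-off, and the toy paint database entries are all my own assumptions. It contrasts a naive average of the sampled patch (which blends pink and green into a mauve) with a dominant-colour estimate, and shows how a nearest-match cut-off could return no paint suggestion at all.

```python
import numpy as np

def sample_patch(image, x, y, radius=20):
    """Pixels in a square patch around a tap point (x, y).
    `image` is an (H, W, 3) uint8 RGB array; the patch is clipped to bounds."""
    h, w, _ = image.shape
    x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
    y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
    return image[y0:y1, x0:x1].reshape(-1, 3)

def mean_colour(pixels):
    """Naive average: blends foreground and background (pink + green -> mauve)."""
    return pixels.mean(axis=0)

def dominant_colour(pixels, bins=8):
    """Most populous coarse colour bin: resists blending, at some cost in precision."""
    quantised = (pixels // (256 // bins)).astype(int)
    codes = quantised[:, 0] * bins * bins + quantised[:, 1] * bins + quantised[:, 2]
    winner = np.bincount(codes).argmax()
    return pixels[codes == winner].mean(axis=0)  # average only the winning bin

def match_paint(colour, paint_db, max_distance=60.0):
    """Nearest paint by Euclidean RGB distance; returns None when nothing is
    close enough, which may be why strong digital colours come back empty."""
    best_name, best_dist = None, max_distance
    for name, rgb in paint_db.items():
        dist = float(np.linalg.norm(colour - np.asarray(rgb, dtype=float)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

if __name__ == "__main__":
    # Synthetic test card: green background with a pink target patch.
    photo = np.zeros((200, 200, 3), dtype=np.uint8)
    photo[:, :] = (60, 160, 60)               # green background
    photo[80:120, 80:120] = (230, 105, 180)   # pink target

    # Hypothetical paint database (the RGB values here are made up).
    paint_db = {"Pink Horror": (220, 110, 175), "Goblin Green": (70, 150, 70)}

    patch = sample_patch(photo, 100, 100, radius=25)  # tap on the pink target
    print("mean:    ", mean_colour(patch))            # a mauve-ish blend
    print("dominant:", dominant_colour(patch))        # close to the pink
    print("match on blend:   ", match_paint(mean_colour(patch), paint_db))      # None
    print("match on dominant:", match_paint(dominant_colour(patch), paint_db))  # Pink Horror
```

Run against that synthetic test card, the blended mean lands between the pink and the green, far enough from every database entry that the lookup returns nothing, whereas the dominant-colour estimate matches cleanly. If the real app is averaging over too wide a patch, that would explain both the mauve shades and the occasional empty result; a mode- or cluster-based estimate like the one sketched above is one plausible fix.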