A while ago, I wrote this blog post on creating and comparing UIImages. That code allowed me to develop the image processing part of the app against my unit tests, which was really, really helpful given that I rewrote it about four times to make it performant enough.
So, when I started writing Android code it was one of the first things I ported. Firstly let me say – way easier on Android than iOS. One tiny API difference was a gotcha: iOS takes arguments x, y, width, height, while the comparable function on Android takes x1, y1, x2, y2. But other than that it was much more straightforward, basically because of how much easier it is to delve into the pixels.
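To illustrate the gotcha, here's a sketch of the same region expressed both ways (the comment shows the iOS call; the Kotlin line is Android's `Rect`, whose constructor takes the far edges rather than a size):

```kotlin
import android.graphics.Rect

// iOS:     CGRect(x: 10, y: 10, width: 100, height: 100)
// Android: Rect(left, top, right, bottom) – the last two are coordinates, not a size.
val rect = Rect(10, 10, 110, 110)  // covers the same 100×100 region as the CGRect above
```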
To create a one color image, you can just set the color and draw to the canvas:
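A minimal sketch of that (the helper name `monochromeImage` is my own, not from the original code):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Create a bitmap of the given size filled with a single color
// by wrapping it in a Canvas and drawing the color over it.
fun monochromeImage(width: Int, height: Int, color: Int): Bitmap {
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    canvas.drawColor(color)
    return bitmap
}
```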
If you want to set individual pixels, you can just call setPixel(). So to create a 3×3 two-color-alternating image (I find this a really useful test image):
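Something along these lines (again, the helper name is mine; note that with fully transparent colors ARGB_8888's premultiplied storage can discard the RGB channels, so opaque test colors are safest):

```kotlin
import android.graphics.Bitmap

// Build a size×size test image whose two colors alternate in a
// checkerboard pattern, by setting each pixel individually.
fun alternatingImage(size: Int, colorA: Int, colorB: Int): Bitmap {
    val bitmap = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888)
    for (y in 0 until size) {
        for (x in 0 until size) {
            bitmap.setPixel(x, y, if ((x + y) % 2 == 0) colorA else colorB)
        }
    }
    return bitmap
}
```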
This is similar to the way we can create an image from an array of colors:
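For the array case, `Bitmap.createBitmap` has an overload that takes an `IntArray` of ARGB colors directly – roughly:

```kotlin
import android.graphics.Bitmap

// Build a bitmap straight from a row-major array of ARGB color ints.
// The array must contain at least width * height entries.
fun imageFromColors(colors: IntArray, width: Int, height: Int): Bitmap =
    Bitmap.createBitmap(colors, width, height, Bitmap.Config.ARGB_8888)
```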
On iOS creating from an array was sufficiently complicated that I felt like the simpler creation methods were also worthwhile. On Android I’m less certain! I may refactor them to just call the array function.
Now that we have made our test images, we need to be able to compare them. As before, I’m defining two images as the same iff (if and only if) they have the same width and height, and their pixels are the same colors. For now I’m doing an exact comparison, but I may add some kind of tolerance here as the code evolves.
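A sketch of that comparison, assuming a helper name of my own choosing – it pulls both pixel buffers out with `getPixels` and compares them directly (for the exact case, `Bitmap.sameAs` could do much the same job):

```kotlin
import android.graphics.Bitmap

// Two bitmaps are "the same" iff their dimensions match and
// every pixel has exactly the same color.
fun sameImage(a: Bitmap, b: Bitmap): Boolean {
    if (a.width != b.width || a.height != b.height) return false
    val pixelsA = IntArray(a.width * a.height)
    val pixelsB = IntArray(b.width * b.height)
    a.getPixels(pixelsA, 0, a.width, 0, 0, a.width, a.height)
    b.getPixels(pixelsB, 0, b.width, 0, 0, b.width, b.height)
    return pixelsA.contentEquals(pixelsB)
}
```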
These helper methods have been a really important part of my testing strategy on both platforms – the image processing is the core of the app, and I want to be sure it works really well.