To find tile boundaries, I originally used a hue, saturation and value (HSV) test to decide whether a pixel was part of a tile or not. Unfortunately, it got confused by meeples hanging over the edge of a tile, indicated with the red circles below:

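For anyone curious, that sort of HSV test is only a few lines of Python. This is just an illustrative sketch; the threshold ranges below are made-up values, not the ones I actually used:

```python
import colorsys

def looks_like_tile(rgb):
    """Classify a pixel as 'tile' by hue/saturation/value.
    The ranges here are illustrative guesses, not real thresholds."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    # tiles are mostly warm, reasonably saturated greens/browns,
    # while the background is dark and desaturated
    return 0.05 <= h <= 0.45 and s >= 0.25 and v >= 0.30
```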
Rather than looking at every pixel, I used recursion to find the edges of the tile placement positions: the function that looks for a corner starts with a step size of 16 pixels, then calls itself with half that step size whenever it overshoots. If the edge was 63 pixels away, for example, instead of checking all 63 pixels it would only check positions 16, 32, 48, 64, 56, 60, 62 and 63, so just 8 positions. The tricky bit was tailoring the algorithm to ignore meeples overhanging the edge of a tile, by looking ahead to see whether the colour went back to being background.
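In simplified form the halving-step search looks something like this. It's a sketch rather than my actual code: `is_tile` stands in for whatever per-pixel test is used, only the search along one axis is shown, and the meeple look-ahead is left out:

```python
def find_edge(is_tile, x, step=16):
    """Find the last tile pixel along one axis, starting inside the tile at x.
    Probes ahead by `step` pixels and recurses with half the step on overshoot."""
    if step == 0:
        return x                                    # converged on the edge
    if is_tile(x + step):
        return find_edge(is_tile, x + step, step)   # still on the tile: keep the step
    return find_edge(is_tile, x, step // 2)         # overshot: halve the step and retry

# e.g. tile pixels run from 0 to 63, background starts at 64
print(find_edge(lambda x: x <= 63, 0))              # -> 63, after only a handful of probes
```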
Ultimately, however, it was easier just to look for the dark grey placement positions:

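The test for those is simpler: a pixel counts as a placement position if its three colour channels are nearly equal (i.e. grey) and fairly dark. Again, a rough sketch only; the thresholds and the `board.png` filename are just for illustration:

```python
from PIL import Image

def is_placement_pixel(rgb, tol=12, max_brightness=110):
    """Dark grey test: channels nearly equal and all reasonably dark.
    The numbers are illustrative, not BGA's actual colours."""
    r, g, b = rgb[:3]
    return max(r, g, b) - min(r, g, b) <= tol and max(r, g, b) <= max_brightness

img = Image.open("board.png").convert("RGB")
placement_pixels = [(x, y)
                    for x in range(img.width)
                    for y in range(img.height)
                    if is_placement_pixel(img.getpixel((x, y)))]
```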
I could then pick a couple of placement positions, take the smallest dimension, and that was the tile size. I did have to go one step further, however, because BGA must calculate the corner positions in floating point but then use them in integer form. This sometimes caused tile sizes to fluctuate between two integer values, such as 54 and 55 pixels. That meant I couldn't rigidly use a single size: going across the playing area, the rounding error would accumulate enough to throw the tile boundaries off.
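One way round that, sketched below, is to keep the tile size as a float and only round when computing each boundary, so the error never builds up across the board (the names and numbers here are illustrative, not my exact values):

```python
def column_boundaries(origin_x, tile_size, n_cols):
    """Column boundary x positions. Keeping tile_size as a float and rounding
    only at the end stops a 54/55-pixel wobble from accumulating across the board."""
    return [round(origin_x + i * tile_size) for i in range(n_cols + 1)]

# e.g. with a measured size of 54.6 px the columns land on
# 100, 155, 209, 264, 318, ... rather than drifting off by several pixels
print(column_boundaries(100, 54.6, 10))
```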
If any of you find this sort of stuff interesting, I'll create a new subject for it, but I know it's not everyone's cup of tea and I don't want to distract here from
@DIN0's sterling work on his notation projects.