Fast Variance calculation in OpenCV

04/07/2016 13:04 Shadow992#1
I'll try to keep my question as short as possible.
I have an image of around 4048 x 3040 pixels. My task is to find some regions of interest.
These regions can be found easily by computing the variance of the border of a rectangle.
I have to do this for every pixel in the image, with a rectangle size of around 10x10.

Just imagine this is my image (the 1s are the pixels I want the variance of, x is my current pixel):

Quote:
00000
01110
01x10
01110
Now I want to calculate the variance for this small rectangle (width=3, height=3). To do that I have to iterate over each pixel on the rectangle border, once to get the mean and once more to get the variance.
Doing this for 4048 x 3040 pixels ends up summing (and dividing) around 4048 * 3040 * 36 * 2 = 886,026,240 pixel accesses (a 10x10 rectangle has 36 border pixels, and I need two passes: one for the mean, one for the variance).
This sounds like a huge number, and in fact it is a huge number, but there is no way to avoid checking every pixel.
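
For concreteness, this is roughly what I am doing at the moment (a minimal sketch, assuming a single-channel 8-bit image; bounds checking is omitted):
Code:
#include <opencv2/core.hpp>

// Naive border variance: walk the border of a (2*half+1)^2 rectangle
// around (cx, cy) twice, once for the mean and once for the variance.
double borderVariance(const cv::Mat& img, int cx, int cy, int half)
{
    double sum = 0.0;
    int count = 0;
    for (int y = cy - half; y <= cy + half; ++y)
        for (int x = cx - half; x <= cx + half; ++x)
            if (y == cy - half || y == cy + half ||
                x == cx - half || x == cx + half)      // border pixels only
            {
                sum += img.at<uchar>(y, x);
                ++count;
            }
    const double mean = sum / count;

    double var = 0.0;
    for (int y = cy - half; y <= cy + half; ++y)
        for (int x = cx - half; x <= cx + half; ++x)
            if (y == cy - half || y == cy + half ||
                x == cx - half || x == cx + half)
            {
                const double d = img.at<uchar>(y, x) - mean;
                var += d * d;
            }
    return var / count;
}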
However, at the moment iterating over this many pixels takes around 60 seconds (no multithreading, using only one core).
This is "ok" for my application because I do not have to do any kind of real-time processing.
But 60 s is still not good at all.
Because this project is not for personal use, I have to rewrite my code so it uses OpenCV.
I am not that familiar with OpenCV, but I did not find any kind of "get variance/mean of a pixel rectangle" function.
Of course I can iterate over all pixels again, but OpenCV can use the GPU, which means that if there is some built-in function that calculates this for me, it could speed up the code enormously (from 60 s to around 3 s or even better).

So my main question is:
Is there any kind of function that has low overhead but gives me what I want?

If no such function exists, I will use OpenCL and write my own; however, it would be cool if I could avoid that step. :D

Thanks in advance.

Edit:
drawContours does not seem to solve the problem. Some timings:
15.7 ms for a 1280x720 image with around 100 contours.
However, I would have around 12,000,000 contours (one for each pixel). This would result in a total time of:
12,000,000 / 100 * 15.7 ms = 1884 s
This would be even worse...
04/10/2016 23:42 Mysthik#2
I have never used OpenCV or OpenCL, but this might help you.

For OpenCV I found the function meanStdDev, but I don't know if it uses the GPU.
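
From the documentation it would be used roughly like this (an untested sketch; restricting the statistics to the border pixels via a mask):
Code:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Untested sketch: meanStdDev over one window, restricted to the
// border pixels with a mask.
double borderVarianceMasked(const cv::Mat& img, const cv::Rect& roi)
{
    // Border-only mask: a 1-pixel-wide rectangle of 255s on a zero background.
    cv::Mat mask = cv::Mat::zeros(roi.size(), CV_8U);
    cv::rectangle(mask, cv::Rect(0, 0, roi.width, roi.height),
                  cv::Scalar(255), 1);

    cv::Scalar mean, stddev;
    cv::meanStdDev(img(roi), mean, stddev, mask);
    return stddev[0] * stddev[0];                // variance = stddev^2
}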



If OpenCL works the same way a pixel shader does, you could use it to manipulate the image directly. You could just apply a "filter" to each pixel and calculate the mean and variance for each pixel at the same time; a rough kernel sketch follows below. You should look up how a Gaussian blur filter works in OpenCL; as far as I know it also needs the neighbouring pixels.
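Something along these lines, maybe (an untested sketch of such a kernel in OpenCL C, stored as a string for the C++ host code; it computes the mean and variance of the 8 border pixels of a 3x3 window in a single pass using E[X^2] - E[X]^2):
Code:
// Untested OpenCL C sketch, kept as a C++ raw string for the host code:
// one work-item per pixel.
const char* kernelSource = R"CLC(
__kernel void borderVariance(__global const uchar* img,
                             __global float* out,
                             int width, int height)
{
    int x = get_global_id(0), y = get_global_id(1);
    if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1)
        return;                                  // skip image borders
    float sum = 0.0f, sumSq = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            if (dx != 0 || dy != 0)              // the 8 border pixels
            {
                float v = (float)img[(y + dy) * width + (x + dx)];
                sum += v;
                sumSq += v * v;
            }
    float mean = sum / 8.0f;
    out[y * width + x] = sumSq / 8.0f - mean * mean;
}
)CLC";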

You could probably also use OpenGL and the OpenGL Shading Language (GLSL). With this method you might need to calculate the mean and variance separately, but you could use the created mean image to calculate the variance image (a box-filter version of the same two-pass idea follows the list):
Code:
1. Create an object the same size as your image
2. Add your input image as a texture
3. Use a pixel shader to calculate the mean
4. Output 1: a new image where each (x, y) value is the mean of the surrounding pixels
5. Create an object the same size as your image
6. Use your input image and the mean image (output 1) as textures for the new object*
7. Use a pixel shader to calculate the variance
8. Output 2: a new image where each (x, y) value is the variance
* I think you can use glBlendFunc for this, but I'm not sure.
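
The same two-pass idea can also be written with plain OpenCV box filters instead of shaders. A sketch (note it computes statistics over the full k x k window, not just its border; border-only sums could be derived by subtracting an inner window's sums from an outer window's):
Code:
#include <opencv2/imgproc.hpp>

// Sketch: per-pixel mean and variance over a full k x k window using
// two box filters, mirroring steps 3-8 above.
void windowMeanVariance(const cv::Mat& src, int k,
                        cv::Mat& mean, cv::Mat& variance)
{
    cv::Mat f, f2, meanOfSquares;
    src.convertTo(f, CV_32F);                    // work in float
    f2 = f.mul(f);                               // X^2
    cv::boxFilter(f,  mean,          CV_32F, cv::Size(k, k)); // E[X]
    cv::boxFilter(f2, meanOfSquares, CV_32F, cv::Size(k, k)); // E[X^2]
    variance = meanOfSquares - mean.mul(mean);   // E[X^2] - E[X]^2
}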
04/11/2016 14:57 Shadow992#3
Quote:
Originally Posted by Mysthik
For OpenCV I found the function meanStdDev, but I don't know if it uses the GPU. [...]
Thanks for your help.
meanStdDev should use the GPU. But the problem is that the mask you have to create causes a lot of overhead, as you always need the same mask, just shifted. However, OpenCV does not allow you to "shift" a mask easily; you would have to create a new mask for every position.

But I guess sticking to OpenCL seems to be the best solution. OpenCL does not work exactly like a pixel shader; it is more general. And because it is more general, it is possible to implement a pixel shader with it.

I did not test OpenGL, but I am not sure it would be much faster than my original approach.
I think OpenCL really is the perfect solution; too bad there is no built-in function for this.
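
One thing that might help: since OpenCV 3.0 the transparent API (T-API) can route many built-in functions through OpenCL automatically when cv::UMat is used instead of cv::Mat. An untested sketch of the box-filter approach on top of it (assumes OpenCV >= 3.0 and an OpenCL-capable device):
Code:
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>

// Untested sketch: the box-filter approach, routed through OpenCL via
// the transparent API (cv::UMat) instead of a hand-written kernel.
cv::UMat windowVarianceGPU(const cv::Mat& src, int k)
{
    cv::ocl::setUseOpenCL(true);                 // ask OpenCV to use OpenCL

    cv::UMat f, f2, mean, meanSq, meanSquared, variance;
    src.convertTo(f, CV_32F);                    // upload + convert to float
    cv::multiply(f, f, f2);                      // X^2
    cv::boxFilter(f,  mean,   CV_32F, cv::Size(k, k));  // E[X]
    cv::boxFilter(f2, meanSq, CV_32F, cv::Size(k, k));  // E[X^2]
    cv::multiply(mean, mean, meanSquared);       // E[X]^2
    cv::subtract(meanSq, meanSquared, variance); // E[X^2] - E[X]^2
    return variance;
}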