Thursday, December 26, 2013

Camera Input in Matlab GUI



Accessing a camera through Matlab is very useful for doing real-time image and video processing. So here I will explain how to access camera input via Matlab and create a GUI with a camera view.

First let's begin with the Image Acquisition Tool of Matlab. With this tool we can learn a lot about our camera.

So start Matlab and enter the imaqtool command in the command window. The imaqtool command opens a GUI.

Image Acquisition Tool

Here you can find your hardware in the Hardware Browser panel. Select your hardware and choose your preferred format from the available formats. Then click Start Preview. You will see a preview of the camera input, and in the Session Log panel at the lower right you can see the Matlab code which does the magic; here it is.
Session Log
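The session log typically records commands like the following sketch (the adaptor name 'winvideo', device ID 1, and format string are assumptions on my part; substitute whatever imaqtool shows for your camera):

```matlab
% Create a video input object for the selected device and format
% (adaptor, device ID and format are examples -- use the values
% imaqtool lists for your hardware)
vid = videoinput('winvideo', 1, 'YUY2_640x480');
% Open a preview window showing the live camera feed
preview(vid);
```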


So you can enter these commands in the command window whenever you want to preview video without using imaqtool. You can change the image resolution by changing the format in the available-formats window.
So copy the commands from the session log and paste them into the command window. Now you will see a workspace variable named vid. Double-click on the workspace variable vid and you will see the property list of the vid object.
You can change some of these properties in order to get the desired video input.
OK, now we know how to get camera input and change its properties. Now let's design the GUI to display this camera input and to capture an image.
Type guide in the command window and create a new blank GUI. Then design the following GUI with an axes object and a push button. Double-click on the axes and open its properties.

In the properties, select the Tag property and give it a name; this is the name you will use in code. I gave it the name 'cameraview'.



In the GUI designer go to View -> M-file Editor and open the source code for your GUI. Here we can write Matlab code to get the camera input. In the opening function of your GUI (the function whose name ends with OpeningFcn) insert this code. It will create a video object and bind it to the handles object of the GUI. Change the device settings according to your hardware, as previously mentioned.
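The code for the opening function looks roughly like the following sketch (the adaptor name, device ID and format are assumptions, as before, and 'cameraview' is the axes tag chosen above):

```matlab
% --- Goes inside the GUI's OpeningFcn, before the final guidata call.
% Create the video object once and keep it in the handles structure
handles.vid = videoinput('winvideo', 1, 'YUY2_640x480');
% Create an image object on the 'cameraview' axes matching the
% camera resolution, then attach the live preview to it
res    = get(handles.vid, 'VideoResolution');
nBands = get(handles.vid, 'NumberOfBands');
hImage = image(zeros(res(2), res(1), nBands), 'Parent', handles.cameraview);
preview(handles.vid, hImage);
% Save the updated handles structure
guidata(hObject, handles);
```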


You can download the source code here.

First we create a video object and change its properties according to what we need. Then we create an image object the size of the camera resolution. To learn more about it, see the help on the image function.
Then we use the preview function to draw the preview on the previously created image object.
Now when you run your GUI you will see the video input displayed on the camera axes.




Next, our target is to capture an image when you press the capture button. To do that, go to the callback function of the capture button. Here we can get the captured image using the getsnapshot method, and I will save the image data to the base workspace so that we can use that image for other tasks. This is the code for the capture button.
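A sketch of that callback body (it assumes the video object was stored in handles.vid in the opening function; imqs is the workspace variable name used below):

```matlab
% --- Goes inside the capture button's Callback function.
% Grab one frame from the running video object
img = getsnapshot(handles.vid);
% Copy the frame to the base workspace under the name 'imqs'
% so it can be used for other tasks after the GUI closes
assignin('base', 'imqs', img);
```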


Click the capture button. Now you can see a variable named imqs in your base workspace. To understand more about the assignment, see the Matlab help on assignin.
Then enter imshow(imqs); in the command window and you will see the captured image.
Now that we have the camera input, it is processing time.
Download the M-files here.

Wednesday, November 13, 2013

Edge Detection in Images


In image processing, edge detection plays a major role since it helps to extract the features we are interested in. If we are able to find the edges of an object, then we can segment the object.
The fundamental idea in edge detection is using an appropriate kernel to find the edges we are interested in. We represent the image in grayscale rather than RGB or another representation, since that makes our job easier.
Suppose there is a 5 x 5 image as shown below.
250   240   250   30   20
230   240   250   30   25
240   240   250   30   20
250   240   250   30   20
250   240   250   30   20
 
Clearly we can see an edge going from a high value to a low value in the vertical direction. This is a <strong>vertical edge</strong>. To identify this type of vertical edge, we can create a new image by taking the difference between each pixel value and its adjacent right pixel. I can create a new image I2 from the original image I, such that
I2(x,y) = I(x,y) - I(x+1,y)
So the resultant image is

 10   -10   220   10   20
-10   -10   220    5   25
  0   -10   220   10   20
 10   -10   220   10   20
 10   -10   220   10   20

So our image I2 has sharp values along the edge. What we did was take the difference between two pixels in the horizontal direction. Consider this 1 by 2 matrix, G = [1, -1]. Mathematically, what we have done here can be represented as a <strong>convolution</strong> between the matrix G and the original image I, so mathematically
I2 = I * G , where * denotes convolution. We call our matrix G a kernel. If G is a u by v matrix, the convolution operation can be written as
I2(x,y) = sum over (u,v) of I(x-u, y-v) * G(u,v), where * here is ordinary multiplication. This is what we have done above. (Strictly, applying the kernel without flipping it is cross-correlation rather than convolution; for a kernel like [1, -1] the flip only changes the sign of the result, which does not affect where edges are found.)
So by convolving our image I with the kernel G = [1, -1] we were able to find vertical edges. So if we define our kernel as the column vector
[ 1
 -1 ]
then we will be able to find horizontal edges. So finding edges amounts to convolving a suitable kernel with the source image.
Here is a sample output of detecting horizontal edges using the second kernel I mentioned.




And here is the sample output using the kernel [1, -1]. This kernel detects vertical edges, as explained earlier.



Note that in our second image the vertical edges are more strongly highlighted than in the previous one. Why? It is because of the kernel we used. So we have identified methods for finding horizontal and vertical edges. If we want to find edges in any direction, i.e. the gradient image, then we can use the two previously defined images: take I2 as the horizontal-edge image and I3 as the vertical-edge image. Then our gradient image E can be found by
E(x,y) = sqrt( I2(x,y)^2 + I3(x,y)^2 )
Here is the result from the two previously defined kernels.



Note that we can achieve a better result if we apply a Gaussian filter before edge detection; in any case, we can see the difference in each image.
Next I will describe the implementation in Java.
Download the project source code here.

First I load the image from the file using ImageIO and get a BufferedImage object.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

BufferedImage image = ImageIO.read(new File("raiway.jpg"));

Next I converted this image to a grayscale image by taking the average of the R, G, and B values. I take the raw pixel data to perform the calculations, so I represented the image using a two-dimensional array. Here is how I converted the RGB image to gray-level data.
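The conversion described above can be sketched roughly as follows (the class and method names are my own, and I use an int array rather than a byte array to sidestep Java's signed-byte range):

```java
import java.awt.image.BufferedImage;

public class GrayScale {
    // Convert an RGB BufferedImage to a 2-D array of gray levels
    // (0-255) by averaging the R, G and B channels of each pixel.
    static int[][] toGray(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] gray = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);     // packed 0xAARRGGBB
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                gray[y][x] = (r + g + b) / 3;   // simple average
            }
        }
        return gray;
    }
}
```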



After that I wrote the convolve operation as a Java method so that I could convolve any kernel with my image data.
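A minimal sketch of such a convolve method (names are my own; pixels outside the image are treated as zero, which is one common border choice, and the kernel is applied unflipped, matching the pixel-difference formula used earlier):

```java
public class Convolve {
    // Apply a small kernel to a 2-D grayscale image. With the kernel
    // {{1, -1}} each output pixel is I(x,y) - I(x+1,y), the horizontal
    // difference from the post; pixels past the border contribute zero.
    static int[][] convolve(int[][] img, int[][] kernel) {
        int h = img.length, w = img[0].length;
        int kh = kernel.length, kw = kernel[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0;
                for (int u = 0; u < kh; u++) {
                    for (int v = 0; v < kw; v++) {
                        int yy = y + u, xx = x + v;
                        if (yy < h && xx < w) {
                            sum += img[yy][xx] * kernel[u][v];
                        }
                    }
                }
                out[y][x] = sum;
            }
        }
        return out;
    }
}
```

Running it on the first row of the 5 x 5 example image reproduces the difference image computed by hand earlier.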

Finally, the gradient-image-finding method, which uses the two kernels.
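The combining step can be sketched as follows, assuming the horizontal- and vertical-edge images have already been computed by convolving with the two kernels (class and method names are my own):

```java
public class Gradient {
    // Combine the two edge responses into a gradient-magnitude image:
    // E(x,y) = sqrt( I2(x,y)^2 + I3(x,y)^2 )
    static int[][] gradient(int[][] i2, int[][] i3) {
        int h = i2.length, w = i2[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = (int) Math.round(
                        Math.sqrt(i2[y][x] * i2[y][x] + i3[y][x] * i3[y][x]));
            }
        }
        return out;
    }
}
```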




So define your own kernels and find edges. Just convolve them.