In this part of the site I have put information related to my research, publications, tutorials and so on. If you have any questions, do not hesitate to contact me!
3D-reconstruction and Stereo Disparity
3D reconstruction based on stereo disparity is somewhat similar to how we humans perceive depth: since we have two eyes facing in the same general direction, we have two different vantage points of the scene being viewed. Corresponding image features between the left and right cameras (or eyes) therefore differ in position as a function of depth; this difference is called disparity. Put another way, if we know the stereo disparity, we can deduce something about the depth, or distance, to the objects being observed. In machine vision, disparity maps are computed or approximated with a variety of techniques, such as block matching and phase-based methods. Once the stereo disparity is known, and the internal and external parameters of the cameras are known as well, a 3D reconstruction of the scene is possible. Typically the images are first rectified (using epipolar geometry), so that point correspondences lie on horizontal lines.
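To make the block-matching idea concrete, here is a minimal sketch in plain Python: for each pixel on a rectified left scanline, it searches along the corresponding right scanline for the displacement that minimises the sum of absolute differences (SAD) over a small window. The scanline values, window size, and disparity range are invented toy data for the example; a real implementation (such as the Matlab/MEX code on the Code page) works on full images and adds refinements like sub-pixel interpolation and smoothness constraints.

```python
def match_scanline(left, right, window=3, max_disp=5):
    """For each pixel x on the left scanline, find the disparity d
    such that right[x - d] best matches left[x], using sum of
    absolute differences (SAD) over a small window."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        best_d, best_cost = 0, float("inf")
        # Only consider disparities that keep the window inside the image.
        for d in range(0, min(max_disp, x - half) + 1):
            cost = sum(abs(left[x + k] - right[x - d + k])
                       for k in range(-half, half + 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities

# Toy scanlines: the "object" (the bright 9s) is shifted 2 pixels
# between the two views, so its disparity should come out as 2.
left  = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0]
right = [0, 9, 9, 9, 0, 0, 0, 0, 0, 0]
print(match_scanline(left, right))
```

Because the images are rectified, the inner loop is one-dimensional; this is what distinguishes the stereo search from the optical-flow search below. With the camera parameters in hand, each recovered disparity can then be converted to a depth via triangulation.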
Optical flow refers to the apparent motion of pixels in the image plane. While in the stereo disparity case corresponding points are found along horizontal lines, in the optical-flow case the search space is typically two-dimensional.
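The contrast with the stereo case can be illustrated with a similarly minimal block-based sketch: the displacement of a patch between two frames is found by an exhaustive SAD search over a 2D neighbourhood rather than along a single scanline. The frames, patch size, and search radius are made-up toy values; practical flow methods (Lucas-Kanade, phase-based approaches, and so on) are far more refined than this brute-force search.

```python
def patch_cost(f1, f2, y, x, dy, dx, half):
    """SAD between the patch at (y, x) in f1 and its shifted copy in f2."""
    return sum(abs(f1[y + j][x + i] - f2[y + dy + j][x + dx + i])
               for j in range(-half, half + 1)
               for i in range(-half, half + 1))

def flow_at(f1, f2, y, x, half=1, radius=2):
    """Return the (dy, dx) displacement of the patch centred at (y, x)
    that minimises the SAD between the two frames, searching a
    (2*radius + 1) x (2*radius + 1) neighbourhood."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            c = patch_cost(f1, f2, y, x, dy, dx, half)
            if c < best_cost:
                best_cost, best = c, (dy, dx)
    return best

# A bright blob moves one pixel down and two to the right between frames.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [[0] * 8 for _ in range(8)]
frame1[2][2] = frame1[2][3] = frame1[3][2] = frame1[3][3] = 9
frame2[3][4] = frame2[3][5] = frame2[4][4] = frame2[4][5] = 9
print(flow_at(frame1, frame2, 3, 3))  # → (1, 2)
```

The 2D search makes the problem noticeably more expensive and more ambiguous than the rectified stereo case, which is one reason optical-flow estimation is usually combined with additional constraints.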
Segmentation methods try to group pixels into meaningful groups based on one or more similarity metrics; what counts as a meaningful group depends on the task at hand.
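As one simple instance of such grouping, the sketch below uses intensity as the similarity metric and region growing (flood fill) as the grouping rule: 4-connected pixels whose intensities differ by at most a tolerance from an already-accepted neighbour are merged into one segment. The image, the tolerance, and the choice of metric are all assumptions made up for this toy example.

```python
def segment(image, tol=1):
    """Label 4-connected pixels into segments: a pixel joins a segment
    if its intensity differs by at most `tol` from an already-labelled
    neighbouring pixel of that segment (simple region growing)."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Flood fill from an unlabelled seed pixel.
            stack = [(sy, sx)]
            labels[sy][sx] = current
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and abs(image[ny][nx] - image[y][x]) <= tol):
                        labels[ny][nx] = current
                        stack.append((ny, nx))
            current += 1
    return labels

image = [
    [1, 1, 9, 9],
    [1, 2, 9, 8],
    [1, 1, 1, 9],
]
for row in segment(image):
    print(row)   # two segments: the dark region (0) and the bright one (1)
```

Swapping in a different similarity metric (colour, texture, motion) or grouping rule changes what the segments mean, which is exactly the sense in which "meaningful" depends on the task.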
If you find the above ideas and concepts interesting, you can take a look at each of the individual topics using the corresponding menu options, or try some of the Matlab/MEX (written in C) code that can be found on the Code page!