I'm doing some research in computer vision using FastCV on the Windows Phone platform. I managed to use fcvCornerFast9Scoreu8 to get corners from a reference image. Then, for each detected corner, I construct a descriptor with fcvDescriptor17x17u8To36s8 by cropping a 17x17-pixel patch from the luminance source image. From the source image's descriptors I then built a KD-tree to speed up the search, using fcvKDTreeCreate36s8f32.

Now I'm stuck trying to find this source object in other images: every point I query for in the KD-tree seems to be wrong. My guess is that the descriptor depends on the scale and rotation of the images (both the source and the image being searched). But as I've seen in other posts on the forums, there is not a word in the docs about what kind of descriptor fcvDescriptor17x17u8To36s8 returns, or whether it is invariant to scale or orientation.

So what would be a good way to do object matching using FastCV functions? Do I need to write code for scale invariance, or is the descriptor already scale-invariant? And for orientation invariance, which FastCV functions would be a good fit?
Thanks for your time.
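For reference, the matching step described above can be sketched without FastCV itself. Below is a minimal ratio-test filter over 36-element signed-byte descriptors (the output size fcvDescriptor17x17u8To36s8 produces); the 0.64 squared-ratio threshold and the brute-force candidate scan are assumptions for illustration, not anything from the FastCV docs:

```c
#include <stdint.h>
#include <stddef.h>
#include <limits.h>

/* Squared L2 distance between two 36-element s8 descriptors. */
static int desc_dist_sq(const int8_t *a, const int8_t *b)
{
    int d = 0;
    for (int i = 0; i < 36; ++i) {
        int t = (int)a[i] - (int)b[i];
        d += t * t;
    }
    return d;
}

/* Lowe-style ratio test: accept the nearest candidate only if it is
 * clearly better than the runner-up. Returns the index of the best
 * candidate, or -1 if the match is ambiguous. Candidates are packed
 * back-to-back, 36 bytes each. */
static int ratio_test_match(const int8_t *query,
                            const int8_t *candidates, size_t n,
                            float max_ratio_sq)
{
    int best = -1, best_d = INT_MAX, second_d = INT_MAX;
    for (size_t i = 0; i < n; ++i) {
        int d = desc_dist_sq(query, candidates + 36 * i);
        if (d < best_d) {
            second_d = best_d;
            best_d = d;
            best = (int)i;
        } else if (d < second_d) {
            second_d = d;
        }
    }
    if (second_d == 0)                                  return -1;
    if ((float)best_d > max_ratio_sq * (float)second_d) return -1;
    return best;
}
```

In practice you would apply the same filter to the two nearest neighbors returned per query by the KD-tree search; discarding ambiguous matches this way usually removes most of the "wrong" correspondences before any geometric check.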
Hi,
The descriptors created by fcvDescriptor17x17u8To36s8 are not scale-invariant, although they can tolerate small rotations of the image. You can try fcvTrackLKOpticalFlowu8 to see which target image has the most features (e.g. corner points) tracked, or you can use fcvTrackBMOpticalFlow16x16u8.
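Since the descriptor is not scale-invariant, one common workaround is to build an image pyramid, then rerun corner detection and descriptor extraction at each level and match against all levels. Here is a minimal sketch of the downsampling step in plain C (2x2 box averaging); using it in a pyramid loop around the FastCV calls is my suggestion, not something prescribed by the FastCV docs:

```c
#include <stdint.h>

/* Halve an 8-bit grayscale image with 2x2 box averaging, writing a
 * (w/2) x (h/2) image into dst. An odd trailing row or column of the
 * source is dropped. */
static void halve_u8(const uint8_t *src, int w, int h, uint8_t *dst)
{
    int hw = w / 2, hh = h / 2;
    for (int y = 0; y < hh; ++y) {
        for (int x = 0; x < hw; ++x) {
            int s = src[(2 * y) * w + 2 * x]
                  + src[(2 * y) * w + 2 * x + 1]
                  + src[(2 * y + 1) * w + 2 * x]
                  + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * hw + x] = (uint8_t)(s / 4);
        }
    }
}
```

At each pyramid level you would call fcvCornerFast9Scoreu8 and fcvDescriptor17x17u8To36s8 on the downsampled buffer, remembering the level so matched coordinates can be scaled back to the original image.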
Cheers,
-Jeff