
# Repeatability benchmark tutorial

The repeatability benchmark reimplements the protocol introduced by Mikolajczyk et al. [1]. It defines two feature extractor tests. The first test measures the feature extractor repeatability: by comparing the features detected in two images of the same scene, it measures to what extent the detected regions cover the same scene regions. It is based only on the feature geometry.

The second test computes the matching score, which also takes the local feature descriptors into account. This test helps to assess the distinctiveness of the detected regions in planar scenes.

This tutorial shows how to perform both tests and how to visualise the computed scores.

The source code of this tutorial is available in the repeatabilityDemo.m function; this text shows only selected parts of the code.

## Image feature detection

VLBenchmarks contains a few built-in image feature extractors such as VLFeat SIFT, VLFeat MSER, VLFeat Covdet and a random features generator. Each feature extractor is represented by a MATLAB object with a unified feature detection interface. All these feature extractors are implemented in the MATLAB package localFeatures. For example, an instance of the VLFeat SIFT feature extractor can be obtained by:

```matlab
sift = localFeatures.VlFeatSift() ;
```


The feature extractor object manages the values of the feature extractor parameters. Default values are set in the object's constructor; however, any parameter can be overridden. For example, we can create a VLFeat SIFT feature extractor with a larger peak threshold:

```matlab
thrSift = localFeatures.VlFeatSift('PeakThresh', 11);
```


Let's generate a test image.

```matlab
ellBlobs = datasets.helpers.genEllipticBlobs();
ellBlobsPath = fullfile('data','ellBlobs.png');
imwrite(ellBlobs,ellBlobsPath);
```


To extract features from an image, each feature extractor implements the method `frames = obj.extractFeatures(imagePath)`.

```matlab
siftFrames = sift.extractFeatures(ellBlobsPath);
thrSiftFrames = thrSift.extractFeatures(ellBlobsPath);
```

The detected features and their regions can be visualised with the vl_plotframe function.

```matlab
imshow(ellBlobs);
sfH = vl_plotframe(siftFrames, 'g');
tsfH = vl_plotframe(thrSiftFrames, 'r');
legend([sfH tsfH], 'SIFT', 'SIFT PT=11');
```


The feature extractors cache the detected features, so when you run extractFeatures(ellBlobsPath) again, the features are loaded from the cache rather than recomputed. You can disable caching by calling the feature extractor method obj.disableCaching().
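A quick way to see the cache at work is to time two successive calls (the timing variables below are just illustrative):

```matlab
% First call runs the detector; the second should be served from the cache.
tic; sift.extractFeatures(ellBlobsPath); tDetect = toc;
tic; sift.extractFeatures(ellBlobsPath); tCache = toc;   % typically much faster
fprintf('detection: %.3fs, cached: %.3fs\n', tDetect, tCache);
sift.disableCaching();   % subsequent calls recompute the features
```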

## Repeatability test

The feature extractor repeatability is calculated for two sets of feature frames FRAMESA and FRAMESB, detected in a reference image IMAGEA and a second image IMAGEB. The two images are assumed to be related by a known homography H, mapping pixels in the domain of IMAGEA to pixels in the domain of IMAGEB. The test assumes either a static camera with no parallax, or a moving camera looking at a planar scene.

A perfectly covariant feature extractor would detect the same features in both images regardless of the change in viewpoint (for the features that are visible in both cases). A good feature extractor will also be robust to noise and other distortions. The repeatability is the percentage of detected features that survive the viewpoint change, or some other transformation or disturbance, in going from IMAGEA to IMAGEB; it is calculated based only on the frame overlaps. For details about this test see [1].

Feature extractor repeatability is measured by the class benchmarks.RepeatabilityBenchmark. To measure repeatability as it is defined in [1], the benchmark object needs the following configuration:

```matlab
import benchmarks.*;
repBenchmark = RepeatabilityBenchmark('Mode','Repeatability');
```


To test a feature extractor, the benchmark object provides the method `testFeatureExtractor(featExtractor, tf, imageAPath, imageBPath)`. The remaining parameters can be obtained from the VGG Affine dataset class, which contains sets of six images related by known homographies. Let's take the graffiti scene:

```matlab
dataset = datasets.VggAffineDataset('Category','graf');
```


Now we define the set of feature extractors which we want to test.

```matlab
mser = localFeatures.VlFeatMser();
featureExtractors = {sift, thrSift, mser};
```


Finally, we loop over the feature extractors and the selected images.

```matlab
imageAPath = dataset.getImagePath(1);
rep = zeros(numel(featureExtractors), dataset.numImages);
numCorr = zeros(numel(featureExtractors), dataset.numImages);

for detIdx = 1:numel(featureExtractors)
  featExtractor = featureExtractors{detIdx};
  for imgIdx = 2:dataset.numImages
    imageBPath = dataset.getImagePath(imgIdx);
    tf = dataset.getTransformation(imgIdx);
    [rep(detIdx,imgIdx) numCorr(detIdx,imgIdx)] = ...
      repBenchmark.testFeatureExtractor(featExtractor, tf, ...
      imageAPath, imageBPath);
  end
end
```


This loop can be easily executed in parallel using parfor. The computed results are usually plotted in a graph showing the repeatability and the number of correspondences together.

```matlab
detNames = {'SIFT', 'SIFT PT=11', 'MSER'};
plot(rep' .* 100, 'LineWidth', 2); legend(detNames);
...
plot(numCorr', 'LineWidth', 2); legend(detNames);
...
```
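As mentioned, the benchmark loop parallelises naturally over the feature extractors. A minimal parfor sketch, assuming the Parallel Computing Toolbox is installed (each worker fills one row of scores; the intermediate cell array is our own device to avoid sliced-variable restrictions):

```matlab
% Collect one row of repeatability scores per feature extractor.
repRows = cell(1, numel(featureExtractors));
parfor detIdx = 1:numel(featureExtractors)
  featExtractor = featureExtractors{detIdx};
  row = zeros(1, dataset.numImages);
  for imgIdx = 2:dataset.numImages
    % With a single output, testFeatureExtractor returns the repeatability.
    row(imgIdx) = repBenchmark.testFeatureExtractor(featExtractor, ...
      dataset.getTransformation(imgIdx), imageAPath, ...
      dataset.getImagePath(imgIdx));
  end
  repRows{detIdx} = row;
end
rep = vertcat(repRows{:});  % same layout as in the serial loop
```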


## Displaying the correspondences

It is useful to inspect the feature frame correspondences. Let's see which correspondences have been found between the features detected by the VLFeat SIFT feature extractor in the first and the third image. We can get the cropped and reprojected frames and the correspondences themselves by:

```matlab
imgBIdx = 3;
imageBPath = dataset.getImagePath(imgBIdx);
tf = dataset.getTransformation(imgBIdx);
[r nc siftCorresps siftReprojFrames] = ...
  repBenchmark.testFeatureExtractor(sift, tf, imageAPath, imageBPath);
```


The repeatability results are also cached, so successive calls load the data from the cache and nothing is recalculated. To visualise the correspondences, call:

```matlab
imshow(imread(imageBPath));
benchmarks.helpers.plotFrameMatches(siftCorresps, siftReprojFrames, ...
  'IsReferenceImage', false, 'PlotMatchLine', false);
```


## Matching score

The computation of the matching score differs from the repeatability score in that the one-to-one correspondences are established not only from the feature frame geometry (overlaps) but also from the distance in descriptor space. The feature extractor must therefore be able to compute feature descriptors. The MSER feature extractor cannot, so it has to be coupled with a feature extractor which supports descriptor calculation. Unfortunately, none of the built-in descriptors is affine invariant, so only the similarity-invariant SIFT descriptor is used.

```matlab
mserWithSift = localFeatures.DescriptorAdapter(mser, sift);
featureExtractors = {sift, thrSift, mserWithSift};
```


The matching benchmark object can then be constructed:

```matlab
matchingBenchmark = RepeatabilityBenchmark('Mode','MatchingScore');
```


The rest is the same as for repeatability.

```matlab
matching = zeros(numel(featureExtractors), dataset.numImages);
numMatches = zeros(numel(featureExtractors), dataset.numImages);

for detIdx = 1:numel(featureExtractors)
  featExtractor = featureExtractors{detIdx};
  for imgIdx = 2:dataset.numImages
    imageBPath = dataset.getImagePath(imgIdx);
    tf = dataset.getTransformation(imgIdx);
    [matching(detIdx,imgIdx) numMatches(detIdx,imgIdx)] = ...
      matchingBenchmark.testFeatureExtractor(featExtractor, ...
      tf, imageAPath, imageBPath);
  end
end
```

```matlab
detNames = {'SIFT', 'SIFT PT=11', 'MSER with SIFT'};

plot(matching' .* 100, 'LineWidth', 2); legend(detNames);
...
plot(numMatches', 'LineWidth', 2); legend(detNames);
...
```


As for the repeatability benchmark, we can also visualise the matched frames themselves.
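Reusing the pattern from the repeatability section (the output variable names here are illustrative), the matches can be plotted in the same way:

```matlab
% Sketch: visualise the SIFT matches between the first and the third image.
% The benchmark returns the matches and reprojected frames as its third and
% fourth outputs, just as the repeatability benchmark does.
[ms nm siftMatches siftReprojFrames] = ...
  matchingBenchmark.testFeatureExtractor(sift, tf, imageAPath, imageBPath);
imshow(imread(imageBPath));
benchmarks.helpers.plotFrameMatches(siftMatches, siftReprojFrames, ...
  'IsReferenceImage', false, 'PlotMatchLine', false);
```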

## References

1. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. IJCV, 65(1-2):43–72, 2005.