The VLFeat open
source library implements popular computer vision algorithms
including
SIFT, MSER, k-means, hierarchical
k-means, agglomerative information bottleneck, and quick
shift. It is written in C for efficiency and compatibility,
with interfaces in MATLAB for ease of use, and detailed
documentation throughout. It supports
Windows, Mac OS X, and Linux. The latest version of
VLFeat is 0.9.9.
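As a small illustration of the MATLAB interface, the sketch below computes SIFT frames and descriptors for a single image; the install path 'vlfeat/' and the file name 'image.jpg' are placeholders, not part of the library.

  run('vlfeat/toolbox/vl_setup');              % add the VLFeat MATLAB toolbox to the path
  I = single(rgb2gray(imread('image.jpg')));   % vl_sift expects a single-precision grayscale image
  [frames, descriptors] = vl_sift(I);          % frames: 4xK keypoints; descriptors: 128xK SIFT descriptors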
Citing
@misc{vedaldi08vlfeat,
  Author = {A. Vedaldi and B. Fulkerson},
  Title = {{VLFeat}: An Open and Portable Library
           of Computer Vision Algorithms},
  Year = {2008},
  Howpublished = {\url{http://www.vlfeat.org/}}
}
News
- 14/06/2010 - VLFeat 0.9.9 released
- VLFeat 0.9.9 adds a new sample application (SIFT matching) and
minor refinements; a minimal matching sketch is given after the news list.
[Details].
- 14/06/2010 - Open Source Vision Software Tutorial
- VLFeat was presented at the CVPR 2010 Open Source Vision Software
Tutorial. Slides of the presentation are available from
the tutorial web page.
- 10/05/2010 - VLFeat 0.9.8 released
- VLFeat 0.9.8 adds new tutorials, (hierarchical) k-means support for
floating point data, homogeneous kernel maps, a basic implementation
of PEGASOS for SVM learning, and many other improvements.
[Details].
- 16/01/2010 - VLFeat 0.9.7 released
- VLFeat 0.9.7 updates the binary distribution to be backward
compatible with Mac OS X 10.5 (Leopard).
[Details].
- 10/01/2010 - VLFeat 0.9.6 released
- VLFeat 0.9.6 contains minor improvements to the binary
distribution. Specifically, it makes the VLFeat GNU/Linux distribution
compatible with the older GLIBC version 2.3.
[Details].
- 30/11/2009 - VLFeat 0.9.5 released
- VLFeat 0.9.5 adds a fast kd-tree implementation and
SSE-accelerated vector/histogram comparison. The dense SIFT (dsift)
implementation has also been improved. Binaries and compilation
support for Mac OS X 10.6 (Snow Leopard) and MATLAB R2009b (32 and 64
bit) have been added.
[Details].
MATLAB 7.0 and earlier require recompiling the MEX files with
the provided vl_compile
command.
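The following is a minimal sketch in the spirit of the SIFT matching sample application mentioned in the 0.9.9 release notes; it assumes vl_setup has already been run and that 'imageA.jpg' and 'imageB.jpg' (placeholder names) are two views of the same scene.

  Ia = single(rgb2gray(imread('imageA.jpg')));   % placeholder image names
  Ib = single(rgb2gray(imread('imageB.jpg')));
  [fa, da] = vl_sift(Ia);                        % keypoint frames and SIFT descriptors for each image
  [fb, db] = vl_sift(Ib);
  [matches, scores] = vl_ubcmatch(da, db);       % descriptor matching with Lowe's ratio test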
Acknowledgments
Part of this work was supported by
the UCLA Vision Lab and
the Oxford VGG
Lab. The authors would like to thank the many colleagues who
have contributed to VLFeat by testing it and providing helpful
suggestions and comments.