Andrea Vedaldi, Ph.D. email@example.com
University Lecturer in Engineering Science
Information Engineering Building 30.05, Parks Road, Oxford, OX1 3PJ
Visual Geometry Group (directions)
Tel. +44 1865 273 127
Résumé Google Scholar
I am University Lecturer in Engineering Science, Tutorial Fellow at New College, and member of the VGG group at the University of Oxford. My research interests include machine learning and invariant visual representations with applications to the classification and detection of object categories. I am one of the main authors of the VLFeat library.
- A new SVM^struct tutorial (slides and example code) is available here.
- I am now University Lecturer in Engineering Science at Oxford and Tutorial Fellow at New College.
- PASCAL Harvest grant for VLFeat development.
- svm-struct-matlab 1.1 adds support for Windows (thanks to Iasonas Kokkinos!).
- svm-struct-matlab 1.0 released! This new project is a MATLAB wrapper of SVMstruct.
- VLFeat wins the ACM Multimedia Open Source Software Competition.
- VLFeat presented at the ECCV10 Tutorial on Computer Vision and 3D Perception for Robotics.
- CVPR10 Tutorial on Open Source Vision Software.
- New contributed Python interface to siftpp.
Sparse kernel maps and faster product quantization learning. We propose sparse feature map representations of kernels, analogous to sparse expansions such as matching pursuit, linking previous work on dense and sparse explicit feature maps. The sparse maps can be smaller and faster in certain cases. One such case is Product Quantisation (PQ), which we reinterpret as a sparse kernel encoding. By doing so, we show for the first time that PQ can accelerate learning in addition to compressing the data.
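As a rough sketch of the product-quantisation encoding and scoring idea (random codebooks stand in for codebooks learned with k-means per subspace; all names and parameter values here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Product quantization: split a D-dim vector into m subvectors and quantize
# each against its own codebook of k centroids. Here the codebooks are
# random for illustration; in practice they are learned per subspace.
D, m, k = 8, 4, 16          # dimension, number of subquantizers, codebook size
d = D // m                  # subvector dimension
codebooks = rng.normal(size=(m, k, d))

def pq_encode(x):
    """Return m codebook indices, one per subvector (the sparse code)."""
    codes = []
    for i in range(m):
        sub = x[i * d:(i + 1) * d]
        dists = np.sum((codebooks[i] - sub) ** 2, axis=1)
        codes.append(int(np.argmin(dists)))
    return codes

def pq_dot(w, codes):
    """Inner product <w, decode(codes)> via per-subspace lookup tables.
    The tables depend only on w, so scoring many encoded vectors costs
    m table lookups each instead of D multiplications."""
    tables = [codebooks[i] @ w[i * d:(i + 1) * d] for i in range(m)]
    return sum(tables[i][codes[i]] for i in range(m))
```

Because the code of each vector selects exactly one entry per subspace, the encoding behaves like a sparse feature map, and linear scoring against it reduces to table lookups, which is the mechanism that can speed up learning as well as compression.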
Learning Equivariant Structured Output SVM Regressors. We introduce a method to learn equivariant functions with Support Vector Machines (SVMs). Examples include: a transformation-invariant multi-class classifier, learning to detect a rotating object without searching for the rotation, and learning to rank images of pedestrians invariantly to jitter and articulation.
Efficient additive kernels: The homogeneous kernel map. We introduce closed-form finite-dimensional feature maps approximating the additive kernels (intersection, Hellinger's, χ2, Jensen-Shannon, ...). By adding one line to your code you can use non-linear additive kernels as if they were linear, with vastly improved training and testing speed and compactness of the resulting models (code).
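A minimal numpy sketch of such a closed-form map for the χ2 kernel k(x, y) = 2xy/(x + y), whose spectrum is κ(ω) = sech(πω); the sampling period L and number of frequencies n below are illustrative choices, not the library's defaults:

```python
import numpy as np

def chi2_feature_map(x, n=3, L=0.5):
    """Approximate explicit feature map for the chi2 kernel
    k(x, y) = 2*x*y/(x + y): sample the kernel spectrum
    kappa(omega) = sech(pi*omega) at frequencies j*L, j = 0..n,
    giving a (2n+1)-dimensional feature per input component.
    Assumes x > 0 elementwise."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    kappa = lambda w: 1.0 / np.cosh(np.pi * w)
    feats = [np.sqrt(x * L * kappa(0.0))]
    for j in range(1, n + 1):
        w = j * L
        scale = np.sqrt(2.0 * x * L * kappa(w))
        feats.append(scale * np.cos(w * np.log(x)))
        feats.append(scale * np.sin(w * np.log(x)))
    return np.stack(feats, axis=-1)  # shape (..., 2n+1)
```

The dot product of two mapped values then approximates the exact kernel, so a linear SVM on the mapped features behaves like a non-linear χ2 SVM on the originals.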