Support Vector Machines (SVM)

Support Vector Machines (SVMs) are one of the most popular types of discriminative classifiers. VLFeat implements two solvers, SGD and SDCA, capable of learning linear SVMs on a large scale. These linear solvers can be combined with explicit feature maps to learn non-linear models as well. The solvers support a few variants of the standard SVM formulation, including using loss functions other than the hinge loss.
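Concretely, in the standard hinge-loss formulation both solvers minimize a regularized empirical risk of the form (writing it here for reference; other loss functions replace the max term):

```latex
E(\mathbf{w}) = \frac{\lambda}{2}\|\mathbf{w}\|^2
+ \frac{1}{n}\sum_{i=1}^{n} \max\{0,\; 1 - y_i \langle \mathbf{w}, \mathbf{x}_i \rangle\}
```

where the $\mathbf{x}_i$ are the data points, the $y_i \in \{-1, +1\}$ their labels, and $\lambda$ the regularization parameter passed to the solver.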

Getting started demonstrates how to use VLFeat to learn an SVM. Information on SVMs and the corresponding optimization algorithms as implemented by VLFeat is given in the sections that follow.

Getting started

This section demonstrates how to learn an SVM using VLFeat. SVM learning is implemented by the VlSvm object type. Let's start with a complete example:

#include <stdio.h>
#include <vl/svm.h>

int main()
{
  vl_size const numData = 4 ;
  vl_size const dimension = 2 ;

  /* Four two-dimensional data points. */
  double x [dimension * numData] = {
    0.0, -0.5,
    0.6, -0.3,
    0.0,  0.5,
    0.6,  0.0} ;

  /* One binary label per data point. */
  double y [numData] = {1, 1, -1, 1} ;
  double lambda = 0.01 ;
  double const * model ;
  double bias ;

  /* Create the SVM object and train it with the SGD solver. */
  VlSvm * svm = vl_svm_new(VlSvmSolverSgd,
                           x, dimension, numData,
                           y,
                           lambda) ;
  vl_svm_train(svm) ;

  model = vl_svm_get_model(svm) ;
  bias = vl_svm_get_bias(svm) ;

  printf("model w = [ %f , %f ] , bias b = %f \n",
         model[0],
         model[1],
         bias) ;

  vl_svm_delete(svm) ;
  return 0 ;
}

This code learns a binary linear SVM using the SGD algorithm on four two-dimensional points, using 0.01 as the regularization parameter.

VlSvmSolverSdca can be specified in place of VlSvmSolverSgd in order to use the SDCA algorithm instead.

Convergence and other diagnostic information can be obtained after training by using the vl_svm_get_statistics function. Algorithms regularly check for convergence (usually after each pass over the data). The vl_svm_set_diagnostic_function can be used to specify a callback to be invoked when diagnostics are run. This can be used, for example, to print information on the screen as the algorithm progresses.

Training terminates after a maximum number of iterations (vl_svm_set_max_num_iterations) or once a convergence criterion falls below a threshold (vl_svm_set_epsilon). The meaning of these may depend on the specific algorithm (see Support Vector Machines (SVM) for further details).

VlSvm is quite a powerful object. Algorithms only need to perform inner product and accumulation operations on the data (see Advanced SVM topics). This is used to abstract from the data type and support almost anything by specifying just two functions (vl_svm_set_data_functions).

A simple interface to this advanced functionality is provided by the VlSvmDataset object. It natively supports the float and double data types, as well as applying the homogeneous kernel map on the fly (Homogeneous kernel map). This is exemplified in Getting started.