Cornerness measures

The goal of a cornerness measure (Detection using a cornerness measure) is to associate to an image patch a score proportional to how strongly the patch contains a certain structure, for example a corner or a blob. This page reviews the most important cornerness measures implemented in VLFeat: the Harris measure, the determinant of the Hessian, and the Laplacian of Gaussian together with its Difference of Gaussians approximation.

This page makes use of notation introduced in Differential and integral image operations.

Harris corners

This section introduces the first of the cornerness measures \(\mathcal{F}[\ell]\). Recall (Detection using a cornerness measure) that the goal of this functional is to respond strongly to images \(\ell\) of corner-like structures.

Rather than explicitly encoding the appearance of corners, the idea of the Harris measure is to label as a corner any image patch whose appearance is sufficiently distinctive to allow accurate localization. In particular, consider an image patch \(\ell(\bx), \bx\in\Omega\), where \(\Omega\) is a smooth circular window of radius approximately \(\sigma_i\); a necessary condition for the patch to allow accurate localization is that even a small translation \(\ell(\bx+\delta)\) causes the appearance to vary significantly (if not, the origin and the location \(\delta\) would not be distinguishable from the image alone). This variation is measured by the sum of squared differences

\[ E(\delta) = \int g_{\sigma_i^2}(\bx) (\ell_{\sigma_d^2}(\bx+\delta) - \ell_{\sigma_d^2}(\bx))^2 \,d\bx \]

Note that images are compared at scale \(\sigma_d\), known as the differentiation scale for reasons that will become clear in a moment, and that the squared differences are summed over a window softly defined by \(\sigma_i\), also known as the integration scale. This function can be approximated as \(E(\delta)\approx \delta^\top M[\ell;\sigma_i^2,\sigma_d^2] \delta\), where

\[ M[\ell;\sigma_i^2,\sigma_d^2] = \int g_{\sigma_i^2}(\bx) (\nabla \ell_{\sigma_d^2}(\bx)) (\nabla \ell_{\sigma_d^2}(\bx))^\top \, d\bx \]

is the so-called structure tensor.

A corner is identified when the sum of squared differences \(E(\delta)\) is large for displacements \(\delta\) in all directions. This condition is obtained when both the eigenvalues \(\lambda_1,\lambda_2\) of the structure tensor \(M\) are large. The Harris cornerness measure captures this fact:

\[ \operatorname{Harris}[\ell;\sigma_i^2,\sigma_d^2] = \det M - \kappa \operatorname{trace}^2 M = \lambda_1\lambda_2 - \kappa (\lambda_1+\lambda_2)^2 \]
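For illustration, the following is a minimal C sketch (hypothetical helper names, not the VLFeat API) that evaluates the Harris score at a point from the entries of the structure tensor, assuming the Gaussian-weighted gradient products have already been accumulated:

```c
/* Hypothetical helper (not the VLFeat API). The structure tensor at the
 * point of interest is
 *
 *   M = [ a  b ]
 *       [ b  c ]
 *
 * where a, b, c are the Gaussian-weighted averages of Ix*Ix, Ix*Iy and
 * Iy*Iy (derivatives at the differentiation scale, weights at the
 * integration scale). */
double
harris_score (double a, double b, double c, double kappa)
{
  double det = a * c - b * b ;         /* lambda1 * lambda2 */
  double trace = a + c ;               /* lambda1 + lambda2 */
  return det - kappa * trace * trace ; /* kappa is commonly around 0.05 */
}
```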

Harris in the warped domain

The cornerness measure of a feature at location \(u\) (recall that locations \(u\) are in general defined as image warps) should be computed after normalizing the image (by applying to it the warp \(u^{-1}\)). This section shows that, for affine warps, the Harris cornerness measure can be computed directly in the Gaussian affine scale space of the image. In particular, for similarities, it can be computed in the standard Gaussian scale space.

To this end, let \(u=(A,T)\) be an affine warp identifying a feature location in image \(\ell(\bx)\). Let \(\bar\ell(\bar\bx) = \ell(A\bar\bx+T)\) be the normalized image and rewrite the structure tensor of the normalized image as follows:

\[ M[\bar\ell; \bar\Sigma_i, \bar\Sigma_d] = M[\bar\ell; \bar\Sigma_i, \bar\Sigma_d](\mathbf{0}) = \left[ g_{\bar\Sigma_i} * (\nabla\bar\ell_{\bar\Sigma_d}) (\nabla\bar\ell_{\bar\Sigma_d})^\top \right](\mathbf{0}) \]

This notation emphasizes that the structure tensor is obtained by taking derivatives and convolutions of the image. Using the fact that \(\nabla g_{\bar\Sigma} * \bar\ell = A^\top (\nabla g_{A\bar\Sigma A^\top} * \ell) \circ (A,T)\) and that \(g_{\bar\Sigma} * \bar \ell = (g_{A\bar\Sigma A^\top} * \ell) \circ (A,T)\), we get the equivalent expression:

\[ M[\bar\ell; \bar\Sigma_i, \bar\Sigma_d](\mathbf{0}) = A^\top \left[ g_{A\bar\Sigma_i A^\top} * (\nabla\ell_{A\bar\Sigma_dA^\top})(\nabla\ell_{A\bar\Sigma_d A^\top})^\top \right](A\mathbf{0}+T) A. \]

In other words, the structure tensor of the normalized image can be computed as:

\[ M[\bar\ell; \bar\Sigma_i, \bar\Sigma_d](\mathbf{0}) = A^\top M[\ell; \Sigma_i, \Sigma_d](T) A, \quad \Sigma_{i} = A\bar\Sigma_{i}A^\top, \quad \Sigma_{d} = A\bar\Sigma_{d}A^\top. \]

This equation makes it possible to compute the structure tensor of features at all locations directly in the original image. In particular, features at all translations \(T\) can be evaluated efficiently by computing convolutions and derivatives of the image \(\ell_{A\bar\Sigma_dA^\top}\).

A case of particular interest is when \(\bar\Sigma_i= \bar\sigma_i^2 I\) and \(\bar\Sigma_d = \bar\sigma_d^2 I\) are both isotropic covariance matrices and the affine transformation is a similarity \(A=sR\). Using the fact that \(\det\left( s^2 R^\top M R \right)= s^4 \det M\) and \(\operatorname{tr}\left(s^2 R^\top M R\right) = s^2 \operatorname{tr} M\), one obtains the relation

\[ \operatorname{Harris}[\bar \ell;\bar\sigma_i^2,\bar\sigma_d^2] = s^4 \operatorname{Harris}[\ell;s^2\bar\sigma_i^2,s^2\bar\sigma_d^2](T). \]

This equation indicates that, for similarity transformations, not only the structure tensor, but directly the Harris cornerness measure can be computed on the original image and then be transferred back to the normalized domain. Note, however, that this requires rescaling the measure by the factor \(s^4\).

Another important consequence of this relation is that the Harris measure is invariant to pure image rotations. It cannot, therefore, be used to associate an orientation to the detected features.
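In code, the transfer back to the normalized domain is just a rescaling; the following minimal sketch (hypothetical helper, not the VLFeat API) assumes the Harris score has already been evaluated in the original image at translation \(T\) and scales \(s^2\bar\sigma_i^2, s^2\bar\sigma_d^2\):

```c
/* Hypothetical helper (not the VLFeat API): rescale a Harris score computed
 * in the original image back to the normalized domain of a similarity warp
 * A = s R. */
double
harris_in_normalized_domain (double harris_in_original_image, double s)
{
  double s2 = s * s ;
  return s2 * s2 * harris_in_original_image ; /* rescale by s^4 */
}
```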

Hessian blobs

The determinant of the Hessian cornerness measure is given by the determinant of the Hessian of the (smoothed) image:

\[ \operatorname{DetHess}[\ell;\sigma_d^2] = \det H_{g_{\sigma_d^2} * \ell}(\mathbf{0}) \]

This number is large and positive if the image is locally curved (peaked), which roughly corresponds to blob-like structures in the image. In particular, a large score requires the product of the eigenvalues of the Hessian to be large, which in turn requires both of them to have the same sign and a large absolute value.
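For illustration, a minimal C sketch (hypothetical helper, not the VLFeat API) of the score, given the second derivatives \(L_{xx}, L_{xy}, L_{yy}\) of the smoothed image at the point of interest:

```c
/* Hypothetical helper (not the VLFeat API): determinant of the Hessian
 *
 *   H = [ Lxx  Lxy ]
 *       [ Lxy  Lyy ]
 *
 * A large positive value indicates that both eigenvalues have the same
 * sign and a large absolute value. */
double
dethess_score (double Lxx, double Lxy, double Lyy)
{
  return Lxx * Lyy - Lxy * Lxy ;
}
```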

Hessian in the warped domain

Similarly to the Harris measure, it is possible to work with the Hessian measure on the original unnormalized image. As before, let \(\bar\ell(\bar\bx) = \ell(A\bar\bx+T)\) be the normalized image and rewrite the Hessian of the normalized image as follows:

\[ H_{g_{\bar\Sigma_d} * \bar\ell}(\mathbf{0}) = A^\top \left(H_{g_{\Sigma_d} * \ell}(T)\right) A. \]

Then, using the fact that \(\det(A^\top H A) = (\det A)^2 \det H\),

\[ \operatorname{DetHess}[\bar\ell;\bar\Sigma_d] = (\det A)^2 \operatorname{DetHess}[\ell;A\bar\Sigma_d A^\top](T). \]

In particular, for isotropic covariance matrices and similarity transformations \(A=sR\):

\[ \operatorname{DetHess}[\bar\ell;\bar\sigma_d^2] = s^4 \operatorname{DetHess}[\ell;s^2\bar\sigma_d^2](T) \]

Laplacian and Difference of Gaussians blobs

The Laplacian of Gaussian (LoG) or trace of the Hessian cornerness measure is given by the trace of the Hessian of the image:

\[ \operatorname{Lap}[\ell;\sigma_d^2] = \operatorname{tr} H_{g_{\sigma_d^2} * \ell}(\mathbf{0}) \]

Laplacian in the warped domain

Similarly to the Hessian measure, the Laplacian cornerness measure can often be computed efficiently for features at all locations directly in the original, unnormalized image domain. In particular, if the derivative covariance matrix \(\Sigma_d\) is isotropic and the warps are restricted to similarity transformations \(A=sR\), where \(R\) is a rotation and \(s\) a rescaling, one has

\[ \operatorname{Lap}[\bar\ell;\bar\sigma_d^2] = s^2 \operatorname{Lap}[\ell;s^2\bar\sigma_d^2](T) \]

Note that, compared to the Harris and determinant of the Hessian measures, the scaling factor for the Laplacian is \(s^2\) rather than \(s^4\).
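As an illustration, the following minimal sketch (hypothetical helper, not the VLFeat API) approximates the scale-normalized Laplacian response at an interior pixel of an image that has already been smoothed at variance \(\sigma^2 = s^2\bar\sigma_d^2\), using central differences for the trace of the Hessian; multiplying by the smoothing variance differs from the \(s^2\) factor above only by the constant \(\bar\sigma_d^2\):

```c
#include <assert.h>

/* Hypothetical helper (not the VLFeat API): scale-normalized Laplacian
 * response at interior pixel (x, y) of an image smoothed at variance
 * sigma2, using central differences and unit pixel spacing. */
float
laplacian_response (float const * image, int width, int height,
                    int x, int y, float sigma2)
{
  float lxx, lyy ;
  assert (x >= 1 && x <= width - 2 && y >= 1 && y <= height - 2) ;
  lxx = image [y * width + (x - 1)]
      - 2.0f * image [y * width + x]
      + image [y * width + (x + 1)] ;
  lyy = image [(y - 1) * width + x]
      - 2.0f * image [y * width + x]
      + image [(y + 1) * width + x] ;
  return sigma2 * (lxx + lyy) ;
}
```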

Laplacian as a matched filter

The Laplacian is given by the trace of the Hessian operator. Unlike the determinant of the Hessian, this is a linear operation. This means that computing the Laplacian cornerness measure amounts to applying a linear filter to the image. This filter can be interpreted as a corner template, so the Laplacian cornerness measure can be seen as matching this template at all possible image locations.

To see this formally, compute the Laplacian score in the input image domain:

\[ \operatorname{Lap}[\bar\ell;\bar\sigma_d^2] = s^2 \operatorname{Lap}[\ell;s^2\bar\sigma_d^2](T) = s^2 (\Delta g_{s^2\bar\sigma_d^2} * \ell)(T) \]

The Laplacian filter is obtained by moving the Laplacian operator from the image to the Gaussian smoothing kernel:

\[ s^2 (\Delta g_{s^2\bar\sigma_d^2} * \ell) = (s^2 \Delta g_{s^2\bar\sigma_d^2}) * \ell \]

Note that the filter is rescaled by the factor \(s^2\); sometimes this factor is incorporated into the Laplacian operator, yielding the so-called normalized Laplacian.

The Laplacian of Gaussian is also known, up to a sign, as the Mexican hat function and has the expression:

\[ \Delta g_{\sigma^2}(x,y) = \frac{x^2+y^2 - 2 \sigma^2}{\sigma^4} g_{\sigma^2}(x,y). \]

This filter, which acts as a corner template, resembles a blob (a dark disk surrounded by a bright ring).
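For reference, a small C helper (hypothetical, not the VLFeat API) that evaluates this filter at a point directly from the expression above:

```c
#include <math.h>

/* Hypothetical helper (not the VLFeat API): Laplacian of Gaussian filter
 * value at (x, y) for variance sigma2 = sigma^2. */
double
log_kernel (double x, double y, double sigma2)
{
  double pi = 3.14159265358979323846 ;
  double r2 = x * x + y * y ;
  double g = exp (- r2 / (2.0 * sigma2)) / (2.0 * pi * sigma2) ; /* 2-D Gaussian */
  return (r2 - 2.0 * sigma2) / (sigma2 * sigma2) * g ;
}
```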

Difference of Gaussians

The Difference of Gaussians (DoG) cornerness measure can be interpreted as an approximation of the Laplacian that is easy to obtain once a scale space of the input image has been computed.

As noted above, the Laplacian cornerness of the normalized feature can be computed directly from the input image by convolving the image by the normalized Laplacian of Gaussian filter \(s^2 \Delta g_{s^2\bar\sigma_d^2}\).

Like the other derivative operators, this filter is simple to discretize. However, in practice it is often approximated by the Difference of Gaussians (DoG) instead. The approximation is obtained from the easily-proved identity:

\[ \frac{\partial}{\partial \sigma} g_{\sigma^2} = \sigma \Delta g_{\sigma^2}. \]
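As a quick check, differentiating the Gaussian with respect to \(\sigma\) and using the expression for \(\Delta g_{\sigma^2}\) given above yields

\[ \frac{\partial}{\partial \sigma} g_{\sigma^2}(x,y) = \left( \frac{x^2+y^2}{\sigma^3} - \frac{2}{\sigma} \right) g_{\sigma^2}(x,y) = \sigma\, \frac{x^2+y^2-2\sigma^2}{\sigma^4}\, g_{\sigma^2}(x,y) = \sigma \Delta g_{\sigma^2}(x,y). \]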

This identity indicates that computing the normalized Laplacian of Gaussian filter is, in the limit, the same as taking the difference between Gaussian filters of slightly different standard deviations \(\sigma\) and \(\kappa\sigma\), where \(\kappa \approx 1\):

\[ \sigma^2 \Delta g_{\sigma^2} \approx \sigma \frac{g_{(\kappa\sigma)^2} - g_{\sigma^2}}{\kappa\sigma - \sigma} = \frac{1}{\kappa - 1} (g_{(\kappa\sigma)^2} - g_{\sigma^2}). \]

One nice property of this expression is that the factor \(\sigma\) cancels out on the right-hand side. Usually, the scales \(\sigma\) and \(\kappa\sigma\) are pre-computed in the image scale space, with successive scales sampled with uniform geometric spacing, so that the factor \(\kappa\) is the same for all scales. Then, up to an overall scaling factor, the LoG cornerness measure can be obtained by taking the difference of successive scale space images \(\ell_{(\kappa\sigma)^2}\) and \(\ell_{\sigma^2}\).
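As a minimal sketch (hypothetical helper, not the VLFeat API), assuming two successive scale-space levels are stored as dense arrays of equal size, the DoG response is their scaled difference:

```c
/* Hypothetical helper (not the VLFeat API): DoG response from two successive
 * scale-space levels l_{sigma^2} and l_{(kappa sigma)^2}. Up to the factor
 * 1 / (kappa - 1), which is constant when scales are geometrically spaced,
 * this approximates the normalized LoG response. */
void
dog_response (float const * level_sigma, float const * level_kappa_sigma,
              float * dog, int num_pixels, float kappa)
{
  int i ;
  float scale = 1.0f / (kappa - 1.0f) ;
  for (i = 0 ; i < num_pixels ; ++i) {
    dog [i] = scale * (level_kappa_sigma [i] - level_sigma [i]) ;
  }
}
```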