Face Recognition using Eigenfaces and Distance Classifiers: A Tutorial


February 11, by Shubhendu Trivedi. Eigenfaces is probably one of the simplest face recognition methods, and also rather old; so why worry about it at all? Because, while it is simple, it works quite well.

I was thinking of writing a post based on face recognition in Bees next, so this should serve as a basis for the next post too. The idea of this post is to give a simple introduction to the topic with an emphasis on building intuition.


For more rigorous treatments, look at the references. Humans recognize faces effortlessly, yet we are not even close to an understanding of how we manage to do it.

Damage to the temporal lobe can result in a condition in which the affected person loses the ability to recognize faces. In one of my previous posts, which had links to a series of lectures by Dr Vilayanur Ramachandran, I linked to one lecture in which he talks about this condition in detail. All this aside, not much is known about how the perceptual information for a face is encoded in the brain either. Eigenfaces has a parallel to one of the most fundamental ideas in mathematics and signal processing: the Fourier series.

This parallel is also very helpful to build an intuition of what Eigenfaces (or PCA) sort of does, and hence should be exploited. So we review the Fourier series in a few sentences. The representation of a signal as a linear combination of complex sinusoids is called the Fourier series:

$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t}$

The coefficients are given as:

$c_n = \frac{1}{T} \int_{T} f(t)\, e^{-j n \omega_0 t}\, dt$

It is common to define the above using $\omega_0 = \frac{2\pi}{T}$, where $T$ is the period of the signal. An example that illustrates the Fourier series is the approximation of a square wave:

A square wave (shown in black) can be approximated by using a series of sines and cosines (the result of this summation is shown in blue). Clearly, in the limiting case, we could reconstruct the square wave exactly with nothing but sines and cosines. Though not exactly the same, the idea behind Eigenfaces is similar. The aim is to represent a face as a linear combination of a set of basis images (in the Fourier series the bases were simply sines and cosines):

$\Phi_i \approx \sum_{j=1}^{K} w_j u_j$
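As a small aside, here is a minimal numpy sketch (my own illustration, not from the original post) of how the partial sums of this series approach a square wave using only its odd sine harmonics:

    import numpy as np

    # A square wave has the Fourier series (4/pi) * sum over odd k of sin(k*t)/k.
    t = np.linspace(0, 2 * np.pi, 1000)
    square = np.sign(np.sin(t))              # target square wave
    approx = np.zeros_like(t)
    for k in range(1, 20, 2):                # first ten odd harmonics
        approx += (4.0 / np.pi) * np.sin(k * t) / k

    # Far from the jumps the partial sum is already close to the square wave.
    print(abs(square[250] - approx[250]))    # small error near t = pi/2

Adding more harmonics drives the error down everywhere except right at the discontinuities, where some ringing persists.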

Here $\Phi_i$ represents a face with the mean subtracted from it, the $w_j$ represent weights, and the $u_j$ the eigenvectors. This was just a way of stating at the start what we have to do.

The big idea is that you want to find a set of images, called Eigenfaces (which are nothing but eigenvectors of the training data), such that if you weight them and add them together you get back the image that you are interested in. Adding images together should give you back an image, right?

The way you weight these basis images, i.e. the weight vector, can then be used as a compact representation of the face. In the above figure, a face that was in the training database was reconstructed by taking a weighted summation of all the basis faces and then adding the mean face to the result.
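To make that step concrete, here is a minimal sketch of the reconstruction; the arrays eigenfaces, weights and mean_face are placeholders of mine, and their computation is derived later in the post:

    import numpy as np

    def reconstruct_face(weights, eigenfaces, mean_face):
        # eigenfaces: (K, N*N) array, one flattened basis face per row
        # weights:    (K,) weight vector of a single face
        # mean_face:  (N*N,) average face vector
        return mean_face + weights @ eigenfaces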


The faces used here have just been picked randomly from a pool of 70 by me.

An Information Theory Approach: First of all, the idea of Eigenfaces considers face recognition as a 2-D recognition problem; this is based on the assumption that, at the time of recognition, faces will be mostly upright and frontal. Because of this, detailed 3-D information about the face is not needed, which reduces complexity by a significant bit. Before the technique for face recognition using Eigenfaces was introduced, most of the face recognition literature dealt with local and intuitive features, such as the distance between the eyes, the ears and similar features.

Eigenfaces, inspired by a method used in an earlier paper, was a significant departure from the idea of using only intuitive features. It uses an information theory approach wherein the most relevant face information is encoded in a group of faces that will best distinguish the faces. It transforms the face images into a set of basis faces, which essentially are the principal components of the face images.

This is particularly useful for reducing the computational effort. Such an information theory approach will encode not only the local features but also the global features of the faces.

Such features may or may not be intuitively understandable. When we find the principal components or the Eigenvectors of the image set, each Eigenvector has some contribution from EACH face used in the training set.

So the Eigenvectors also have a face-like appearance. They look ghost-like, and are called ghost images or Eigenfaces. Every image in the training set can be represented as a weighted linear combination of these basis faces. The number of Eigenfaces that we would obtain would therefore be equal to the number of images in the training set; let us take this number to be $M$. As mentioned a paragraph before, some of these Eigenfaces are more important than others in encoding the variation in face images, so we could also approximate faces using only the $K$ most significant Eigenfaces.

To summarize: There are $M$ images in the training set. There are $K$ most significant Eigenfaces, with $K \le M$, using which we can satisfactorily approximate a face. All images are $N \times N$ matrices, which can be represented as $N^2$-dimensional vectors; the same logic would apply to images that do not have equal length and breadth. For example, an image of size $N \times N$ can be represented as a vector of dimension $N^2$, or simply as a point in an $N^2$-dimensional space.

Algorithm for Finding Eigenfaces:

STEP 1: Obtain $M$ training face images $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$. It is very important that the images are centered.

STEP 2: Represent every image $\Gamma_i$ as an $N^2$-dimensional vector, by stacking its pixels into a single column.
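As a rough sketch of what STEPS 1 and 2 might look like in numpy (the directory name, the file format and the use of PIL here are illustrative assumptions of mine, not part of the original post):

    import glob
    import numpy as np
    from PIL import Image

    # STEPS 1 and 2: load M centered, same-sized grayscale faces and flatten
    # each N x N image into an N*N vector. "faces/*.pgm" is a hypothetical path.
    paths = sorted(glob.glob("faces/*.pgm"))
    gamma = np.array([np.asarray(Image.open(p).convert("L"), dtype=np.float64).ravel()
                      for p in paths])      # shape (M, N*N), one image per row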

STEP 3: Find the average face vector $\Psi$:

$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i$

STEP 4: Subtract the mean face from each face vector to get a set of mean-subtracted vectors:

$\Phi_i = \Gamma_i - \Psi$

STEP 5: Find the covariance matrix $C$:

$C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^T = AA^T$, where $A = [\Phi_1\ \Phi_2\ \cdots\ \Phi_M]$

Note that $A$ is an $N^2 \times M$ matrix, so $C$ is an $N^2 \times N^2$ matrix.
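Continuing the sketch from above (gamma holds one flattened training image per row), STEPS 3 to 5 are only a few lines; note that the full covariance matrix is deliberately never formed, for reasons explained next:

    # STEP 3: the mean face; STEP 4: the mean-subtracted faces.
    psi = gamma.mean(axis=0)                 # (N*N,) average face vector
    phi = gamma - psi                        # (M, N*N) mean-subtracted faces
    A = phi.T                                # (N*N, M), columns are the Phi_i
    # STEP 5: C = A @ A.T is N*N x N*N -- far too large to build explicitly.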

STEP 6: We now need to calculate the Eigenvectors $u_i$ of $C$. However, note that $C$ is an $N^2 \times N^2$ matrix, so it would return $N^2$ Eigenvectors, each being $N^2$-dimensional. For an image of any realistic size this number is HUGE. The computations required would easily make your system run out of memory. How do we get around this problem? Instead of the matrix $AA^T$, consider the matrix $L = A^TA$. Remember $A$ is an $N^2 \times M$ matrix, thus $L$ is an $M \times M$ matrix. Now, from the properties of matrices, it follows that if $v_i$ is an eigenvector of $A^TA$, so that $A^TA\,v_i = \mu_i v_i$, then premultiplying both sides by $A$ gives $AA^T(Av_i) = \mu_i(Av_i)$. In other words, the eigenvectors of $C = AA^T$ can be obtained from those of $L$ by the relation $u_i = Av_i$ (we have found $A$ earlier).
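In the running sketch, this trick is a couple of lines; numpy's eigh is appropriate here since $A^TA$ is symmetric:

    # STEP 6: eigendecompose the small M x M matrix L = A^T A instead of C.
    L = A.T @ A                              # (M, M)
    eigvals, V = np.linalg.eigh(L)           # ascending eigenvalues; columns are v_i
    U = A @ V                                # u_i = A v_i, eigenvectors of C = A A^T
    U /= np.linalg.norm(U, axis=0)           # rescale so that each ||u_i|| = 1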



This implies that using $L = A^TA$ we can calculate the $M$ best Eigenvectors of $C = AA^T$. Remember that $M \ll N^2$, as $M$ is simply the number of training images. So: find the $M$ best eigenvectors $v_i$ of $L$, then obtain the Eigenvectors of $C$ by using the relation $u_i = Av_i$ discussed above. Also keep in mind that the $u_i$ should be normalized so that $\|u_i\| = 1$.

STEP 7: Select the $K$ best Eigenvectors; the selection of these Eigenvectors is done heuristically (typically by keeping those with the largest eigenvalues).
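STEP 7 in the same sketch; since eigh returns eigenvalues in ascending order, the most significant Eigenfaces sit in the last columns:

    # STEP 7: keep the K eigenvectors with the largest eigenvalues.
    K = 20                                   # chosen heuristically
    eigenfaces = U[:, -K:][:, ::-1].T        # (K, N*N), most significant first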

The Eigenvectors found at the end of the previous section, when converted back to matrices in a process that is the reverse of STEP 2, have a face-like appearance. Since these are Eigenvectors and have a face-like appearance, they are called Eigenfaces. Sometimes, they are also called ghost images because of their weird appearance. Now each face in the training set (minus the mean) can be represented as a linear combination of these Eigenvectors:

$\Phi_i \approx \sum_{j=1}^{K} w_j^{(i)} u_j$, where $w_j^{(i)} = u_j^T \Phi_i$
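To actually look at the ghost images, each Eigenvector only needs to be un-flattened, the reverse of STEP 2 (this assumes square $N \times N$ images, as above):

    # Reshape a flattened Eigenface back into an N x N image for display.
    n = int(np.sqrt(eigenfaces.shape[1]))    # recover N from the vector length
    ghost = eigenfaces[0].reshape(n, n)      # the most significant Eigenface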


Each normalized training image $\Phi_i$ is represented in this basis as a weight vector $\Omega_i = [w_1^{(i)}, w_2^{(i)}, \ldots, w_K^{(i)}]^T$. This means we have to calculate such a vector corresponding to every image in the training set and store them as templates.
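In the sketch, computing all $M$ template weight vectors at once is a single matrix product:

    # Each row of templates is the weight vector Omega_i of one training image,
    # with entries w_j = u_j . Phi_i for j = 1..K.
    templates = phi @ eigenfaces.T           # (M, K)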

Now consider that we have found the Eigenfaces for the training images, selected the $K$ most relevant Eigenfaces, and computed and stored the weight vectors corresponding to each training image. If an unknown probe face $\Gamma$ is to be recognized, then: We normalize the incoming probe as $\Phi = \Gamma - \Psi$. The normalized probe can then simply be represented as $\Omega = [u_1^T\Phi,\ u_2^T\Phi,\ \ldots,\ u_K^T\Phi]^T$. After the feature vector (weight vector) for the probe has been found, we simply need to classify it. For the classification task we could simply use some distance measure, or use a classifier like support vector machines (something that I would cover in an upcoming post).
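The corresponding sketch for a probe (probe is assumed to be a flattened grayscale image of the same size as the training images):

    def project_probe(probe, psi, eigenfaces):
        # Normalize the probe by subtracting the mean face, then project it
        # onto the Eigenface basis to get its weight vector Omega.
        phi_probe = probe - psi              # (N*N,)
        return eigenfaces @ phi_probe        # (K,)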

In case we use distance measures, classification is done as: find $e_r = \min_k \|\Omega - \Omega_k\|$. This means we take the weight vector of the probe we have just found and compute its distance to the weight vectors associated with each of the training images.

And if $e_r < \Theta$, where $\Theta$ is a threshold chosen heuristically, then we can say that the probe image is recognized as the training image with which it gives the lowest score. If however $e_r > \Theta$, then the probe does not belong to the database.
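With the templates from above, this is a nearest-neighbor search with a cutoff (the value of theta is dataset-dependent and would be tuned by hand):

    def classify(omega, templates, theta):
        # Euclidean distance from the probe's weight vector to every template.
        dists = np.linalg.norm(templates - omega, axis=1)   # (M,)
        best = int(np.argmin(dists))
        if dists[best] < theta:
            return best                      # index of the recognized training image
        return None                          # probe is not in the database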

I will come back to the point of how the threshold should be chosen. For distance measures, the most commonly used measure is the Euclidean distance, the other being the Mahalanobis distance; the Mahalanobis distance generally gives superior performance. The Euclidean distance is probably the most widely used distance metric.

It is a special case of a general class of norms and is given as:

$d_E(x, y) = \|x - y\|_2 = \sqrt{\sum_i (x_i - y_i)^2}$

The Mahalanobis distance is a better distance measure when it comes to pattern recognition problems. It takes into account the covariance between the variables, and hence removes the problems related to scale and correlation that are inherent in the Euclidean distance.

It is given as:

$d_M(x, y) = \sqrt{(x - y)^T S^{-1} (x - y)}$

where $S$ is the covariance matrix of the variables involved.
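A minimal numpy version of the same comparison, estimating $S$ from the training weight vectors (this estimation choice is my assumption; the original post does not specify it):

    # Mahalanobis distance between weight vectors, with S estimated from the
    # stored templates: d(x, y) = sqrt((x - y)^T S^{-1} (x - y)).
    S = np.cov(templates, rowvar=False)      # (K, K) covariance of the weights
    S_inv = np.linalg.inv(S)

    def mahalanobis(x, y):
        d = x - y
        return float(np.sqrt(d @ S_inv @ d))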