- Oct 10, 2000
- 2,689
- 0
- 0
I have an engineering project that deals with pattern recognition, mainly image recognition and classification. Given a set of grayscale frontal images of people's faces and a list of names, the task is to recognize and classify each image and match it to a name, assuming that you (the programmer) know the lineup. The images are m x n pixels...
Let's take a simple example - the English alphabet, or the digits 0-9. Humans are "trained" to recognize those characters and can tell instantly what each letter means. But how would a computer extract "129" from three 128x128-pixel images whose dark pixels form the shapes "1", "2", and "9"? That's just a simple example; what I'd like some help on is how to approach the first problem.
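To make the digit example concrete, here's a minimal sketch of the simplest possible approach: store one template image per character and classify a new image by nearest-neighbor distance on raw pixels. The 4x4 "images" and the `flatten`/`classify` helpers are made up for illustration; a real system would use the full 128x128 grids.

```python
def flatten(img):
    """Flatten a 2-D list of pixels into a single feature vector."""
    return [p for row in img for p in row]

def distance(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(img, templates):
    """Return the label of the template nearest to img in pixel space."""
    vec = flatten(img)
    return min(templates, key=lambda label: distance(vec, flatten(templates[label])))

# Toy templates for "1" and "7" (1 = dark pixel).
templates = {
    "1": [[0, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 1, 0, 0]],
    "7": [[1, 1, 1, 1],
          [0, 0, 1, 0],
          [0, 1, 0, 0],
          [0, 1, 0, 0]],
}

# A "1" with one flipped pixel is still closer to the "1" template.
noisy_one = [[0, 1, 0, 0],
             [0, 1, 1, 0],
             [0, 1, 0, 0],
             [0, 1, 0, 0]]
print(classify(noisy_one, templates))  # prints "1"
```

This breaks down quickly under shifts, scaling, and handwriting variation, which is exactly why trained classifiers (neural nets and the like) come into the picture.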
What would be the best way to do it?
I'm familiar with neural network theory and the training algorithms; I'm just wondering what the best method would be for facial recognition.
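One classic baseline worth knowing for the face problem is "eigenfaces": run PCA over the gallery of face images, project each face into the low-dimensional eigenface space, and match a probe image to the nearest gallery face. The sketch below uses synthetic random "faces" of 8x8 pixels purely to show the mechanics; the `train`/`identify` helper names and the 2-component choice are my own, not a standard API.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(faces, n_components):
    """faces: (num_images, num_pixels) array.

    Returns the mean face, an eigenface basis, and each gallery
    face's coordinates (weights) in that basis.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # each row is one "eigenface"
    weights = centered @ basis.T     # project gallery into face space
    return mean, basis, weights

def identify(image, mean, basis, weights, names):
    """Project a probe image and return the closest gallery name."""
    w = (image - mean) @ basis.T
    dists = np.linalg.norm(weights - w, axis=1)
    return names[int(np.argmin(dists))]

# Synthetic "gallery": three distinct random faces of 8x8 = 64 pixels.
names = ["alice", "bob", "carol"]
gallery = rng.normal(size=(3, 64))
mean, basis, weights = train(gallery, n_components=2)

# A probe built from alice's image plus small noise matches "alice".
probe = gallery[0] + rng.normal(scale=0.05, size=64)
print(identify(probe, mean, basis, weights, names))  # prints "alice"
```

The appeal of this over feeding raw pixels into a neural net is that PCA collapses an m x n image into a handful of coefficients, so the matching (or any classifier you train on top) works in a far smaller space.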