4 Two-dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images, we first take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localization. Depending on the application, if the position of the face within the image is known beforehand (for example, a cooperative subject in a door access system), then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localization here, with a brief discussion of face detection in the literature review.

The eye localization method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of face recognition accuracy, and not a product of the performance of the eye localization routine, all image alignments are manually checked and any errors corrected prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned images of faces is taken, and each image is cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes, used as a template for eye detection.

Both eyes are included in a single template, rather than searching for each eye individually, as the characteristic symmetry of the eyes on either side of the nose provides a useful feature that helps distinguish the eyes from other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that the eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes, because in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye sockets; the area of skin below the eyes helps to distinguish the eyes from the eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference is taken against the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure with a smaller template of the individual left and right eyes then refines each eye position.
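As a concrete illustration, the following is a minimal numpy sketch of the procedure described above: averaging pre-aligned crops into a template, then scanning it across an image using the sum of absolute differences. The function names and the brute-force search are our own illustrative choices, not the implementation used in this work.

```python
import numpy as np

def build_eye_template(eye_crops):
    # Average a training set of manually pre-aligned eye-region crops
    # (2-D arrays of identical shape) into a single template image.
    return np.mean(np.stack([c.astype(np.float64) for c in eye_crops]), axis=0)

def locate_eyes(image, template):
    # Slide the template over the image, scoring each window by the
    # sum of absolute differences (SAD); the lowest-scoring window is
    # taken as the region of interest containing both eyes.  The same
    # routine, called with smaller left/right eye templates, refines
    # each individual eye position.
    img = image.astype(np.float64)
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            window = img[y:y + th, x:x + tw]
            score = np.abs(window - template).sum()
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos  # top-left corner of the best-matching window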
This basic template-based method of eye localization, although providing fairly precise localizations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localization is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localizations, we compute the average distance from the eye template (Figure 4-2, top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eyes, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top), indicating variance due to noise, and for failed detections (bottom), showing credible variance due to mis-detected features.

In the lower image (Figure 4-2, bottom), we have taken the set of failed localizations (images of the forehead, nose, cheeks, background etc. falsely detected by the localization routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasize the difference of the pupil regions for these failed matches and minimize the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector, as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.
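A sketch of how such a weights image might be derived and applied, assuming each input is an absolute-difference image |window - template| collected during training; the small epsilon guarding against division by zero is our own addition:

```python
import numpy as np

def build_weights(success_diffs, failure_diffs, eps=1e-6):
    # Average the absolute-difference images from successful and failed
    # localizations, then divide failure by success so that pixels which
    # separate true eyes from false positives (e.g. the pupils) dominate.
    success_avg = np.mean(np.stack(success_diffs), axis=0)
    failure_avg = np.mean(np.stack(failure_diffs), axis=0)
    return failure_avg / (success_avg + eps)  # eps avoids division by zero

def weighted_error(window, template, weights):
    # Weighted SAD: weights are applied to the difference image
    # before summing a total error.
    return (weights * np.abs(window - template)).sum()
```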
4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio), involving the direct comparison of pixel intensity values taken from facial images. We use the term "direct correlation" to encompass all techniques in which face images are compared directly, without any form of image-space analysis, weighting scheme or feature extraction, regardless of the distance metric used. Therefore, we do not imply that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (it is inversely related to Pearson's correlation and can be considered a scale- and translation-sensitive form of image correlation), as this is consistent with the contrast made between image-space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centers are located at two specified pixel coordinates, and the image cropped to remove any background information. These images are stored as grayscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and, again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision.
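In plain terms, the distance used here is d(q, g) = sqrt(sum_i (q_i - g_i)^2). A minimal sketch of this verification step follows; the image orientation (82 rows by 65 columns) and the function names are our own assumptions:

```python
import numpy as np

def to_vector(face_image):
    # Flatten an aligned, cropped grayscale face (82 rows x 65 columns
    # is assumed here) into a 5330-element intensity vector.
    return face_image.astype(np.float64).ravel()

def euclidean_distance(q, g):
    # d(q, g) = sqrt(sum_i (q_i - g_i)^2): lower means more similar.
    return float(np.linalg.norm(q - g))

def verify(query_image, gallery_image, threshold):
    # Accept the claimed identity when the distance falls below the threshold.
    d = euclidean_distance(to_vector(query_image), to_vector(gallery_image))
    return d <= threshold
```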
4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (e.g. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentation in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyze individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or a rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps a swipe card, proximity chip or PIN. This identifier is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image), which is compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision. The results of the test are calculated according to how many times the accept/reject decision is made correctly.

In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real-world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition that may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of difficult images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced.

Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm, CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.

Every image is compared with every other image; no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a dissimilarity score, the ground truth is used to determine whether the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people, and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or a false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances. These two error rates express the inadequacies of the system when operating at a specific threshold value.
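The test reduces to a handful of lines. A sketch under the assumptions above, where compare_faces is any dissimilarity function and the identity labels play the role of the filename-encoded person identifiers:

```python
import itertools
import numpy as np

def verification_test(images, person_ids, compare_faces):
    # Compare every image with every other exactly once (no self-pairs),
    # splitting the scores by whether the ground-truth identities match.
    same_scores, diff_scores = [], []
    for (i, a), (j, b) in itertools.combinations(enumerate(images), 2):
        score = compare_faces(a, b)
        if person_ids[i] == person_ids[j]:
            same_scores.append(score)
        else:
            diff_scores.append(score)
    return np.array(same_scores), np.array(diff_scores)

def error_rates(same_scores, diff_scores, threshold):
    # FRR: fraction of same-person scores classified as rejections.
    # FAR: fraction of different-person scores classified as acceptances.
    frr = float(np.mean(same_scores > threshold))
    far = float(np.mean(diff_scores <= threshold))
    return far, frr
```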
Ideally, both of these figures should be zero, but in reality reducing either the FAR or the FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional FAR/FRR pair, which, when plotted on a graph, produces the error rate curve shown below.

Figure 4-5 - Example error rate curve produced by the verification test.

The equal error rate (EER) can be seen as the point at which the FAR is equal to the FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real-world application: it is unlikely that any real system would use a threshold value such that the percentage of false acceptances was equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections, unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualizations of the same results: the ROC format uses the true acceptance rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualization of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curve as a function of the score threshold.
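Sweeping the threshold and reading off the crossing point can be sketched as follows, reusing the score lists from the verification test above; the linear threshold grid and the averaging at the closest approach of the two curves are our own simplifications:

```python
import numpy as np

def eer_curve(same_scores, diff_scores, steps=1000):
    # Vary the threshold over the full range of observed scores, recording
    # an (FAR, FRR) pair at each step; the EER is read off where the two
    # curves cross (here: their closest approach on the grid).
    scores = np.concatenate([same_scores, diff_scores])
    thresholds = np.linspace(scores.min(), scores.max(), steps)
    far = np.array([np.mean(diff_scores <= t) for t in thresholds])
    frr = np.array([np.mean(same_scores > t) for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    eer = (far[i] + frr[i]) / 2.0
    # For an ROC curve, plot TAR = 1.0 - frr against far instead.
    return thresholds, far, frr, eer
```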
The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations; hence a drop of 1% EER represents an additional 2,588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to roughly 0.28% (each image takes part in 719 of the 258,840 comparisons).