
2014

Selectively Guiding Visual Concept Discovery. Maggie Wigness, Bruce A Draper, and J Ross Beveridge. IEEE Winter Conference on Applications of Computer Vision (WACV). March 2014. (PDF)

Finding the Subspace Mean or Median to Fit Your Need. Tim Marrinan, J. Ross Beveridge, Bruce Draper, Michael Kirby, and Chris Peterson. The IEEE Conference on Computer Vision and Pattern Recognition. 2014. (PDF) (Online)

A flag representation for finite collections of subspaces of mixed dimensions. Bruce Draper, Michael Kirby, Justin Marks, Tim Marrinan, and Chris Peterson. Linear Algebra and its Applications. 2014. (Online) (Abstract)

Given a finite set of subspaces of R^n, perhaps of differing dimensions, we describe a flag of vector spaces (i.e. a nested sequence of vector spaces) that best represents the collection based on a natural optimization criterion, and we present an algorithm for its computation. The utility of this flag representation lies in its ability to represent a collection of subspaces of differing dimensions. When the set of subspaces all have the same dimension d, the flag mean is related to several commonly used subspace representations. For instance, the d-dimensional subspace in the flag corresponds to the extrinsic manifold mean. When the set of subspaces is both well clustered and equidimensional of dimension d, then the d-dimensional component of the flag provides an approximation to the Karcher mean. An intermediate matrix used to construct the flag can also be used to recover the canonical components at the heart of Multiset Canonical Correlation Analysis. Two examples utilizing the Carnegie Mellon University Pose, Illumination, and Expression Database (CMU-PIE) serve as visual illustrations of the algorithm.
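The flag construction described in this abstract can be sketched in a few lines of numpy. This is my own simplification under the assumption that each subspace is supplied as an orthonormal basis matrix: stack the bases and take an SVD; the ordered left singular vectors define the nested flag. It is an illustration of the idea, not the authors' released code.

```python
import numpy as np

def flag_mean(bases):
    """Sketch of a flag representing a collection of subspaces of R^n.

    bases: list of (n x d_i) arrays, each with orthonormal columns
    spanning one subspace (dimensions d_i may differ). Returns (U, s):
    the ordered columns of U define the nested flag V_1 < V_2 < ...,
    where V_j is the span of the first j columns; s gives the
    associated singular values (relative weights).
    """
    A = np.hstack(bases)                       # n x (d_1 + ... + d_k)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U, s
```

For equidimensional, well-clustered inputs the first d columns of U play the role of the d-dimensional mean subspace the abstract relates to the Karcher mean.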

Video Alignment to a Common Reference. Rahul Dutta, Bruce Draper, and J. Ross Beveridge. 2014 IEEE Winter Conference on Applications of Computer Vision (WACV). March 2014. (PDF) (Online)

2013

On the Existence of Face Quality Measures. P.J. Phillips, J.R. Beveridge, D.S. Bolme, B.A Draper, G.H. Givens, Yui Man Lui, Su Cheng, M.N. Teli, and Hao Zhang. Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on. Sept 2013. (PDF) (Online) (Abstract)

We investigate the existence of quality measures for face recognition. First, we introduce the concept of an oracle for image quality in the context of face recognition. Next we introduce greedy pruned ordering (GPO) as an approximation to an image quality oracle. GPO analysis provides an estimated upper bound for quality measures, given a face recognition algorithm and data set. We then assess the performance of 12 commonly proposed face image quality measures against this standard. In addition, we investigate the potential for learning new quality measures via supervised learning. Finally, we show that GPO analysis is applicable to other biometrics.

Introduction to Face Recognition and Evaluation of Algorithm Performance. G.H. Givens, J.R. Beveridge, P.J. Phillips, B. Draper, Y.M. Lui, and D. Bolme. Computational Statistics & Data Analysis. 2013. (Online) (Abstract)

The field of biometric face recognition blends methods from computer science, engineering and statistics; however, statistical reasoning has been applied predominantly in the design of recognition algorithms. A new opportunity for the application of statistical methods is driven by growing interest in biometric performance evaluation. Methods for performance evaluation seek to identify, compare and interpret how characteristics of subjects, the environment and images are associated with the performance of recognition algorithms. Some central topics in face recognition are reviewed for background and several examples of recognition algorithms are given. One approach to the evaluation problem is then illustrated with a generalized linear mixed model analysis of the Good, Bad, and Ugly Face Challenge, a pre-eminent face recognition dataset used to test state-of-the-art still-image face recognition algorithms. Findings include that (i) between-subject variation is the dominant source of verification heterogeneity when algorithm performance is good, and (ii) many covariate effects on verification performance are "universal" across easy, medium and hard verification tasks. Although the design and evaluation of face recognition algorithms draw upon some familiar statistical ideas in multivariate statistics, dimension reduction, classification, clustering, binary response data, generalized linear models and random effects, the field also presents some unique features and challenges. Opportunities abound for innovative statistical work in this new field.

The Challenge of Face Recognition from Digital Point-and-Shoot Cameras. J.R. Beveridge, P.J. Phillips, D.S. Bolme, B.A. Draper, G.H. Givens, Yui Man Lui, M.N. Teli, Hao Zhang, W.T. Scruggs, K.W. Bowyer, P.J. Flynn, and Su Cheng. Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on. Sept 2013. (PDF) (Online)

Biometric Face Recognition: from Classical Statistics to future Challenges. Geof H. Givens, J. Ross Beveridge, Yui Man Lui, David S. Bolme, Bruce A. Draper, and P. Jonathon Phillips. Wiley Interdisciplinary Reviews: Computational Statistics. 2013. (Online)

Are You using the Right Approximate Nearest Neighbor Algorithm?. S. O'Hara, and B.A Draper. Applications of Computer Vision (WACV), 2013 IEEE Workshop on. January 2013. (Online)

2012

Semi-Nonnegative Matrix Factorization for Motion Segmentation with Missing Data. Quanyi Mo, and Bruce A. Draper. 12th European Conference on Computer Vision (ECCV). 2012. (PDF) (Online)

Scalable action recognition with a subspace forest. S. O'Hara, and B.A Draper. 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2012. (PDF) (Online)

Preliminary studies on the Good, the Bad, and the Ugly face recognition challenge problem. Yui Man Lui, D. Bolme, P.J. Phillips, J.R. Beveridge, and B.A Draper. 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). June 2012. (PDF) (Online)

The Good, the Bad, and the Ugly Face Challenge Problem. P. Jonathon Phillips, J. Ross Beveridge, Bruce A. Draper, Geof Givens, Alice J. O'Toole, David Bolme, Joseph Dunlop, Yui Man Lui, Hassan Sahibzada, and Samuel Weimer. Image and Vision Computing. 2012. (PDF) (Online) (Abstract)

The Good, the Bad, and the Ugly Face Challenge Problem was created to encourage the development of algorithms that are robust to recognition across changes that occur in still frontal faces. The Good, the Bad, and the Ugly consists of three partitions. The Good partition contains pairs of images that are considered easy to recognize. The base verification rate (VR) is 0.98 at a false accept rate (FAR) of 0.001. The Bad partition contains pairs of images of average difficulty to recognize. For the Bad partition, the VR is 0.80 at a FAR of 0.001. The Ugly partition contains pairs of images considered difficult to recognize, with a VR of 0.15 at a FAR of 0.001. The base performance is from fusing the output of three of the top performers in the FRVT 2006. The design of the Good, the Bad, and the Ugly controls for pose variation, subject aging, and subject "recognizability." Subject recognizability is controlled by having the same number of images of each subject in every partition. This implies that the differences in performance among the partitions are a result of how a face is presented in each image.

Using a Product Manifold distance for unsupervised action recognition. Stephen O'Hara, Yui Man Lui, and Bruce A. Draper. Image and Vision Computing. 2012. (PDF) (Online) (Abstract)

This paper presents a method for unsupervised learning and recognition of human actions in video. Lacking any supervision, there is nothing except the inherent biases of a given representation to guide grouping of video clips along semantically meaningful partitions. Thus, in the first part of this paper, we compare two contemporary methods, Bag of Features (BOF) and Product Manifolds (PM), for clustering video clips of human facial expressions, hand gestures, and full-body actions, with the goal of better understanding how well these very different approaches to behavior recognition produce semantically relevant clustering of data. We show that PM yields superior results when measuring the alignment between the generated clusters and the nominal class labeling of the data set. We found that while gross motions were easily clustered by both methods, the lack of preservation of structural information inherent to the BOF representation leads to limitations that are not easily overcome without supervised training. This was evidenced by the poor separation of shape labels in the hand gestures data by BOF, and the overall poor performance on full-body actions. In the second part of this paper, we present an unsupervised mechanism for learning micro-actions in continuous video streams using the PM representation. Unlike other works, our method requires no prior knowledge of an expected number of labels/classes, requires no silhouette extraction, is tolerant to minor tracking errors and jitter, and can operate at near real-time speed. We show how to construct a set of training "tracklets," how to cluster them using the Product Manifold distance measure, and how to perform detection using exemplars learned from the clusters. Further, we show that the system is amenable to incremental learning as anomalous activities are detected in the video stream. We demonstrate performance using the publicly-available ETHZ Livingroom data set.

2011

When High-Quality Face Images Match Poorly. J.R. Beveridge, P.J. Phillips, G.H. Givens, B.A Draper, M.N. Teli, and D.S. Bolme. IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011). March 2011. (PDF) (Online) (Abstract)

In face recognition, quality is typically thought of as a property of individual images, not image pairs. The implicit assumption is that high-quality images should be easy to match to each other, while low quality images should be hard to match. This paper presents a relational graph-based evaluation technique that uses match scores produced by face recognition algorithms to determine the "quality" of images. The resulting analysis demonstrates that only a small fraction of the images in a well-studied data set (FRVT 2006) are low-quality images. It is much more common to find relationships in which two images that are hard to match to each other can be easily matched with other images of the same person. In other words, these images are simultaneously both high and low quality. The existence of such contrary images represents a fundamental challenge for approaches to biometric quality that cast quality as an intrinsic property of a single image. Instead it indicates that quality should be associated with pairs of images. In exploring these contrary images, we find a surprising dependence on whether elements of an image pair are acquired at the same location, even in circumstances where one would be tempted to think of the locations as interchangeable. The results presented have important implications for anyone designing face recognition evaluations as well as those developing new algorithms.

An Introduction to The Good, The Bad, and The Ugly Face Recognition Challenge Problem. P.J. Phillips, J.R. Beveridge, B.A Draper, G. Givens, AJ. O'Toole, D.S. Bolme, J. Dunlop, Yui Man Lui, H. Sahibzada, and S. Weimer. IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011). March 2011. (PDF) (Online) (Abstract)

The Good, the Bad, & the Ugly Face Challenge Problem was created to encourage the development of algorithms that are robust to recognition across changes that occur in still frontal faces. The Good, the Bad, & the Ugly consists of three partitions. The Good partition contains pairs of images that are considered easy to recognize. On the Good partition, the base verification rate (VR) is 0.98 at a false accept rate (FAR) of 0.001. The Bad partition contains pairs of images of average difficulty to recognize. For the Bad partition, the VR is 0.80 at a FAR of 0.001. The Ugly partition contains pairs of images considered difficult to recognize, with a VR of 0.15 at a FAR of 0.001. The base performance is from fusing the output of three of the top performers in the FRVT 2006. The design of the Good, the Bad, & the Ugly controls for pose variation, subject aging, and subject "recognizability." Subject recognizability is controlled by having the same number of images of each subject in every partition. This implies that the differences in performance among the partitions are a result of how a face is presented in each image.

Unsupervised Learning of Human Expressions, Gestures, and Actions. S. O'Hara, Yui Man Lui, and B.A Draper. IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011). March 2011. (PDF) (Online) (Abstract)

This paper analyzes completely unsupervised clustering of human expressions, gestures, and actions in video. Lacking any supervision, there is nothing except the inherent biases of a given technique to guide grouping of video clips along semantically meaningful partitions. This paper evaluates two contemporary behavior recognition methods, Bag of Features (BOF) and Product Manifolds (PM), for clustering video clips of human facial expressions, hand gestures, and full-body actions. Our goal is to better understand how well these very different approaches to behavior recognition produce semantically useful clustering of relevant data. We show that PM yields superior results when measuring the alignment between the generated clusters over a range of K-values (number of clusters) and the nominal class labelling of the data set. A key result is that unsupervised clustering with PM yields accuracy comparable to state-of-the-art supervised classification methods on KTH Actions. At the same time, BOF experiences a substantial drop in performance between unsupervised and supervised implementations on the same data sets, indicating a greater reliance on supervision for achieving high performance. We also found that while gross motions were easily clustered by both methods, the lack of preservation of structural information inherent to the BOF representation leads to limitations that are not easily overcome without supervised training. This was evidenced by the poor separation of shape labels in the hand gestures data by BOF, and the overall poor performance on full-body actions.

Unsupervised Learning of Micro-Action Exemplars using a Product Manifold. S. O'Hara, and B.A Draper. 2011 8th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS). August 2011. (PDF) (Online) (Abstract)

This paper presents a completely unsupervised mechanism for learning micro-actions in continuous video streams. Unlike other works, our method requires no prior knowledge of an expected number of labels (classes), requires no silhouette extraction, is tolerant to minor tracking errors and jitter, and can operate at near real-time speed. We show how to construct a set of training "tracklets," how to cluster them using a recently introduced Product Manifold distance measure, and how to perform detection using exemplars learned from the clusters. Further, we show that the system is amenable to incremental learning as anomalous activities are detected in the video stream. We demonstrate performance using the publicly-available ETHZ Livingroom data set.

Automatically Searching for Optimal Parameter Settings Using a Genetic Algorithm. David S. Bolme, J. Ross Beveridge, Bruce A. Draper, P. Jonathon Phillips, and Yui Man Lui. Computer Vision Systems. 2011. (PDF) (Online) (Abstract)

Modern vision systems are often a heterogeneous collection of image processing, machine learning, and pattern recognition techniques. One problem with these systems is finding their optimal parameter settings, since these systems often have many interacting parameters. This paper proposes the use of a Genetic Algorithm (GA) to automatically search parameter space. The technique is tested on a publicly available face recognition algorithm and dataset. In the work presented, the GA takes the role of a person configuring the algorithm by repeatedly observing performance on a tuning-subset of the final evaluation test data. In this context, the GA is shown to do a better job of configuring the algorithm than was achieved by the authors who originally constructed and released the LRPCA baseline. In addition, the data generated during the search is used to construct statistical models of the fitness landscape, which provide insight into the significance of, and relations among, algorithm parameters.
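The search loop this abstract describes can be illustrated with a generic GA over a bounded real parameter vector. This is a minimal sketch of the idea only; the operators, population sizes, and the LRPCA parameterization used in the paper are not reproduced here:

```python
import random

def ga_search(fitness, bounds, pop_size=20, gens=30, mut=0.1):
    """Generic GA sketch: maximize `fitness` over box-bounded parameters.

    fitness: callable taking a parameter list, higher is better
    bounds:  list of (lo, hi) per parameter
    """
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        # Elitist selection: keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            child = [min(max(c + random.gauss(0, mut * (hi - lo)), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]    # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the paper's setting, `fitness` would be recognition performance on the tuning subset; here it can be any scalar objective.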

Biometric Zoos: Theory and Experimental Evidence. M.N. Teli, J.R. Beveridge, P.J. Phillips, G.H. Givens, D.S. Bolme, and B.A. Draper. 2011 International Joint Conference on Biometrics (IJCB). Oct 2011. (PDF) (Online) (Abstract)

Several studies have shown the existence of biometric zoos. The premise is that in biometric systems people fall into distinct categories, labeled with animal names, indicating recognition difficulty. Different combinations of excessive false accepts or rejects correspond to labels such as: Goat, Lamb, Wolf, etc. Previous work on biometric zoos has investigated the existence of zoos for the results of an algorithm on a data set. This work investigates biometric zoos generalization across algorithms and data sets. For example, if a subject is a Goat for algorithm A on data set X, is that subject also a Goat for algorithm B on data set Y? This paper introduces a theoretical framework for generalizing biometric zoos. Based on our framework, we develop an experimental methodology for determining if biometric zoos generalize across algorithms and data sets, and we conduct a series of experiments to investigate the existence of zoos on two algorithms in FRVT 2006.

2010

FRVT 2006: Quo Vadis Face Quality. J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, Bruce A. Draper, David S. Bolme, and Yui Man Lui. Image and Vision Computing. May 2010. (PDF) (Online) (Abstract)

A study is presented showing how three state-of-the-art algorithms from the Face Recognition Vendor Test 2006 (FRVT 2006) are affected by factors related to face images and people. The recognition scenario compares highly controlled images to images taken of people as they stand before a camera in settings such as hallways and outdoors in front of buildings. A Generalized Linear Mixed Model (GLMM) is used to estimate the probability an algorithm successfully verifies a person conditioned upon the factors included in the study. The factors associated with people are: Gender, Race, Age and whether they wear Glasses. The factors associated with images are: the size of the face, edge density and region density. The setting, indoors versus outdoors, is also a factor. Edge density can change the estimated probability of verification dramatically, for example from about 0.15 to 0.85. However, this effect is not consistent across algorithm or setting. This finding shows that simple measurable factors are capable of characterizing face quality; however, these factors typically interact with both algorithm and setting.

Quantifying How Lighting and Focus Affect Face Recognition Performance. J.R. Beveridge, D.S. Bolme, B.A Draper, G.H. Givens, Yui Man Lui, and P.J. Phillips. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). June 2010. (PDF) (Online) (Abstract)

Recent studies show that face recognition in uncontrolled images remains a challenging problem, although the reasons why are less clear. Changes in illumination are one possible explanation, even though algorithms developed since the advent of the PIE and Yale B data bases supposedly compensate for illumination variation. Edge density has also been shown to be a strong predictor of algorithm failure on the FRVT 2006 uncontrolled images; recognition is harder on images with higher edge density. This paper presents a new study that explains the edge density effect in terms of illumination and shows that top performing algorithms in FRVT 2006 are still sensitive to lighting. This new study also shows that focus, originally suggested as an explanation for the edge density effect, is not a significant factor. The new lighting model developed in this study can be used as a measure of face image quality.

Adaptive Appearance Model and Condensation Algorithm for Robust Face Tracking. Yui Man Lui, J.R. Beveridge, and L.D. Whitley. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on. May 2010. (PDF) (Online) (Abstract)

We present an adaptive framework for condensation algorithms in the context of human-face tracking. We attack the face tracking problem by making factored sampling more efficient and appearance update more effective. An adaptive affine cascade factored sampling strategy is introduced to sample the parameter space such that coarse face locations are located first, followed by a fine factored sampling with a small number of particles. In addition, the local linearity of an appearance manifold is used in conjunction with a new criterion to select a tangent plane for updating an appearance in face tracking. Our proposed method seeks the best linear variety from the selected tangent plane to form a reference image. We demonstrate the effectiveness and efficiency of the proposed method on a number of challenging videos. These test video sequences show that our method is robust to illumination, appearance, and pose changes, as well as temporary occlusions. Quantitatively, our method achieves an average root-mean-square error of 4.98 on the well-known dudek video sequence while maintaining a speed of 8.74 fps. Finally, while our algorithm is adaptive during execution, no training is required.

Visual Object Tracking using Adaptive Correlation Filters. D.S. Bolme, J.R. Beveridge, B.A Draper, and Yui Man Lui. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2010. (PDF) (Online) (Abstract)

Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.
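The MOSSE filter described in this abstract has a closed-form frequency-domain expression that can be sketched in a few lines of numpy. This is my own simplification; a real tracker also needs the paper's preprocessing (log transform, cosine windowing), online updates, and the peak-to-sidelobe occlusion test:

```python
import numpy as np

def train_mosse(frames, targets, eps=1e-5):
    """Batch MOSSE training.

    frames:  list of equal-size 2-D arrays (training windows)
    targets: desired correlation outputs, one per frame (typically a
             Gaussian peaked on the object). Returns the filter H* in
             the Fourier domain, which minimizes the summed squared
             error between actual and desired correlation outputs.
    """
    A, B = 0.0, 0.0
    for f, g in zip(frames, targets):
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        A = A + G * np.conj(F)        # numerator:   sum_i G_i . F_i*
        B = B + F * np.conj(F)        # denominator: sum_i F_i . F_i*
    return A / (B + eps)              # eps regularizes empty frequencies

def respond(H, frame):
    # Correlate a new frame with the filter; the response peak
    # locates the target.
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * H))
```

Tracking then updates A and B with an exponential running average each frame, which is what lets the filter adapt as the target's appearance changes.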

2009

FaceL: Facile Face Labeling. D. S. Bolme, J. R. Beveridge, and B. A. Draper. International Conference on Computer Vision Systems. 2009. (PDF) (Abstract)

FaceL is a simple and fun face recognition system that labels faces in live video from an iSight camera or webcam. FaceL presents a window with a few controls and annotations displayed over the live video feed. The annotations indicate detected faces, positions of eyes, and after training, the names of enrolled people. Enrollment is video based, capturing many images per person. FaceL does a good job of distinguishing between a small set of people in fairly uncontrolled settings and incorporates a novel incremental training capability. The system is very responsive, running at over 10 frames per second on modern hardware. FaceL is open source and can be downloaded from http://pyvision.sourceforge.net/facel.

A Meta-Analysis of Face Recognition Covariates (includes supplemental material). Y. M. Lui, D. S. Bolme, B. A. Draper, J. R. Beveridge, G. H. Givens, and P. J. Phillips. Proceedings of IEEE Conference on Biometrics: Theory, Applications and Systems. September 2009. (PDF) (Abstract)

This paper presents a meta-analysis for covariates that affect performance of face recognition algorithms. Our review of the literature found six covariates for which multiple studies reported effects on face recognition performance. These are: age of the person, elapsed time between images, gender of the person, the person's expression, the resolution of the face images, and the race of the person. The results presented are drawn from 25 studies conducted over the past 12 years. There is near complete agreement between all of the studies that older people are easier to recognize than younger people, and recognition performance begins to degrade when images are taken more than a year apart. While individual studies find men or women easier to recognize, there is no consistent gender effect. There is universal agreement that changing expression hurts recognition performance. If forced to compare different expressions, there is still insufficient evidence to conclude that any particular expression is better than another. Higher resolution images improve performance for many modern algorithms. Finally, given the studies summarized here, no clear conclusions can be drawn about whether one racial group is harder or easier to recognize than another.

FRVT 2006: Quo Vadis Face Quality. J. R. Beveridge, G. H. Givens, P. J. Phillips, B. A. Draper, D. S. Bolme, and Y. M. Lui. IVCJ. 2009. (Abstract)

This paper summarizes a study of how three state-of-the-art algorithms from the Face Recognition Vendor Test 2006 (FRVT 2006) are affected by factors related to face images and the people being recognized. The recognition scenario compares highly controlled images to images taken of people as they stand before a camera in settings such as hallways and outdoors in front of buildings. A Generalized Linear Mixed Model (GLMM) is used to estimate the probability an algorithm successfully verifies a person conditioned upon the factors included in the study. The factors associated with people are: gender, race, age and whether they wear glasses. The factors associated with images are: the size of the face, edge density and region density. The setting, indoors versus outdoors, is also a factor. Edge density can change the estimated probability of verification dramatically, for example from about 0.15 to 0.85. However, this effect is not consistent across algorithm or setting. This finding shows that simple measurable factors are capable of characterizing face quality; however, these factors typically interact with both algorithm and setting.

Canonical Stiefel Quotient and its Application to Generic Face Recognition in Illumination Spaces. Yui Man Lui, J. Ross Beveridge, and Michael Kirby. International Conference on Biometrics : Theory, Applications, and Systems. 2009.

Overview of the Multiple Biometrics Grand Challenge. P. J. Phillips, P. J. Flynn, J. R. Beveridge, W. T. Scruggs, A. J. O'Toole, D. S. Bolme, K. W. Bowyer, B. A. Draper, G. H. Givens, Y. M. Lui, H. Sahibzada, J. A. Scallan III, and S. Weimer. IEEE International Conference on Biometrics. June 2009. (Abstract)

The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions. The MBGC is organized into three challenge problems. Each challenge problem relaxes the acquisition constraints in different directions. In the Portal Challenge Problem, the goal is to recognize people from near-infrared (NIR) and high definition (HD) video as they walk through a portal. Iris recognition can be performed from the NIR video and face recognition from the HD video. The availability of NIR and HD modalities allows for the development of fusion algorithms. The Still Face Challenge Problem has two primary goals. The first is to improve recognition performance from frontal and off angle still face images taken under uncontrolled indoor and outdoor lighting. The second is to improve recognition performance on still frontal face images that have been resized and compressed, as is required for electronic passports. In the Video Challenge Problem, the goal is to recognize people from video in unconstrained environments. The video is unconstrained in pose, illumination, and camera angle. All three challenge problems include a large data set, experiment descriptions, ground truth, and scoring code.

Average of Synthetic Exact Filters. D. S. Bolme, B. A. Draper, and J. R. Beveridge. Computer Vision and Pattern Recognition. June 2009. (PDF) (Online) (Abstract)

This paper introduces a class of correlation filters called Average of Synthetic Exact Filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as Synthetic Discriminant Functions (SDFs), which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters are presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV Cascade Classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.
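The averaging step that gives ASEF its name can be sketched as follows. This is a hypothetical simplification on my part: an exact filter is computed for each training pair (fully specifying the correlation output, unlike an SDF's single value), and the filters are averaged in the Fourier domain. The paper's eye-localization pipeline adds preprocessing and thousands of training images:

```python
import numpy as np

def train_asef(frames, targets, eps=1e-5):
    """ASEF sketch: average per-image exact filters.

    frames:  list of equal-size 2-D training images
    targets: desired correlation output for each image (e.g. a small
             Gaussian centered on the eye). Returns the averaged
             filter H* in the Fourier domain.
    """
    H = 0.0
    for f, g in zip(frames, targets):
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        # Exact filter for this pair: correlating f with it yields g.
        H = H + (G * np.conj(F)) / (F * np.conj(F) + eps)
    return H / len(frames)
```

Averaging many noisy exact filters is what suppresses over-fitting to any single training image.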

2008

Focus on Quality, Predicting FRVT 2006 Performance. J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, Bruce A. Draper, and Yui Man Lui. IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, The Netherlands. 2008.

A Novel Appearance Model and Adaptive Condensation Algorithm for Human Face Tracking. Yui Man Lui, J. Ross Beveridge, and Darrell Whitley. IEEE International Conference on Biometrics : Theory, Applications and Systems. 2008.

Image-Set Matching using a Geodesic Distance and Cohort Normalization. Yui Man Lui, J. Ross Beveridge, Bruce A. Draper, and Michael Kirby. IEEE International Conference on Automatic Face and Gesture Recognition. 2008.

Grassmann Registration Manifolds for Face Recognition. Y. M. Lui, and J. R. Beveridge. European Conference on Computer Vision. 2008.

2007

Set-to-set face recognition under variation of pose and illumination. Jen Mei Chang, Michael Kirby, and Chris Peterson. Proceedings of 2007 Biometrics Symposium at The Biometric Consortium Conference. September 2007. (PDF) (Abstract)

Poster: Face recognition under variations in illumination and pose has been recognized as a difficult problem, with pose appearing somewhat more challenging to handle than variations in illumination.

Performance Characterization in Computer Vision: A Guide to Best Practices. A. Clark, N. A. Thacker, J. L. Barron, J. R. Beveridge, P. Courtney, W. R. Crum, V. Ramesh, and C. Clark. Computer Vision and Image Understanding. June (online) 2007.

Face Detection Algorithm and Feature Performance on FRGC 2.0 Imagery. J. R. Beveridge, P. Flynn, A. Alvarez, J. Saraf, W. Fisher, and J. Gentile. September 2007.

Principal Angles Separate Subject Illumination Spaces in YDB and CMU-PIE. J. Ross Beveridge, Bruce A. Draper, Jen Mei Chang, Michael Kirby, Holger Kley, and Chris Peterson. IEEE Trans. on Pattern Analysis and Machine Intelligence. (under revision) 2007.

Preliminary Covariate Analysis Results for a Fusion of Three FRVT 2006 Algorithms. P. J. Phillips, J. R. Beveridge, G. H. Givens, B. A. Draper, and Y. M. Lui. Unpublished: Presentation at the NIST Biometric Quality Workshop, Gaithersburg, MD. November 2007.

Recognition of Digital Images of the Human Face at Ultra Low Resolution Via Illumination Spaces. J. M. Chang, M. Kirby, H. Kley, C. Peterson, B. A. Draper, and J. R. Beveridge. ACCV. 2007.

Evolution Strategies for Matching Active Appearance Models to Human Faces. Y. M. Lui, J. R. Beveridge, A. E. Howe, and D. Whitley. IEEE International Conference on Biometrics : Theory, Applications and Systems. 2007.

Person Identification Using Text and Image Data. D.S. Bolme, J. Ross Beveridge, and Adele E. Howe. Proceedings of the IEEE Conference on Biometrics: Theory, Applications and Systems. September 2007. (PDF) (Abstract)

This paper presents a bimodal identification system using text based term vectors and EBGM face recognition. Identification was tested on a database of 118 celebrities downloaded from the internet. The dataset contained multiple images and two biographies for each person. Text based identification had a 100% identification rate for the full biographies. When the text data was artificially restricted to six sentences per subject, rank one identification rates were similar to face recognition (approx. 22%). In this restricted case, combining text identification and face identification showed a significant improvement in the identification rate over either method alone.

Factors that Influence Algorithm Performance in the Face Recognition Grand Challenge. J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, and Bruce A. Draper. Computer Vision and Image Understanding. December 2007.

FacePerf: Face Recognition Performance Benchmarks. D.S. Bolme, Michelle M. Strout, and J. Ross Beveridge. IEEE International Symposium on Workload Characterization. September 2007. (PDF) (Online) (Abstract)

In this paper we present a collection of C and C++ biometric performance benchmark algorithms called FacePerf. The benchmark includes three different face recognition algorithms that are historically important to the face recognition community: Haar-based face detection, Principal Components Analysis, and Elastic Bunch Graph Matching. The algorithms are fast enough to be useful in real-time systems; however, improving performance would allow the algorithms to process more images or search larger face databases. Bottlenecks for each phase in the algorithms have been identified. A cosine approximation was able to reduce the execution time of the Elastic Bunch Graph Matching implementation by 32%.

Examples of set-to-set image classification. Jen Mei Chang, Michael Kirby, Holger Kley, J. Ross Beveridge, Bruce A. Draper, and Chris Peterson. Seventh International Conference on Mathematics in Signal Processing Conference Digest. December 2007. (PDF) (Abstract)

We present a framework for representing a set of images as a point on a Grassmann manifold. A collection of sets of images for a specific class is then associated with a collection of points on this manifold. Relationships between classes as defined by points associated with sets of images may be determined using the projection F-norm, geodesic, or other distances on the Grassmann manifold. We present several applications of this approach for image classification.
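The distances mentioned in the abstract reduce to principal angles between subspaces, which are straightforward to compute from orthonormal bases. A minimal NumPy sketch (the function names and the specific normalization conventions are our assumptions, not the authors'):

```python
import numpy as np

def subspace_basis(image_vectors):
    """Orthonormal basis for the span of a set of vectorized images
    (images as rows; columns of the result span the same subspace)."""
    q, _ = np.linalg.qr(np.asarray(image_vectors, dtype=float).T)
    return q

def principal_angles(A, B):
    """Principal angles between subspaces with orthonormal bases A, B."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def geodesic_distance(A, B):
    """Geodesic (arc-length) distance on the Grassmann manifold."""
    return np.linalg.norm(principal_angles(A, B))

def projection_fnorm(A, B):
    """Projection F-norm distance, ||sin(theta)||_2; some conventions
    include an extra sqrt(2) factor."""
    return np.linalg.norm(np.sin(principal_angles(A, B)))
```

Two sets of images are compared by building a basis for each with subspace_basis and then evaluating either distance on the pair of bases.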

2006

Illumination face spaces are idiosyncratic. Jen Mei Chang, J. Ross Beveridge, Bruce A. Draper, Michael Kirby, Holger Kley, and Chris Peterson. The International Conference on Image Processing, Computer Vision, and Pattern Recognition. 2006. (PDF) (Abstract)

Illumination spaces capture how the appearances of human faces vary under changing illumination. This work models illumination spaces as points on a Grassmann manifold and uses distance measures on this manifold to show that every person in the CMU-PIE and Yale data sets has a unique and identifying illumination space. This suggests that variations under changes in illumination can be exploited for their discriminatory information. As an example, when face recognition is cast as matching sets of face images to sets of face images, subjects in the CMU-PIE and Yale databases can be recognized with 100% accuracy.

A Comparison of Pixel, Edge and Wavelet Features for Face Detection using a Semi-Naive Bayesian Classifier. J. Ross Beveridge, Jilmil Saraf, and Ben Randall. 2006.

Linear and Generalized Linear Models for Analyzing Face Recognition Performance. J. Ross Beveridge, Geof H. Givens, Bruce A. Draper, and P. Jonathon Phillips. Pattern Recognition. (under revision) 2006.

2005

Repeated Measures GLMM Estimation of Subject-Related and False Positive Threshold Effects on Human Face Verification Performance. Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and P. Jonathon Phillips. Empirical Evaluation Methods in Computer Vision Workshop: In Conjunction with CVPR 2005. June 2005.

Introduction to the Statistical Evaluation of Face Recognition Algorithms. J. Ross Beveridge, Bruce A. Draper, Geof H. Givens, and Ward Fisher. Face Processing: Advanced Modeling and Methods. 2005.

The CSU Face Identification Evaluation System. J. Ross Beveridge, D.S. Bolme, Bruce A. Draper, and Marcio L. Teixeira. Machine Vision and Applications. 2005. (Online) (Abstract)

The CSU Face Identification Evaluation System includes standardized image preprocessing software, four distinct face recognition algorithms, analysis tools to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The four algorithms provided are principal components analysis (PCA), a.k.a. eigenfaces, a combined principal components analysis and linear discriminant analysis algorithm (PCA + LDA), an intrapersonal/extrapersonal image difference classifier (IIDC), and an elastic bunch graph matching (EBGM) algorithm. The PCA + LDA, IIDC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC, respectively. One analysis tool generates cumulative match curves; the other generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc., using Monte Carlo sampling to generate probe and gallery choices. The sample probability distributions at each rank allow standard error bars to be added to cumulative match curves. The tool also generates sample probability distributions for the paired difference of recognition rates for two algorithms. Whether one algorithm consistently outperforms another is easily tested using this distribution. The CSU Face Identification Evaluation System is available through our Web site and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
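The Monte Carlo resampling step the abstract describes can be sketched directly: repeatedly sample probe/gallery choices from multiple images per subject, score rank-1 recognition, and use the resulting sample distribution for error bars. The simplified illustration below uses a Euclidean nearest-neighbor matcher as a stand-in for the system's actual algorithms, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1_rate(gallery, probes):
    """Fraction of probes whose nearest gallery entry is the right subject.
    Row i of gallery and probes belong to the same subject."""
    correct = 0
    for i, p in enumerate(probes):
        d = np.linalg.norm(gallery - p, axis=1)
        correct += (np.argmin(d) == i)
    return correct / len(probes)

def monte_carlo_rank1(features, trials=200):
    """features: list of per-subject arrays of shape (n_images_i, dim).
    Each trial samples one gallery and one probe image per subject;
    returns the sample distribution of the rank-1 recognition rate."""
    rates = []
    for _ in range(trials):
        gallery, probes = [], []
        for imgs in features:
            g, p = rng.choice(len(imgs), size=2, replace=False)
            gallery.append(imgs[g])
            probes.append(imgs[p])
        rates.append(rank1_rate(np.array(gallery), np.array(probes)))
    return np.array(rates)
```

The mean and spread of the returned rates give the recognition-rate estimate and its error bars at rank 1; repeating per rank yields error bars along a cumulative match curve.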

2004

How Features of the Human Face Affect Recognition: a Statistical Comparison of Three Face Recognition Algorithms. Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, Patrick Grother, and P. Jonathon Phillips. Proceedings: IEEE Computer Vision and Pattern Recognition 2004. 2004.

Using a Generalized Linear Mixed Model to Study the Configuration Space of PCA+LDA Human Face Recognition Algorithm. Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and D.S. Bolme. Lecture Notes in Computer Science : Articulated Motion and Deformable Objects. 2004. (PDF) (Online) (Abstract)

A generalized linear mixed model is used to estimate how rank 1 recognition of human faces with a PCA+LDA algorithm is affected by the choice of distance metric, image size, PCA space dimensionality, supplemental training and inclusion of subjects in the training. Random effects for replicated training sets and for repeated measures on people were included in the model. Results indicate between people variation was a dominant source of variability, and that there was moderate correlation within people. Statistically significant effects and interactions were found for all configuration factors except image size. Changes to the PCA+LDA configuration only improved recognition for subjects who had images included in the training data. For subjects not included in training, no configuration changes were helpful. This study is instructive for what it reveals about PCA+LDA. It is also a model for how to conduct such studies. For example, by accounting for subject variation as a random effect and explicitly looking for interaction effects, we are able to discern effects that might otherwise have been masked by subject variation and interaction effects.

2003

A Statistical Assessment of Subject Factors in the PCA Recognition of Human Faces. Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and D.S. Bolme. Computer Vision and Pattern Recognition. 2003. (PDF) (Online) (Abstract)

Some people's faces are easier to recognize than others, but it is not obvious what subject-specific factors make individual faces easy or difficult to recognize. This study considers 11 factors that might make recognition easy or difficult for 1,072 human subjects in the FERET dataset. The specific factors are: race (white, Asian, African-American, or other), gender, age (young or old), glasses (present or absent), facial hair (present or absent), bangs (present or absent), mouth (closed or other), eyes (open or other), complexion (clear or other), makeup (present or absent), and expression (neutral or other). An ANOVA is used to determine the relationship between these subject covariates and the distance between pairs of images of the same subject in a standard Eigenfaces subspace. Some results are not terribly surprising. For example, the distance between pairs of images of the same subject increases for people who change their appearance, e.g., open and close their eyes, open and close their mouth or change expression. Thus changing appearance makes recognition harder. Other findings are surprising. Distance between pairs of images for subjects decreases for people who consistently wear glasses, so wearing glasses makes subjects more recognizable. Pairwise distance also decreases for people who are either Asian or African-American rather than white. A possible shortcoming of our analysis is that minority classifications such as African-Americans and wearers-of-glasses are underrepresented in training. Followup experiments with balanced training address this concern and corroborate the original findings. Another possible shortcoming of this analysis is the novel use of pairwise distance between images of a single person as the predictor of recognition difficulty. A separate experiment confirms that larger distances between pairs of subject images imply a larger recognition rank for that same pair of images, thus confirming that the subject is harder to recognize.

Accelerated Image Processing on FPGAs. Bruce A. Draper, J. Ross Beveridge, Wim Bohm, Charlie Ross, and Monica Chawathe. IEEE Transactions on Image Processing. December 2003.

The CSU Face Identification Evaluation System: Its Purpose, Features and Structure. D.S. Bolme, J. Ross Beveridge, Marcio L. Teixeira, and Bruce A. Draper. Proc. 3rd International Conf. on Computer Vision Systems. April 2003. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The preprocessing code replicates the pre-processing used in the FERET evaluations. The three algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT respectively. There are two analysis tools. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve of recognition rate versus recognition rank. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

Elastic Bunch Graph Matching. D.S. Bolme. Master's Thesis: Colorado State University. May 2003. (PDF) (Abstract)

Elastic Bunch Graph Matching is a face recognition algorithm that is distributed with CSU's Evaluation of Face Recognition Algorithms System. The algorithm is modeled after the Bochum/USC face recognition algorithm used in the FERET evaluation. The algorithm recognizes novel faces by first localizing a set of landmark features and then measuring similarity between these features. Both localization and comparison use Gabor jets extracted at landmark positions. In localization, jets are extracted from novel images and matched to jets extracted from a set of training/model jets. Similarity between novel images is expressed as a function of similarity between localized Gabor jets corresponding to facial landmarks. A study of how accurately a landmark is localized using different displacement estimation methods is presented. The overall performance of the algorithm subject to changes in the number of training/model images, choice of specific wavelet encoding, displacement estimation technique and Gabor jet similarity measure is explored in a series of independent tests. Several findings were particularly striking, including results suggesting that landmark localization is less reliable than might be expected. However, it is also striking that this did not appear to greatly degrade recognition performance.
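A Gabor jet, as used above, is just the vector of complex responses of a small family of Gabor wavelets at one landmark, and a common jet similarity is the normalized dot product of the response magnitudes. The sketch below is a simplified illustration under our own choices of kernel family and parameters; it is not the thesis's implementation.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """A single complex Gabor wavelet centered in a size x size patch."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def extract_jet(image, r, c, wavelengths=(4, 8, 16), n_orient=4, size=17):
    """Jet = complex responses of a small Gabor family at landmark (r, c)."""
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    jet = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(size, lam, k * np.pi / n_orient, sigma=lam / 2)
            jet.append(np.sum(patch * kern))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Magnitude-based jet similarity: cosine of the magnitude vectors."""
    a, b = np.abs(j1), np.abs(j2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the similarity uses magnitudes only, it is insensitive to the wavelets' phase; phase-sensitive variants are what drive the displacement estimation the abstract mentions.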

The CSU Face Identification Evaluation System User's Guide: Version 5.0. J. Ross Beveridge, D.S. Bolme, Marcio L. Teixeira, and Bruce A. Draper. Unpublished: Computer Science Department Colorado State University. May 2003. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. This document describes Version 5.0 of the Colorado State University (CSU) Face Identification Evaluation System. The system includes standardized image pre-processing software, four distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The pre-processing code replicates the preprocessing used in the FERET evaluations. The four algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), a Bayesian Intrapersonal/Extrapersonal Classifier (BIC), and an Elastic Bunch Graph Matching (EBGM) algorithm. The PCA+LDA, BIC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC respectively. Two different analysis programs are included in the evaluation system. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the four algorithms. It generates a Cumulative Match Curve that plots recognition rate as a function of recognition rank. These plots are common in evaluations such as the FERET evaluation and the Face Recognition Vendor Tests. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. It will also generate a sample probability distribution for the paired difference between recognition rates for two algorithms, providing an excellent basis for testing if one algorithm consistently out-performs another. The CSU Face Identification Evaluation System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

Recognizing Faces with PCA and ICA. Bruce A. Draper, Kyungim Baek, M.S. Bartlett, and J. Ross Beveridge. Computer Vision and Image Understanding. July 2003.

2002

PCA vs. ICA: A Comparison on the FERET Data Set. Kyungim Baek, Bruce A. Draper, J. Ross Beveridge, and Kai She. Joint Conference on Information Sciences. March 2002.

The CSU Face Identification Evaluation System User's Guide: Version 4.0. D.S. Bolme, Marcio L. Teixeira, J. Ross Beveridge, and Bruce A. Draper. Unpublished: Computer Science Department Colorado State University. October 2002. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. This document describes Version 4.0 of the Colorado State University (CSU) Face Identification Evaluation System. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The preprocessing code replicates the preprocessing used in the FERET evaluations. The three algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT respectively. Two different analysis programs are included in the evaluation system. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve that plots recognition rate as a function of recognition rank. These plots are common in evaluations such as the FERET evaluation and the Face Recognition Vendor Tests. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The CSU Face Identification Evaluation System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures. Wendy S. Yambor, Bruce A. Draper, and J. Ross Beveridge. Empirical Evaluation Methods in Computer Vision. 2002.

Interpreting LOC Cell Responses. D.S. Bolme, and Bruce A. Draper. BMCV '02: Proceedings of the Second International Workshop on Biologically Motivated Computer Vision. 2002. (PDF) (Abstract)

Kourtzi and Kanwisher identify regions in the lateral occipital cortex (LOC) with cells that respond to object type, regardless of whether the data is presented as a gray-scale image or a line drawing. They conclude from this data that these regions process or represent structural shape information. This paper suggests a slightly less restrictive explanation: they have identified regions in the LOC that are computationally downstream from complex cells in area V1.

2001

Parametric and Nonparametric Methods for the Statistical Evaluation of Human ID Algorithms. J. Ross Beveridge, Kai She, Bruce A. Draper, and Geof H. Givens. December 2001.

A Nonparametric Statistical Comparison of Principal Component and Linear Discriminant Subspaces for Face Recognition. J. Ross Beveridge, Kai She, Bruce A. Draper, and Geof H. Givens. December 2001. (PDF) (Abstract)

The FERET evaluation compared recognition rates for different semi-automated and automated face recognition algorithms. We extend FERET by considering when differences in recognition rates are statistically distinguishable subject to changes in test imagery. Nearest Neighbor classifiers using principal component and linear discriminant subspaces are compared using different choices of distance metric. Probability distributions for algorithm recognition rates and pairwise differences in recognition rates are determined using a permutation methodology. The principal component subspace with Mahalanobis distance is the best combination; using L2 is second best. Choice of distance measure for the linear discriminant subspace matters little, and performance is always worse than the principal components classifier using either Mahalanobis or L1 distance. We make the source code for the algorithms, scoring procedures and Monte Carlo study available in the hopes others will extend this comparison to newer algorithms.

The Geometry of LDA and PCA Classifiers Illustrated with 3D Examples. J. Ross Beveridge. 2001.

2000

Localized Scene Interpretation from 3D Models, Range, and Optical Data. M. R. Stevens, and J. R. Beveridge. Image Understanding. 2000.

Matching Horizon Features Using a Messy Genetic Algorithm. J. Ross Beveridge, Karthik Balasubramaniam, and Darrell Whitley. Computer Methods in Applied Mechanics and Engineering. 2000.

Integrating Graphics and Vision for Object Recognition. Mark R. Stevens, and J. Ross Beveridge. 2000.

Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures. Wendy S. Yambor, Bruce A. Draper, and J. Ross Beveridge. July 2000. (PDF) (Abstract)

This study examines the role of Eigenvector selection and Eigenspace distance measures on PCA-based face recognition systems. In particular, it builds on earlier results from the FERET face recognition evaluation studies, which created a large face database (1,196 subjects) and a baseline face recognition system for comparative evaluations. This study looks at using combinations of traditional distance measures (City-block, Euclidean, Angle, Mahalanobis) in Eigenspace to improve performance in the matching stage of face recognition. A statistically significant improvement is observed for the Mahalanobis distance alone when compared to the other three alone. However, no combinations of these measures appear to perform better than Mahalanobis alone. This study also examines questions of how many Eigenvectors to select and according to what ordering criterion. It compares variations in performance due to different distance measures and numbers of Eigenvectors. Ordering Eigenvectors according to a like-image difference value rather than their Eigenvalues is also considered.
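The four distance measures named above all operate on PCA coefficient vectors and are easy to state concretely. A sketch follows; note that conventions for the "Mahalanobis" variant differ across the FERET-era literature, and the inverse-standard-deviation weighting used here is one common reading, not necessarily the exact definition in this study.

```python
import numpy as np

def eigenspace_distances(u, v, eigenvalues):
    """Classic distance measures between PCA coefficient vectors u and v.
    eigenvalues are the PCA eigenvalues of the retained axes.
    Angle-style measures are negated so that smaller always means closer."""
    l1 = np.sum(np.abs(u - v))                                   # City-block
    l2 = np.linalg.norm(u - v)                                   # Euclidean
    ang = -np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))  # Angle
    # Assumed Mahalanobis variant: whiten each axis by 1/sqrt(eigenvalue),
    # then take the angle measure in the whitened space.
    w = 1.0 / np.sqrt(eigenvalues)
    uw, vw = u * w, v * w
    mah = -np.dot(uw, vw) / (np.linalg.norm(uw) * np.linalg.norm(vw))
    return {"cityblock": l1, "euclidean": l2,
            "angle": ang, "mahalanobis": mah}
```

In a matching stage, a probe is assigned the identity of the gallery vector minimizing whichever of these measures the configuration selects.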

1999

LiME: An Environment for 2D Line Segment Matching. J. Ross Beveridge. Workshop on Performance Characterisation and Benchmarking of Vision Systems. January 1999.

1998

The Cameron Project: High-Level Programming of Image Processing Applications on Reconfigurable Computing Machines. W. Najjar, Bruce A. Draper, Wim Bohm, and J. Ross Beveridge. Proceedings of the Workshop on Reconfigurable Computing. October 1998.

Multisensor Occlusion Reasoning. Mark R. Stevens, and J. Ross Beveridge. Proceedings of the 14th International Conference on Pattern Recognition. August 1998.

Automated Velocity Picking: A Computer Vision and Optimization Approach. Darrell Whitley, J. Ross Beveridge, and Charlie Ross. 1998. (PDF)

Optimal 2D Model Matching Using a Messy Genetic Algorithm. J. Ross Beveridge. Proceedings of AAAI-98, Madison. August 1998.

1997

How Easy is Matching 2D Line Models Using Local Search? J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. T-PAMI. June 1997. (PDF)

A Tutorial on a Sliding Window Target Detection Algorithm Implemented in the DARPA Image Understanding Environment. J. Ross Beveridge, and Jim Steinborn. 1997.

Messy Genetic Algorithms for Subset Feature Selection. Darrell Whitley, J. Ross Beveridge, Christopher R. Graves, and C. Guerra-Salcedo. Proc. 1997 International Conference on Genetic Algorithms. July 1997. (PDF)

Efficient Indexing for Object Recognition Using Large Networks. Mark R. Stevens, Charles W. Anderson, and J. Ross Beveridge. Proc. 1997 IEEE International Conference on Neural Networks. June 1997. (PDF)

LiME Users Guide. J. Ross Beveridge. November 1997. (PDF)

Using Multisensor Occlusion Reasoning in Multisensor Object Recognition. Mark R. Stevens, and J. Ross Beveridge. Proc. 1997 IEEE International Conference on Computer Vision and Pattern Recognition. June 1997. (PDF)

Precise Matching of 3-D Target Models to Multisensor Data. Mark R. Stevens, and J. Ross Beveridge. IEEE Transactions on Image Processing. January 1997. (PDF) (Online)

Comparing Random-Starts Local Search with Key-Feature matching. J. Ross Beveridge, Christopher R. Graves, and Jim Steinborn. Proc. 1997 International Joint Conference on Artificial Intelligence. August 1997. (PDF)

A Coregistration Approach to Multisensor Target Recognition with Extensions to Exploit Digital Elevation Map Data. J. Ross Beveridge, Bruce A. Draper, Mark R. Stevens, Kris Siejko, and Allen R. Hanson. Reconnaissance, Surveillance, and Target Acquisition for the Unmanned Ground Vehicle. 1997. (PDF)

Landmark-Based Navigation and the Acquisition of Environmental Models. Edward M. Riseman, Allen R. Hanson, J. Ross Beveridge, Rakesh Kumar, and Harpreet Sawhney. Visual Navigation: From Biological Systems to Unmanned Ground Vehicles. 1997.

Visualizing Multisensor Model-Based Object Recognition. Mark R. Stevens, J. Ross Beveridge, and Michael E. Goss. Machine Graphics \& Vision. 1997. (PDF)

1996

Simultaneous Refinement of Pose and Sensor Registration. Anthony N. A. Schwickerath. Master's Thesis: Colorado State University. October 1996. (PDF)

CAD-based Target Identification in Range, IR and Color Imagery Using On-Line Rendering and Feature Prediction. J. Ross Beveridge, and Mark R. Stevens. 1996. (PDF) (Abstract)

Results for a multisensor CAD-based object recognition system are presented in the context of Automatic Target Recognition using nearly boresight-aligned range, IR, and color sensors. The system is shown to identify targets in a test suite of image triples. This suite includes targets at low resolution, at unusual aspect angles, and partially obscured by terrain. The key concept presented in this work is the use of on-line rendering of CAD models to support an iterative predict, match, and refine cycle. This cycle optimizes the match subject to variability both in object pose and in sensor registration. An occlusion reasoning component further illustrates the power of this approach by customizing the predicted features to fit specific scene geometry. Occlusion reasoning detects occlusion in the range data and adjusts the features predicted to be visible accordingly.

Some Lessons Learned from Coding the Burns Line Extraction Algorithm in the DARPA Image Understanding Environment. J. Ross Beveridge, Christopher R. Graves, and Christopher E. Lesher. October 1996. (PDF)

Interleaving 3D Model Feature Prediction and Matching to Support Multi-Sensor Object Recognition. Mark R. Stevens, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. February 1996.

Approximate Image Mappings Between Nearly Boresight Aligned Optical and Range Sensors. J. Ross Beveridge, Mark R. Stevens, Z. Zhang, and Michael E. Goss. April 1996. (PDF)

Coregistration of Range and Optical Images Using Coplanarity and Orientation Constraints. Anthony N. A. Schwickerath, and J. Ross Beveridge. 1996 Conference on Computer Vision and Pattern Recognition. June 1996. (PDF)

Interleaving 3D Model Feature Prediction and Matching to Support Multi-Sensor Object Recognition. Mark R. Stevens, and J. Ross Beveridge. International Conference on Pattern Recognition. August 1996. (PDF)

Solving Diverse Image Understanding Problems Using the Image Understanding Environment. John Dolan, Charlie Kohl, Richard Lerner, Joseph Mundy, Terrance Boult, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. February 1996.

Local Search as a Tool for Horizon Line Matching. J. Ross Beveridge, Christopher R. Graves, and Christopher E. Lesher. Proceedings: Image Understanding Workshop. February 1996. (PDF)

Test Driving Three 1995 Genetic Algorithms: New Test Functions and Geometric Matching. Darrell Whitley, J. Ross Beveridge, Christopher R. Graves, and K. Mathias. Journal of Heuristics. 1996.

Progress on Target and Terrain Recognition Research at Colorado State University. J. Ross Beveridge, Bruce A. Draper, and Kris Siejko. Proceedings: Image Understanding Workshop. February 1996. (PDF)

How Easy is Matching 2D Line Models Using Local Search? J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. May 1996. (PDF)

Optical Linear Feature Detection Based on Model Pose. Mark R. Stevens, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. February 1996. (PDF)

Coregistering 3D Models, Range, and Optical Imagery Using Least-Median Squares Fitting. Anthony N. A. Schwickerath, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. February 1996. (PDF)

1995

Optimal Geometric Model Matching Under Full 3D Perspective. J. Ross Beveridge, and Edward M. Riseman. Computer Vision and Image Understanding. 1995. (PDF) (Abstract)

Model-based object recognition systems have rarely dealt directly with 3D perspective while matching models to images. The algorithms presented here use 3D pose recovery during matching to explicitly and quantitatively account for changes in model appearance associated with 3D perspective. These algorithms use random-start local search to find, with high probability, the globally optimal correspondence between model and image features in spaces containing a very large number of possible matches. Three specific algorithms are compared on robot landmark recognition problems. A full-perspective algorithm uses the 3D pose algorithm in all stages of search, while two hybrid algorithms use a computationally less demanding weak-perspective procedure to rank alternative matches and update 3D pose only when moving to a new match. These hybrids successfully solve problems involving perspective, and in less time than required by the full-perspective algorithm.

Three-dimensional visualization environment for Multisensor data analysis, interpretation, and model-based object recognition. Michael E. Goss, J. Ross Beveridge, Mark R. Stevens, and Aaron Fuegi. SPIE Symposium on Electronic Imaging: Science and Technology. February 1995.

Reduction of BRL/CAD Models and Their Use in Automatic Target Recognition Algorithms. Mark R. Stevens, J. Ross Beveridge, and Michael E. Goss. Proceedings: BRL-CAD Symposium. June 1995. (PDF) (Abstract)

We are currently developing an Automatic Target Recognition (ATR) algorithm to locate an object using multisensor data. The ATR algorithm will determine corresponding points between a range (LADAR) image, a color (CCD) image, a thermal (FLIR) image and a BRL/CAD model of the object being located. The success of this process depends in part on which features can be automatically extracted from the model database. The BRL/CAD models we have for this process contain more detail than can be productively used by our ATR algorithm and must be reduced to a more appropriate form.

Model-based Fusion of FLIR, Color and LADAR. J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. Proceedings of the Sensor Fusion and Networked Robotics VIII Conference. October 1995.

Obtaining 3D Silhouettes and Sampled Surfaces from Solid Models for use in Computer Vision. Mark R. Stevens. Master's Thesis: Colorado State University. 1995. (PDF)

Demonstrating Polynomial Run-Time Growth for Local Search Matching. J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. Proceedings: International Symposium on Computer Vision. November 1995.

1994

Visualization and Verification of Automatic Target Recognition Results Using Combined Range and Optical Imagery. Michael E. Goss, J. Ross Beveridge, Mark R. Stevens, and Aaron Fuegi. Proceedings: Image Understanding Workshop. November 1994. (PDF) (Abstract)

The Rangeview software system presented here addresses two significant issues in the development and deployment of an Automatic Target Recognizer: visualization of the progress of the recognizer in finding a target, and verification by an operator of the correctness of the match. The system combines range imagery from a LADAR device with optical imagery from a color CCD camera and/or FLIR sensor to display a three-dimensional representation of the scene and the target model. Range imagery creates a partial three-dimensional representation of the scene. Optical imagery is mapped onto this partial three-dimensional representation. Output from the ATR is registered in three dimensions with the scene. Recognized targets are displayed in correct spatial relation to the scene, and the registered scene and target may be visually inspected from any viewpoint.

Reply to: Performance Characterization in Computer Vision by Robert M. Haralick. Bruce A. Draper, and J. Ross Beveridge. CVGIP: Image Understanding. September 1994.

RSTA Research of the Colorado State, University of Massachusetts and Alliant Techsystems Team. J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. Image Understanding Workshop (separate addendum). November 1994. (PDF)

Optimal Geometric Model Matching Under Full 3D Perspective. J. Ross Beveridge, and Edward M. Riseman. Second CAD-Based Vision Workshop. February 1994. (PDF)

Object to Multisensor Coregistration with Eight Degrees of Freedom. Anthony N. A. Schwickerath, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. November 1994. (PDF)

November 1993 Fort Carson RSTA Data Collection Final Report. J. Ross Beveridge, Durga P. Panda, and Theodore Yachik. January 1994. (PDF)

Non-parametric Classification of Pixels Under Varying Outdoor Illumination. Shashi Buluswar, Bruce A. Draper, Allen Hanson, and Edward Riseman. Proceedings: Image Understanding Workshop. November 1994. (PDF)

Integrated Color CCD, FLIR and LADAR Based Object Modeling and Recognition. J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. April 1994.

1993

Matching Perspective Views of Coplanar Structures Using Projective Unwarping and Similarity Matching. Robert T. Collins, and J. Ross Beveridge. Proceedings: 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. June 1993.

Local Search Algorithms for Geometric Object Recognition: Optimal Correspondence and Pose. J. Ross Beveridge. May 1993. (PDF) (Abstract)

Recognizing an object by its shape is a fundamental problem in computer vision, and typically involves finding a discrete correspondence between object model and image features as well as the pose (position and orientation) of the camera relative to the object. This thesis presents new algorithms for finding the optimal correspondence and pose of a rigid 3D object. They utilize new techniques for evaluating geometric matches and for searching the combinatorial space of possible matches. An efficient closed-form technique for computing pose under weak perspective (four-parameter 2D affine) is presented, and an iterative non-linear 3D pose algorithm is used to support matching under full 3D perspective.

1992

Can Too Much Perspective Spoil the View? A Case Study in 2D Affine Versus 3D Perspective Model Matching. J. Ross Beveridge, and Edward M. Riseman. Proceedings: Image Understanding Workshop. January 1992.

Hybrid Weak-Perspective and Full-Perspective Matching. J. Ross Beveridge, and Edward M. Riseman. Proceedings: IEEE 1992 Computer Society Conference on Computer Vision and Pattern Recognition. June 1992. (PDF) (Abstract)

Full perspective mappings between 3D objects and 2D images are more complicated than weak perspective mappings, which consider only rotation, translation and scaling. Therefore, in 3D model-based robot navigation, it is important to understand how and when full perspective must be taken into account. In this paper we use a probabilistic combinatorial optimization algorithm to search for an optimal match between 3D landmark and 2D image features. Three variations are considered: a weak perspective algorithm rotates, translates and scales an initial 2D projection of the 3D landmark. A full perspective algorithm always recomputes the robot's pose and reprojects the landmark when testing alternative matches. Finally, a hybrid algorithm uses weak perspective to select a most promising alternative, but then updates the pose and reprojects the landmark. The hybrid algorithm appears to combine the best attributes of the other two. Like the full perspective algorithm, it reliably recovers the true pose of the robot, and like the weak perspective algorithm it runs faster than the full perspective algorithm.
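Why is ranking matches under weak perspective so much cheaper than under full perspective? Because a 2D similarity transform (rotation, translation, scale) has a closed-form least-squares solution, whereas full-perspective pose recovery is an iterative non-linear problem. A minimal sketch of such a closed-form fit, assuming a complex-number formulation of my own choosing (not the papers' implementation):

```python
def fit_weak_perspective(model_pts, image_pts):
    """Closed-form 2D similarity (rotation + translation + scale) fit.

    Treats each 2D point (x, y) as a complex number z = x + iy, so the
    similarity transform is z -> a*z + b with a = scale * exp(i*angle).
    After centering both point sets, the least-squares solution for a
    is a single division -- no iteration required.
    """
    p = [complex(x, y) for x, y in model_pts]
    q = [complex(x, y) for x, y in image_pts]
    pm = sum(p) / len(p)          # centroids
    qm = sum(q) / len(q)
    pc = [z - pm for z in p]      # centered points
    qc = [z - qm for z in q]
    a = (sum(w * z.conjugate() for w, z in zip(qc, pc))
         / sum(abs(z) ** 2 for z in pc))
    b = qm - a * pm
    residual = sum(abs(a * z + b - w) ** 2 for z, w in zip(p, q))
    return a, b, residual

# Recover a scale-2, 90-degree rotation with translation (3, 4):
a, b, r = fit_weak_perspective([(0, 0), (1, 0), (0, 1)],
                               [(3, 4), (3, 6), (1, 4)])
```

The residual from such a fit can serve as the cheap ranking score; only the most promising candidate then pays for the expensive full-perspective pose update and reprojection.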

A Maximum Likelihood View of Point and Line Segment Match Evaluation. J. Ross Beveridge. Unpublished draft. 1992.

Comparing Subset-convergent and Variable-depth Local Search on Perspective Sensitive Landmark Recognition Problems. J. Ross Beveridge. Proceedings: SPIE Intelligent Robots and Computer Vision XI: Algorithms, Techniques, and Active Vision. November 1992.

1991

Issues Central to a Useful Image Understanding Environment. J. Ross Beveridge, Bruce A. Draper, Allen R. Hanson, and Edward M. Riseman. The 20th AIPR Workshop. October 1991.

Optimization of 2-Dimensional Model Matching. J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Selected Papers on Automatic Object Recognition (originally appeared in DARPA Image Understanding Workshop, 1989). 1991.

1990

Model-Directed Mobile Robot Navigation. Claude Fennema, Allen R. Hanson, Edward M. Riseman, J. Ross Beveridge, and Rakesh Kumar. IEEE Transactions on Systems, Man, and Cybernetics. November/December 1990.

Combinatorial Optimization Applied to Variable Scale 2D Model Matching. J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Proceedings of the IEEE International Conference on Pattern Recognition. June 1990.

1989

The ISR: a Database for Symbolic Processing in Computer Vision. J. Brolio, Bruce A. Draper, J. Ross Beveridge, and Allen R. Hanson. Computer. December 1989.

Segmenting Images Using Localized Histograms and Region Merging. J. Ross Beveridge, Joey Griffith, Ralf R. Kohler, Allen R. Hanson, and Edward M. Riseman. International Journal of Computer Vision. January 1989.

Optimization of 2-Dimensional Model Matching. J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Proceedings: Image Understanding Workshop. June 1989.

1987

Segmenting Images Using Localized Histograms and Region Merging. J. Ross Beveridge, Joey Griffith, Ralf R. Kohler, Allen R. Hanson, and Edward M. Riseman. October 1987.

Searching for Geometric Structure in Images of Natural Scenes. George Reynolds, and J. Ross Beveridge. Proceedings: Image Understanding Workshop. February 1987.