Computer Vision Group

Biomimetic

2002

D.S. Bolme, and Bruce A. Draper. Interpreting LOC Cell Responses. BMCV '02: Proceedings of the Second International Workshop on Biologically Motivated Computer Vision. 2002. (PDF) (Abstract)

Kourtzi and Kanwisher identify regions in the lateral occipital cortex (LOC) with cells that respond to object type, regardless of whether the data is presented as a gray-scale image or a line drawing. They conclude from this data that these regions process or represent structural shape information. This paper suggests a slightly less restrictive explanation: they have identified regions in the LOC that are computationally downstream from complex cells in area V1.

Evaluation; Face Recognition

2009

Y. M. Lui, D. S. Bolme, B. A. Draper, J. R. Beveridge, G. H. Givens, and P. J. Phillips. A Meta-Analysis of Face Recognition Covariates (includes supplemental material). Proceedings of IEEE Conference on Biometrics: Theory, Applications and Systems. September 2009. (PDF) (Abstract)

This paper presents a meta-analysis for covariates that affect performance of face recognition algorithms. Our review of the literature found six covariates for which multiple studies reported effects on face recognition performance. These are: age of the person, elapsed time between images, gender of the person, the person's expression, the resolution of the face images, and the race of the person. The results presented are drawn from 25 studies conducted over the past 12 years. There is near complete agreement between all of the studies that older people are easier to recognize than younger people, and recognition performance begins to degrade when images are taken more than a year apart. While individual studies find men or women easier to recognize, there is no consistent gender effect. There is universal agreement that changing expression hurts recognition performance. If forced to compare different expressions, there is still insufficient evidence to conclude that any particular expression is better than another. Higher resolution images improve performance for many modern algorithms. Finally, given the studies summarized here, no clear conclusions can be drawn about whether one racial group is harder or easier to recognize than another.

Face Recognition

2010

Yui Man Lui, J. Ross Beveridge, and L. Darrell Whitley. Adaptive Appearance Model and Condensation Algorithm for Robust Face Tracking. IEEE Trans. on Systems, Man, and Cybernetics - Part A: Systems and Humans (accepted). 2010. (Online)

2009

D. S. Bolme, B. A. Draper, and J. R. Beveridge. Average of Synthetic Exact Filters. Computer Vision and Pattern Recognition. June 2009. (PDF) (Online) (Abstract)

This paper introduces a class of correlation filters called Average of Synthetic Exact Filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as Synthetic Discriminant Functions (SDFs), which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters are presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV Cascade Classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.
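
A minimal NumPy sketch of the ASEF training idea described above: each training image gets an exact filter whose correlation output is a specified Gaussian peak, and those filters are averaged in the Fourier domain. The preprocessing used in the paper (for example log transform and cosine windowing) is omitted, and the images and eye coordinates below are synthetic placeholders.

import numpy as np

def gaussian_peak(shape, center, sigma=2.0):
    # Desired correlation output: a small Gaussian centered on the target point.
    ys, xs = np.indices(shape)
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def train_asef(images, eye_coords, sigma=2.0, eps=1e-5):
    # Average the per-image exact filters (conjugate form) in the Fourier domain.
    h_sum = np.zeros(images[0].shape, dtype=complex)
    for img, center in zip(images, eye_coords):
        F = np.fft.fft2(img)
        G = np.fft.fft2(gaussian_peak(img.shape, center, sigma))
        h_sum += (G * np.conj(F)) / (F * np.conj(F) + eps)   # exact filter for this image
    return h_sum / len(images)

def locate(image, h_conj):
    # Correlate a test image with the averaged filter; the peak marks the eye.
    response = np.real(np.fft.ifft2(np.fft.fft2(image) * h_conj))
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(0)                    # toy stand-ins for face crops
images = [rng.random((64, 64)) for _ in range(8)]
eyes = [(20, 24)] * 8
print(locate(images[0], train_asef(images, eyes)))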

D. S. Bolme, J. R. Beveridge, and B. A. Draper. FaceL: Facile Face Labeling. International Conference on Computer Vision Systems. 2009. (PDF) (Abstract)

FaceL is a simple and fun face recognition system that labels faces in live video from an iSight camera or webcam. FaceL presents a window with a few controls and annotations displayed over the live video feed. The annotations indicate detected faces, positions of eyes, and after training, the names of enrolled people. Enrollment is video based, capturing many images per person. FaceL does a good job of distinguishing between a small set of people in fairly uncontrolled settings and incorporates a novel incremental training capability. The system is very responsive, running at over 10 frames per second on modern hardware. FaceL is open source and can be downloaded from http://pyvision.sourceforge.net/facel.

2007

Jen Mei Chang, Michael Kirby, Holger Kley, J. Ross Beveridge, Bruce A. Draper, and Chris Peterson. Examples of set-to-set image classification. Seventh International Conference on Mathematics in Signal Processing Conference Digest. December 2007. (PDF) (Abstract)

We present a framework for representing a set of images as a point on a Grassmann manifold. A collection of sets of images for a specific class is then associated with a collection of points on this manifold. Relationships between classes, as defined by points associated with sets of images, may be determined using the projection F-norm, geodesic, or other distances on the Grassmann manifold. We present several applications of this approach for image classification.
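
The distances named above can be sketched with NumPy: an image set becomes an orthonormal basis (a point on the Grassmann manifold), and both the geodesic distance and the projection F-norm follow from the principal angles between two such bases. The data and subspace dimension below are arbitrary stand-ins, and the projection F-norm is written here up to a constant factor.

import numpy as np

def orthonormal_basis(X, k):
    # Represent an image set (columns of X) by its k leading left singular vectors,
    # i.e. a k-dimensional subspace viewed as a point on the Grassmann manifold.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def principal_angles(Q1, Q2):
    # Principal angles between the subspaces spanned by orthonormal bases Q1 and Q2.
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def geodesic_distance(Q1, Q2):
    return np.linalg.norm(principal_angles(Q1, Q2))

def projection_fnorm(Q1, Q2):
    # Projection F-norm distance (up to a constant factor): norm of the angle sines.
    return np.linalg.norm(np.sin(principal_angles(Q1, Q2)))

rng = np.random.default_rng(1)
A = orthonormal_basis(rng.random((400, 10)), k=5)   # a set of 10 images, 400 pixels each
B = orthonormal_basis(rng.random((400, 10)), k=5)
print(geodesic_distance(A, B), projection_fnorm(A, B))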

J. M. Chang, M. Kirby, H. Kley, C. Peterson, B. A. Draper, and J. R. Beveridge. Recognition of Digital Images of the Human Face at Ultra Low Resolution Via Illumination Spaces. ACCV. 2007.

D.S. Bolme, Michelle M. Strout, and J. Ross Beveridge. FacePerf: Face Recognition Performance Benchmarks. IEEE International Symposium on Workload Characterization. September 2007. (PDF) (Online) (Abstract)

In this paper we present a collection of C and C++ biometric performance benchmark algorithms called FacePerf. The benchmark includes three different face recognition algorithms that are historically important to the face recognition community: Haar-based face detection, Principal Components Analysis, and Elastic Bunch Graph Matching. The algorithms are fast enough to be useful in realtime systems; however, improving performance would allow the algorithms to process more images or search larger face databases. Bottlenecks for each phase in the algorithms have been identified. A cosine approximation was able to reduce the execution time of the Elastic Bunch Graph Matching implementation by 32%.

D.S. Bolme, J. Ross Beveridge, and Adele E. Howe. Person Identification Using Text and Image Data. Proceedings of IEEE Conference on Biometrics: Theory, Applications and Systems. September 2007. (PDF) (Abstract)

This paper presents a bimodal identification system using text-based term vectors and EBGM face recognition. Identification was tested on a database of 118 celebrities downloaded from the internet. The dataset contained multiple images and two biographies for each person. Text-based identification had a 100% identification rate for the full biographies. When the text data was artificially restricted to six sentences per subject, rank one identification rates were similar to face recognition (approx. 22%). In this restricted case, combining text identification and face identification showed a significant improvement in the identification rate over either method alone.
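
The paper's exact fusion rule is not restated in this listing; the hypothetical sketch below shows one common way such a bimodal combination can work: cosine similarity over term-count vectors for the text side, stand-in matcher scores for the face side, and z-score normalization followed by sum fusion. All names and numbers are invented for illustration.

import numpy as np
from collections import Counter

def term_vector(text, vocab):
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def zscore(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / (scores.std() + 1e-12)

# Toy gallery of two people with short biography snippets.
vocab = sorted({"actor", "singer", "film", "album", "award"})
gallery_bios = ["actor film film award", "singer album album award"]
probe_bio = "award winning film actor"

text_scores = [cosine(term_vector(probe_bio, vocab), term_vector(b, vocab))
               for b in gallery_bios]
face_scores = [0.41, 0.22]          # stand-in similarity scores from a face matcher
fused = zscore(text_scores) + zscore(face_scores)
print("identified gallery index:", int(np.argmax(fused)))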

Jen Mei Chang, Michael Kirby, and Chris Peterson. Set-to-set face recognition under variation of pose and illumination. Proceedings of 2007 Biometrics Symposium at The Biometric Consortium Conference. September 2007. (PDF) (Abstract)

Poster: Face recognition under variations in illumination and pose has been recognized as a difficult problem, with pose appearing somewhat more challenging to handle than variations in illumination.

2006

Jen Mei Chang, J. Ross Beveridge, Bruce A. Draper, Michael Kirby, Holger Kley, and Chris Peterson. Illumination face spaces are idiosyncratic. The International Conference on Image Processing, Computer Vision, and Pattern Recognition. 2006. (PDF) (Abstract)

Illumination spaces capture how the appearances of human faces vary under changing illumination. This work models illumination spaces as points on a Grassmann manifold and uses distance measures on this manifold to show that every person in the CMU-PIE and Yale data sets has a unique and identifying illumination space. This suggests that variations under changes in illumination can be exploited for their discriminatory information. As an example, when face recognition is cast as matching sets of face images to sets of face images, subjects in the CMU-PIE and Yale databases can be recognized with 100% accuracy.

2005

J. Ross Beveridge, D.S. Bolme, Bruce A. Draper, and Marcio L. Teixeira. The CSU Face Identification Evaluation System. Machine Vision and Applications. 2005. (Online) (Abstract)

The CSU Face Identification Evaluation System includes standardized image preprocessing software, four distinct face recognition algorithms, analysis tools to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The four algorithms provided are principal components analysis (PCA), a.k.a. eigenfaces, a combined principal components analysis and linear discriminant analysis algorithm (PCA + LDA), an intrapersonal/extrapersonal image difference classifier (IIDC), and an elastic bunch graph matching (EBGM) algorithm. The PCA + LDA, IIDC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC, respectively. One analysis tool generates cumulative match curves; the other generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc., using Monte Carlo sampling to generate probe and gallery choices. The sample probability distributions at each rank allow standard error bars to be added to cumulative match curves. The tool also generates sample probability distributions for the paired difference of recognition rates for two algorithms. Whether one algorithm consistently outperforms another is easily tested using this distribution. The CSU Face Identification Evaluation System is available through our Web site and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
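
A hedged sketch, on synthetic data, of the two analysis ideas mentioned above: a cumulative match curve computed from a probe-by-gallery similarity matrix, and a Monte Carlo resampling loop that attaches a standard error to the rank-1 rate. The released tool samples probe and gallery image choices per subject; the subject-level bootstrap below merely stands in for that procedure.

import numpy as np

def cmc(similarity, probe_ids, gallery_ids):
    # Fraction of probes whose correct gallery match appears at rank <= r.
    n_probes, n_gallery = similarity.shape
    ranks = np.empty(n_probes, dtype=int)
    for i in range(n_probes):
        order = np.argsort(-similarity[i])          # best match first
        ranks[i] = np.where(gallery_ids[order] == probe_ids[i])[0][0]
    return np.array([(ranks < r).mean() for r in range(1, n_gallery + 1)])

rng = np.random.default_rng(2)
n_subjects, dim = 30, 16
templates = rng.normal(size=(n_subjects, dim))
gallery = templates + 0.3 * rng.normal(size=templates.shape)   # one gallery image each
probes = templates + 0.3 * rng.normal(size=templates.shape)    # one probe image each
ids = np.arange(n_subjects)

sim = -np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
curve = cmc(sim, ids, ids)
print("rank-1 rate:", curve[0])

# Monte Carlo-style resampling of which subjects enter each trial.
rank1 = []
for _ in range(200):
    pick = rng.choice(n_subjects, size=n_subjects, replace=True)
    rank1.append(cmc(sim[np.ix_(pick, pick)], ids[pick], ids[pick])[0])
print("approximate standard error on rank-1 rate:", np.std(rank1))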

2004

Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and D.S. Bolme. Using a Generalized Linear Mixed Model to Study the Configuration Space of PCA+LDA Human Face Recognition Algorithm. Lecture Notes in Computer Science : Articulated Motion and Deformable Objects. 2004. (PDF) (Online) (Abstract)

A generalized linear mixed model is used to estimate how rank 1 recognition of human faces with a PCA+LDA algorithm is affected by the choice of distance metric, image size, PCA space dimensionality, supplemental training and inclusion of subjects in the training. Random effects for replicated training sets and for repeated measures on people were included in the model. Results indicate that between-people variation was a dominant source of variability and that there was moderate correlation within people. Statistically significant effects and interactions were found for all configuration factors except image size. Changes to the PCA+LDA configuration only improved recognition for subjects who had images included in the training data. For subjects not included in training, no configuration changes were helpful. This study is instructive for what it reveals about PCA+LDA. It is also a model for how to conduct such studies. For example, by accounting for subject variation as a random effect and explicitly looking for interaction effects, we are able to discern effects that might otherwise have been masked by subject variation and interaction effects.

2003

Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and D.S. Bolme. A Statistical Assessment of Subject Factors in the PCA Recognition of Human Faces. Computer Vision and Pattern Recognition. 2003. (PDF) (Online) (Abstract)

Some people's faces are easier to recognize than others, but it is not obvious what subject-specific factors make individual faces easy or difficult to recognize. This study considers 11 factors that might make recognition easy or difficult for 1,072 human subjects in the FERET dataset. The specific factors are: race (white, Asian, African-American, or other), gender, age (young or old), glasses (present or absent), facial hair (present or absent), bangs (present or absent), mouth (closed or other), eyes (open or other), complexion (clear or other), makeup (present or absent), and expression (neutral or other). An ANOVA is used to determine the relationship between these subject covariates and the distance between pairs of images of the same subject in a standard Eigenfaces subspace. Some results are not terribly surprising. For example, the distance between pairs of images of the same subject increases for people who change their appearance, e.g., open and close their eyes, open and close their mouth or change expression. Thus changing appearance makes recognition harder. Other findings are surprising. Distance between pairs of images for subjects decreases for people who consistently wear glasses, so wearing glasses makes subjects more recognizable. Pairwise distance also decreases for people who are either Asian or African-American rather than white. A possible shortcoming of our analysis is that minority classifications such as African-Americans and wearers-of-glasses are underrepresented in training. Follow-up experiments with balanced training address this concern and corroborate the original findings. Another possible shortcoming of this analysis is the novel use of pairwise distance between images of a single person as the predictor of recognition difficulty. A separate experiment confirms that larger distances between pairs of subject images imply a larger recognition rank for that same pair of images, thus confirming that the subject is harder to recognize.
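
An illustrative, fully synthetic version of the kind of ANOVA described above: regress a pairwise image distance on a few categorical subject covariates and test their effects. The covariate names, effect sizes, and data below are invented and do not reproduce the FERET analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "glasses": rng.choice(["yes", "no"], n),
    "expression": rng.choice(["neutral", "other"], n),
    "race": rng.choice(["white", "asian", "african_american", "other"], n),
})
# Synthetic pairwise distance: expression changes raise it, glasses lower it.
df["distance"] = (1.0
                  + 0.4 * (df["expression"] == "other")
                  - 0.2 * (df["glasses"] == "yes")
                  + rng.normal(scale=0.5, size=n))

model = smf.ols("distance ~ C(glasses) + C(expression) + C(race)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))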

D.S. Bolme. Elastic Bunch Graph Matching. Master's Thesis: Colorado State University. May 2003. (PDF) (Abstract)

Elastic Bunch Graph Matching is a face recognition algorithm that is distributed with CSU's Evaluation of Face Recognition Algorithms System. The algorithm is modeled after the Bochum/USC face recognition algorithm used in the FERET evaluation. The algorithm recognizes novel faces by first localizing a set of landmark features and then measuring similarity between these features. Both localization and comparison use Gabor jets extracted at landmark positions. In localization, jets are extracted from novel images and matched to a set of training/model jets. Similarity between novel images is expressed as a function of similarity between localized Gabor jets corresponding to facial landmarks. A study of how accurately a landmark is localized using different displacement estimation methods is presented. The overall performance of the algorithm subject to changes in the number of training/model images, choice of specific wavelet encoding, displacement estimation technique and Gabor jet similarity measure is explored in a series of independent tests. Several findings were particularly striking, including results suggesting that landmark localization is less reliable than might be expected. However, it is also striking that this did not appear to greatly degrade recognition performance.
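
A rough sketch of Gabor-jet extraction and the magnitude-based jet similarity that EBGM-style matching relies on. The kernel parameters, image data, and landmark location below are illustrative assumptions, not the settings studied in the thesis.

import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # Complex Gabor kernel: Gaussian envelope times an oriented complex sinusoid.
    ys, xs = np.mgrid[-size//2:size//2 + 1, -size//2:size//2 + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def extract_jet(image, y, x, kernels):
    # Complex filter responses at one landmark; similarity below uses their magnitudes.
    jet = []
    for k in kernels:
        h = k.shape[0] // 2
        patch = image[y - h:y + h + 1, x - h:x + h + 1]
        jet.append(np.sum(patch * k))
    return np.array(jet)

def jet_similarity(j1, j2):
    # Normalized dot product of jet magnitudes (a phase-insensitive similarity).
    a, b = np.abs(j1), np.abs(j2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

kernels = [gabor_kernel(16, w, t, sigma=6.0)
           for w in (4, 8, 16) for t in np.linspace(0, np.pi, 4, endpoint=False)]
rng = np.random.default_rng(4)
img1, img2 = rng.random((64, 64)), rng.random((64, 64))
print(jet_similarity(extract_jet(img1, 32, 32, kernels),
                     extract_jet(img2, 32, 32, kernels)))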

J. Ross Beveridge, D.S. Bolme, Marcio L. Teixeira, and Bruce A. Draper. The CSU Face Identification Evaluation System User's Guide: Version 5.0. Unpublished: Computer Science Department Colorado State University. May 2003. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. This document describes Version 5.0 of the Colorado State University (CSU) Face Identification Evaluation System. The system includes standardized image pre-processing software, four distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The pre-processing code replicates the pre-processing used in the FERET evaluations. The four algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), a Bayesian Intrapersonal/Extrapersonal Classifier (BIC), and an Elastic Bunch Graph Matching (EBGM) algorithm. The PCA+LDA, BIC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC, respectively. Two different analysis programs are included in the evaluation system. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the four algorithms. It generates a Cumulative Match Curve that plots recognition rate as a function of recognition rank. These plots are common in evaluations such as the FERET evaluation and the Face Recognition Vendor Tests. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. It will also generate a sample probability distribution for the paired difference between recognition rates for two algorithms, providing an excellent basis for testing whether one algorithm consistently outperforms another. The CSU Face Identification Evaluation System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

D.S. Bolme, J. Ross Beveridge, Marcio L. Teixeira, and Bruce A. Draper. The CSU Face Identification Evaluation System: Its Purpose, Features and Structure. Proc. 3rd International Conf. on Computer Vision Systems. April 2003. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The pre-processing code replicates features of the pre-processing used in the FERET evaluations. The three algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT, respectively. There are two analysis tools. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve of recognition rate versus recognition rank. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

2002

D.S. Bolme, Marcio L. Teixeira, J. Ross Beveridge, and Bruce A. Draper. The CSU Face Identification Evaluation System User's Guide: Version 4.0. Unpublished: Computer Science Department Colorado State University. October 2002. (PDF) (Abstract)

The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. This document describes Version 4.0 of the Colorado State University (CSU) Face Identification Evaluation System. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The preprocessing code replicates features of the preprocessing used in the FERET evaluations. The three algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT, respectively. Two different analysis programs are included in the evaluation system. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve that plots recognition rate as a function of recognition rank. These plots are common in evaluations such as the FERET evaluation and the Face Recognition Vendor Tests. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The CSU Face Identification Evaluation System is available through our website and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.

2001

J. Ross Beveridge, Kai She, Bruce A. Draper, and Geof H. Givens. Parametric and Nonparametric Methods for the Statistical Evaluation of Human ID Algorithms. December 2001.

J. Ross Beveridge, Kai She, Bruce A. Draper, and Geof H. Givens. A Nonparametric Statistical Comparison of Principal Component and Linear Discriminant Subspaces for Face Recognition. December 2001. (PDF) (Abstract)

The FERET evaluation compared recognition rates for different semi-automated and automated face recognition algorithms. We extend FERET by considering when differences in recognition rates are statistically distinguishable subject to changes in test imagery. Nearest Neighbor classifiers using principal component and linear discriminant subspaces are compared using different choices of distance metric. Probability distributions for algorithm recognition rates and pairwise differences in recognition rates are determined using a permutation methodology. The principal component subspace with Mahalanobis distance is the best combination; using L2 is second best. Choice of distance measure for the linear discriminant subspace matters little, and performance is always worse than the principal components classifier using either Mahalanobis or L1 distance. We make the source code for the algorithms, scoring procedures and Monte Carlo study available in the hopes others will extend this comparison to newer algorithms.
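
In the spirit of the permutation methodology described above, the synthetic sketch below tests whether a paired difference in rank-1 recognition rates is distinguishable from chance by randomly swapping the per-probe outcomes of two algorithms. The actual study resamples probe and gallery image choices; this simpler sign-flip test is only a stand-in, and the success rates are invented.

import numpy as np

rng = np.random.default_rng(5)
n_probes = 300
alg_a = rng.random(n_probes) < 0.82        # rank-1 success indicators, algorithm A
alg_b = rng.random(n_probes) < 0.75        # rank-1 success indicators, algorithm B
observed = alg_a.mean() - alg_b.mean()

diffs = []
for _ in range(5000):
    swap = rng.random(n_probes) < 0.5      # per-probe label swap under the null
    a = np.where(swap, alg_b, alg_a)
    b = np.where(swap, alg_a, alg_b)
    diffs.append(a.mean() - b.mean())
p_value = float(np.mean(np.abs(diffs) >= abs(observed)))
print("observed difference:", observed, "permutation p-value:", p_value)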

Face Recognition; Algebraic Geometry

2009

Yui Man Lui, J. Ross Beveridge, and Michael Kirby. Canonical Stiefel Quotient and its Application to Generic Face Recognition in Illumination Spaces. International Conference on Biometrics: Theory, Applications, and Systems. 2009.


Face Recognition; Evaluation

2009

P. J. Phillips, P. J. Flynn, J. R. Beveridge, W. T. Scruggs, A. J. O'Toole, D. S. Bolme, K. W. Bowyer, B. A. Draper, G. H. Givens, Y. M. Lui, H. Sahibzada, J. A. Scallan III, and S. Weimer. Overview of the Multiple Biometrics Grand Challenge. IEEE International Conference on Biometrics. June 2009. (Abstract)

The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions. The MBGC is organized into three challenge problems. Each challenge problem relaxes the acquisition constraints in different directions. In the Portal Challenge Problem, the goal is to recognize people from near-infrared (NIR) and high definition (HD) video as they walk through a portal. Iris recognition can be performed from the NIR video and face recognition from the HD video. The availability of NIR and HD modalities allows for the development of fusion algorithms. The Still Face Challenge Problem has two primary goals. The first is to improve recognition performance from frontal and off angle still face images taken under uncontrolled indoor and outdoor lighting. The second is to improve recognition performance on still frontal face images that have been resized and compressed, as is required for electronic passports. In the Video Challenge Problem, the goal is to recognize people from video in unconstrained environments. The video is unconstrained in pose, illumination, and camera angle. All three challenge problems include a large data set, experiment descriptions, ground truth, and scoring code.

J. R. Beveridge, G. H. Givens, P. J. Phillips, B. A. Draper, D. S. Bolme, and Y. M. Lui. FRVT 2006: Quo Vadis Face Quality. IVCJ. 2009. (Abstract)

This paper summarizes a study of how three state-of-the-art algorithms from the Face Recognition Vendor Test 2006 (FRVT 2006) are affected by factors related to face images and the people being recognized. The recognition scenario compares highly controlled images to images taken of people as they stand before a camera in settings such as hallways and outdoors in front of buildings. A Generalized Linear Mixed Model (GLMM) is used to estimate the probability an algorithm successfully verifies a person conditioned upon the factors included in the study. The factors associated with people are: gender, race, age and whether they wear glasses. The factors associated with images are: the size of the face, edge density and region density. The setting, indoors versus outdoors, is also a factor. Edge density can change the estimated probability of verification dramatically, for example from about 0.15 to 0.85. However, this effect is not consistent across algorithm or setting. This finding shows that simple measurable factors are capable of characterizing face quality; however, these factors typically interact with both algorithm and setting.
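
The listing does not define "edge density"; one plausible reading, sketched below as an assumption for illustration only, is the mean gradient magnitude over the face region of a grayscale image.

import numpy as np

def edge_density(face_region):
    # Mean gradient magnitude over the (assumed grayscale) face crop.
    gy, gx = np.gradient(face_region.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

rng = np.random.default_rng(6)
print(edge_density(rng.random((128, 128))))   # toy stand-in for a face crop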

Uncategorized

2008

J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, Bruce A. Draper, and Yui Man Lui. Focus on Quality, Predicting FRVT 2006 Performance. IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, The Netherlands. 2008.

Yui Man Lui, J. Ross Beveridge, and Darrell Whitley. A Novel Appearance Model and Adaptive Condensation Algorithm for Human Face Tracking. IEEE International Conference on Biometrics: Theory, Applications and Systems. 2008.

Yui Man Lui, J. Ross Beveridge, Bruce A. Draper, and Michael Kirby. Image-Set Matching using a Geodesic Distance and Cohort Normalization. IEEE International Conference on Automatic Face and Gesture Recognition. 2008.

Y. M. Lui, and J. R. Beveridge. Grassmann Registration Manifolds for Face Recognition. European Conference on Computer Vision. 2008.

2007

P. J. Phillips, J. R. Beveridge, G. H. Givens, B. A. Draper, and Y. M. Lui. Preliminary Covariate Analysis Results for a Fusion of Three FRVT 2006 Algorithms. Unpublished: Presentation at the NIST Biometric Quality Workshop, Gaithersburg, MD. November 2007.

J. R. Beveridge, P. Flynn, A. Alvarez, J. Saraf, W. Fisher, and J. Gentile. Face Detection Algorithm and Feature Performance on FRGC 2.0 Imagery. September 2007.

J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, and Bruce A. Draper. Factors that Influence Algorithm Performance in the Face Recognition Grand Challenge. Computer Vision and Image Understanding. December 2007.

J. Ross Beveridge, Bruce A. Draper, Jen Mei Chang, Michael Kirby, Holger Kley, and Chris Peterson. Principal Angles Separate Subject Illumination Spaces in YDB and CMU-PIE. IEEE Trans. on Pattern Analysis and Machine Intelligence. (under revision) 2007.

A. Clark, N. A. Thacker, J. L. Barron, J. R. Beveridge, P. Courtney, W. R. Crum, V. Ramesh, and C. Clark. Performance Characterization in Computer Vision: A Guide to Best Practices. Computer Vision and Image Understanding. June (online) 2007.

Y. M. Lui, J. R. Beveridge, A. E. Howe, and D. Whitley. Evolution Strategies for Matching Active Appearance Models to Human Faces. IEEE International Conference on Biometrics: Theory, Applications and Systems. 2007.

2006

J. Ross Beveridge, Geof H. Givens, Bruce A. Draper, and P. Jonathon Phillips. Linear and Generalized Linear Models for Analyzing Face Recognition Performance. Pattern Recognition. (under revision) 2006.

J. Ross Beveridge, Jilmil Saraf, and Ben Randall. A Comparison of Pixel, Edge and Wavelet Features for Face Detection using a Semi-Naive Bayesian Classifier. 2006.

2005

J. Ross Beveridge, Bruce A. Draper, Geof H. Givens, and Ward Fisher. Introduction to the Statistical Evaluation of Face Recognition Algorithms. 2005.

Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, and P. Jonathon Phillips. Repeated Measures GLMM Estimation of Subject-Related and False Positive Threshold Effects on Human Face Verification Performance. Empirical Evaluation Methods in Computer Vision Workshop: In Conjunction with CVPR 2005. June 2005.

2004

Geof H. Givens, J. Ross Beveridge, Bruce A. Draper, Patrick Grother, and P. Jonathon Phillips. How Features of the Human Face Affect Recognition: a Statistical Comparison of Three Face Recognition Algorithms. Proceedings: IEEE Computer Vision and Pattern Recognition 2004. 2004.

2003

Bruce A. Draper, Kyungim Baek, M.S. Bartlett, and J. Ross Beveridge. Recognizing Faces with PCA and ICA. Computer Vision and Image Understanding. July 2003.

Bruce A. Draper, J. Ross Beveridge, Wim Bohm, Charlie Ross, and Monica Chawathe. Accelerated Image Processing on FPGAs. IEEE Transactions on Image Processing. December 2003.

2002

Kyungim Baek, Bruce A. Draper, J. Ross Beveridge, and Kai She. PCA vs. ICA: A Comparison on the FERET Data Set. Joint Conference on Information Sciences. March 2002.

Wendy S. Yambor, Bruce A. Draper, and J. Ross Beveridge. Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures. 2002.

2001

J. Ross Beveridge. The Geometry of LDA and PCA Classifiers Illustrated with 3D Examples. 2001.

2000

Wendy S. Yambor, Bruce A. Draper, and J. Ross Beveridge. Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures. July 2000. (PDF) (Abstract)

This study examines the role of Eigenvector selection and Eigenspace distance measures on PCA-based face recognition systems. In particular, it builds on earlier results from the FERET face recognition evaluation studies, which created a large face database (1,196 subjects) and a baseline face recognition system for comparative evaluations. This study looks at using combinations of traditional distance measures (City-block, Euclidean, Angle, Mahalanobis) in Eigenspace to improve performance in the matching stage of face recognition. A statistically significant improvement is observed for the Mahalanobis distance alone when compared to the other three alone. However, no combinations of these measures appear to perform better than Mahalanobis alone. This study also examines questions of how many Eigenvectors to select and according to what ordering criterion. It compares variations in performance due to different distance measures and numbers of Eigenvectors. Ordering Eigenvectors according to a like-image difference value rather than their Eigenvalues is also considered.
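
The four distance measures named above can be written down directly for coefficient vectors obtained by projecting images into a PCA (Eigenfaces) subspace. The sketch below uses generic forms of each measure (the FERET baseline code defines several Mahalanobis-like variants); the eigenvalues and vectors are random stand-ins.

import numpy as np

def city_block(u, v):
    return float(np.sum(np.abs(u - v)))

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

def angle(u, v):
    # Negative cosine similarity, so smaller values mean closer matches.
    return float(-(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def mahalanobis(u, v, eigenvalues):
    # Whiten each coefficient by the standard deviation along its eigenvector.
    return float(np.linalg.norm((u - v) / np.sqrt(eigenvalues)))

rng = np.random.default_rng(7)
eigvals = np.sort(rng.random(20))[::-1] + 0.1   # stand-in PCA eigenvalues
u, v = rng.normal(size=20), rng.normal(size=20) # stand-in projection coefficients
print(city_block(u, v), euclidean(u, v), angle(u, v), mahalanobis(u, v, eigvals))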

J. Ross Beveridge, Karthik Balasubramaniam, and Darrell Whitley. Matching Horizon Features Using a Messy Genetic Algorithm. Computer Methods in Applied Mechanics and Engineering. 2000.

M. R. Stevens, and J. R. Beveridge. Localized Scene Interpretation from 3D Models, Range, and Optical Data. Image Understanding. 2000.

Mark R. Stevens, and J. Ross Beveridge. Integrating Graphics and Vision for Object Recognition. 2000.

1999

J. Ross Beveridge. LiME: An Environment for 2D Line Segment Matching. Workshop on Performance Characterisation and Benchmarking of Vision Systems. January 1999.

1998

J. Ross Beveridge. Optimal 2D Model Matching Using a Messy Genetic Algorithm. Proceedings of AAAI-98, Madison. August 1998.

W. Najjar, Bruce A. Draper, Wim Bohm, and J. Ross Beveridge. The Cameron Project: High-Level Programming of Image Processing Applications on Reconfigurable Computing Machines. Proceedings of the Workshop on Reconfigurable Computing. October 1998.

Darrell Whitley, J. Ross Beveridge, and Charlie Ross. Automated Velocity Picking: A Computer Vision and Optimization Approach. 1998. (PDF)

Mark R. Stevens, and J. Ross Beveridge. Multisensor Occlusion Reasoning. Proceedings of the 14th International Conference on Pattern Recognition. August 1998.

1997

Edward M. Riseman, Allen R. Hanson, J. Ross Beveridge, Rakesh Kumar, and Harpreet Sawhney. Landmark-Based Navigation and the Acquisition of Environmental Models. 1997.

Mark R. Stevens, Charles W. Anderson, and J. Ross Beveridge. Efficient Indexing for Object Recognition Using Large Networks. Proc. 1997 IEEE International Conference on Neural Networks. June 1997. (PDF)

Darrell Whitley, J. Ross Beveridge, Christopher R. Graves, and C. Guerra-Salcedo. Messy Genetic Algorithms for Subset Feature Selection. Proc. 1997 International Conference on Genetic Algorithms. July 1997. (PDF)

Mark R. Stevens, J. Ross Beveridge, and Michael E. Goss. Visualizing Multisensor Model-Based Object Recognition. Machine Graphics & Vision. 1997. (PDF)

Mark R. Stevens, and J. Ross Beveridge. Precise Matching of 3-D Target Models to Multisensor Data. IEEE Transactions on Image Processing. January 1997. (PDF) (Online)

J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. How Easy is Matching 2D Line Models Using Local Search? T-PAMI. June 1997. (PDF)

J. Ross Beveridge, Bruce A. Draper, Mark R. Stevens, Kris Siejko, and Allen R. Hanson. A Coregistration Approach to Multisensor Target Recognition with Extensions to Exploit Digital Elevation Map Data. 1997. (PDF)

J. Ross Beveridge, Christopher R. Graves, and Jim Steinborn. Comparing Random-Starts Local Search with Key-Feature matching. Proc. 1997 International Joint Conference on Artificial Intelligence. August 1997. (PDF)

Mark R. Stevens, and J. Ross Beveridge. Using Multisensor Occlusion Reasoning in Multisensor Object Recognition. Proc. 1997 IEEE International Conference on Computer Vision and Pattern Recognition. June 1997. (PDF)

J. Ross Beveridge, and Jim Steinborn. A Tutorial on a Sliding Window Target Detection Algorithm Implemented in the DARPA Image Understanding Environment. 1997.

J. Ross Beveridge. LiME Users Guide. November 1997. (PDF)

1996

Darrell Whitley, J. Ross Beveridge, Christopher R. Graves, and K. Mathias. Test Driving Three 1995 Genetic Algorithms: New Test Functions and Geometric Matching. Journal of Heuristics. 1996.

Anthony N. A. Schwickerath. Simultaneous Refinement of Pose and Sensor Registration. Master's Thesis: Colorado State University. October 1996. (PDF)

J. Ross Beveridge, Christopher R. Graves, and Christopher E. Lesher. Some Lessons Learned from Coding the Burns Line Extraction Algorithm in the DARPA Image Understanding Environment. October 1996. (PDF)

J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. How Easy is Matching 2D Line Models Using Local Search? May 1996. (PDF)

J. Ross Beveridge, Mark R. Stevens, Z. Zhang, and Michael E. Goss. Approximate Image Mappings Between Nearly Boresight Aligned Optical and Range Sensors. April 1996. (PDF)

J. Ross Beveridge, and Mark R. Stevens. CAD-based Target Identification in Range, IR and Color Imagery Using On-Line Rendering and Feature Prediction. 1996. (PDF) (Abstract)

Results for a multisensor CAD-based object recognition system are presented in the context of Automatic Target Recognition using nearly boresight aligned range, IR and color sensors. The system is shown to identify targets in a test suite of image triples. This suite includes targets at low resolution, at unusual aspect angles, and partially obscured by terrain. The key concept presented in this work is that of using on-line rendering of CAD models to support an iterative predict, match and refine cycle. This cycle optimizes the match subject to variability both in object pose and sensor registration. An occlusion reasoning component further illustrates the power of this approach by customizing the predicted features to fit specific scene geometry. Occlusion reasoning detects occlusion in the range data and adjusts the features predicted to be visible accordingly.

Anthony N. A. Schwickerath, and J. Ross Beveridge. Coregistering 3D Models, Range, and Optical Imagery Using Least-Median Squares Fitting. Proceedings: Image Understanding Workshop. February 1996. (PDF)

Mark R. Stevens, and J. Ross Beveridge. Interleaving 3D Model Feature Prediction and Matching to Support Multi-Sensor Object Recognition. Proceedings: Image Understanding Workshop. February 1996.

Mark R. Stevens, and J. Ross Beveridge. Optical Linear Feature Detection Based on Model Pose. Proceedings: Image Understanding Workshop. February 1996. (PDF)

Mark R. Stevens, and J. Ross Beveridge. Interleaving 3D Model Feature Prediction and Matching to Support Multi-Sensor Object Recognition. International Conference on Pattern Recognition. August 1996. (PDF)

John Dolan, Charlie Kohl, Richard Lerner, Joseph Mundy, Terrance Boult, and J. Ross Beveridge. Solving Diverse Image Understanding Problems Using the Image Understanding Environment. Proceedings: Image Understanding Workshop. February 1996.

J. Ross Beveridge, Christopher R. Graves, and Christopher E. Lesher. Local Search as a Tool for Horizon Line Matching. Proceedings: Image Understanding Workshop. February 1996. (PDF)

Anthony N. A. Schwickerath, and J. Ross Beveridge. Coregistration of Range and Optical Images Using Coplanarity and Orientation Constraints. 1996 Conference on Computer Vision and Pattern Recognition. June 1996. (PDF)

J. Ross Beveridge, Bruce A. Draper, and Kris Siejko. Progress on Target and Terrain Recognition Research at Colorado State University. Proceedings: Image Understanding Workshop. February 1996. (PDF)

1995

J. Ross Beveridge, and Edward M. Riseman. Optimal Geometric Model Matching Under Full 3D Perspective. Computer Vision and Image Understanding. 1995. (PDF) (Abstract)

Model-based object recognition systems have rarely dealt directly with 3D perspective while matching models to images. The algorithms presented here use 3D pose recovery during matching to explicitly and quantitatively account for changes in model appearance associated with 3D perspective. These algorithms use random-start local search to find, with high probability, the globally optimal correspondence between model and image features in spaces containing a very large number of possible matches. Three specific algorithms are compared on robot landmark recognition problems. A full perspective algorithm uses the 3D pose algorithm in all stages of search, while two hybrid algorithms use a computationally less demanding weak perspective procedure to rank alternative matches and update 3D pose only when moving to a new match. These hybrids successfully solve problems involving perspective in less time than required by the full perspective algorithm.

Mark R. Stevens. Obtaining 3D Silhouettes and Sampled Surfaces from Solid Models for use in Computer Vision. Master's Thesis: Colorado State University. 1995. (PDF)

J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. Model-based Fusion of FLIR, Color and LADAR. Proceedings of the Sensor Fusion and Networked Robotics VIII Conference. October 1995.

Michael E. Goss, J. Ross Beveridge, Mark R. Stevens, and Aaron Fuegi. Three-dimensional visualization environment for Multisensor data analysis, interpretation, and model-based object recognition. SPIE Symposium on Electronic Imaging: Science and Technology. February 1995.

Mark R. Stevens, J. Ross Beveridge, and Michael E. Goss. Reduction of BRL/CAD Models and Their Use in Automatic Target Recognition Algorithms. Proceedings: BRL-CAD Symposium. June 1995. (PDF) (Abstract)

We are currently developing an Automatic Target Recognition (ATR) algorithm to locate an object using multisensor data. The ATR algorithm will determine corresponding points between a range (LADAR) image, a color (CCD) image, a thermal (FLIR) image and a BRL/CAD model of the object being located. The success of this process depends in part on which features can be automatically extracted from the model database. The BRL/CAD models we have for this process contain more detail than can be productively used by our ATR algorithm and must be reduced to a more appropriate form.

J. Ross Beveridge, Edward M. Riseman, and Christopher R. Graves. Demonstrating Polynomial Run-Time Growth for Local Search Matching. Proceedings: International Symposium on Computer Vision. November 1995.

1994

Anthony N. A. Schwickerath, and J. Ross Beveridge. Object to Multisensor Coregistration with Eight Degrees of Freedom. Proceedings: Image Understanding Workshop. November 1994. (PDF)

Bruce A. Draper, and J. Ross Beveridge. Reply to: Performance Characterization in Computer Vision by Robert M. Haralick. CVGIP: Image Understanding. September 1994.

Michael E. Goss, J. Ross Beveridge, Mark R. Stevens, and Aaron Fuegi. Visualization and Verification of Automatic Target Recognition Results Using Combined Range and Optical Imagery. Proceedings: Image Understanding Workshop. November 1994. (PDF) (Abstract)

The Rangeview software system presented here addresses two significant issues in the development and deployment of an Automatic Target Recognizer: visualization of progress of the recognizer in finding a target, and verification by an operator of the correctness of the match. The system combines range imagery from a LADAR device with optical imagery from a color CCD camera and/or FLIR sensor to display a three-dimensional representation of the scene and the target model. Range imagery creates a partial three-dimensional representation of the scene. Optical imagery is mapped onto this partial three-dimensional representation. Output from the ATR is registered in three dimensions with the scene. Recognized targets are displayed in correct spatial relation to the scene, and the registered scene and target may be visually inspected from any viewpoint.

J. Ross Beveridge, and Edward M. Riseman. Optimal Geometric Model Matching Under Full 3D Perspective. Second CAD-Based Vision Workshop. February 1994. (PDF)

J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. RSTA Research of the Colorado State, University of Massachusetts and Alliant Techsystems Team. Image Understanding Workshop (separate addendum). November 1994. (PDF)

J. Ross Beveridge, Durga P. Panda, and Theodore Yachik. November 1993 Fort Carson RSTA Data Collection Final Report. January 1994. (PDF)

Shashi Buluswar, Bruce A. Draper, Allen Hanson, and Edward Riseman. Non-parametric Classification of Pixels Under Varying Outdoor Illumination. Proceedings: Image Understanding Workshop. November 1994. (PDF)

J. Ross Beveridge, Allen R. Hanson, and Durga P. Panda. Integrated Color CCD, FLIR and LADAR Based Object Modeling and Recognition. April 1994.

1993

Robert T. Collins, and J. Ross Beveridge. Matching Perspective Views of Coplanar Structures Using Projective Unwarping and Similarity Matching. Proceedings: 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. June 1993.

J. Ross Beveridge. Local Search Algorithms for Geometric Object Recognition: Optimal Correspondence and Pose. May 1993. (PDF) (Abstract)

Recognizing an object by its shape is a fundamental problem in computer vision and typically involves finding a discrete correspondence between object model and image features, as well as the pose (position and orientation) of the camera relative to the object. This thesis presents new algorithms for finding the optimal correspondence and pose of a rigid 3D object. They utilize new techniques for evaluating geometric matches and for searching the combinatorial space of possible matches. An efficient closed-form technique for computing pose under weak perspective (four-parameter 2D affine) is presented, and an iterative non-linear 3D pose algorithm is used to support matching under full 3D perspective.
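
The weak-perspective pose step mentioned above (a four-parameter fit: scale, rotation, and 2D translation) has a standard closed-form least-squares solution; the sketch below shows that generic form, not the thesis's exact derivation, and checks it on an invented noiseless correspondence set.

import numpy as np

def fit_similarity(model_xy, image_xy):
    # Least-squares s, theta, t such that s*R(theta) @ p + t approximates q,
    # computed in closed form using complex arithmetic.
    p = model_xy[:, 0] + 1j * model_xy[:, 1]
    q = image_xy[:, 0] + 1j * image_xy[:, 1]
    pc, qc = p - p.mean(), q - q.mean()
    srot = np.sum(np.conj(pc) * qc) / np.sum(np.abs(pc) ** 2)   # s * exp(i*theta)
    s, theta = np.abs(srot), np.angle(srot)
    t = q.mean() - srot * p.mean()
    return s, theta, np.array([t.real, t.imag])

# Toy check: recover a known transform from noiseless correspondences.
rng = np.random.default_rng(8)
model = rng.random((6, 2))
s_true, th_true, t_true = 2.0, 0.3, np.array([5.0, -1.0])
R = np.array([[np.cos(th_true), -np.sin(th_true)],
              [np.sin(th_true),  np.cos(th_true)]])
image = (s_true * model @ R.T) + t_true
print(fit_similarity(model, image))   # approximately (2.0, 0.3, [5.0, -1.0])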

1992

J. Ross Beveridge, and Edward M. Riseman. Can Too Much Perspective Spoil the View? A Case Study in 2D Affine Versus 3D Perspective Model Matching. Proceedings: Image Understanding Workshop. January 1992.

J. Ross Beveridge, and Edward M. Riseman. Hybrid Weak-Perspective and Full-Perspective Matching. Proceedings: IEEE 1992 Computer Society Conference on Computer Vision and Pattern Recognition. June 1992. (PDF) (Abstract)

Full perspective mappings between 3D objects and 2D images are more complicated than weak perspective mappings, which consider only rotation, translation and scaling. Therefore, in 3D model-based robot navigation, it is important to understand how and when full perspective must be taken into account. In this paper we use a probabilistic combinatorial optimization algorithm to search for an optimal match between 3D landmark and 2D image features. Three variations are considered: a weak perspective algorithm rotates, translates and scales an initial 2D projection of the 3D landmark; a full perspective algorithm always recomputes the robot's pose and reprojects the landmark when testing alternative matches; finally, a hybrid algorithm uses weak perspective to select a most promising alternative, but then updates the pose and reprojects the landmark. The hybrid algorithm appears to combine the best attributes of the other two. Like the full perspective algorithm, it reliably recovers the true pose of the robot, and like the weak perspective algorithm it runs faster than the full perspective algorithm.

J. Ross Beveridge. A Maximum Likelihood View of Point and Line Segment Match Evaluation. Unpublished draft. 1992.

J. Ross Beveridge. Comparing Subset-convergent and Variable-depth Local Search on Perspective Sensitive Landmark Recognition Problems. Proceedings: SPIE Intelligent Robots and Computer Vision XI: Algorithms, Techniques, and Active Vision. November 1992.

1991

J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Optimization of 2-Dimensional Model Matching. 1991.

J. Ross Beveridge, Bruce A. Draper, Allen R. Hanson, and Edward M. Riseman. Issues Central to a Useful Image Understanding Environment. The 20th AIPR Workshop. October 1991.

1990

J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Combinatorial Optimization Applied to Variable Scale 2D Model Matching. Proceedings of the IEEE International Conference on Pattern Recognition. June 1990.

Claude Fennema, Allen R. Hanson, Edward M. Riseman, J. Ross Beveridge, and Rakesh Kumar. Model-Directed Mobile Robot Navigation. IEEE Transactions on Systems, Man, and Cybernetics. November/December 1990.

1989

J. Brolio, Bruce A. Draper, J. Ross Beveridge, and Allen R. Hanson. The ISR: a Database for Symbolic Processing in Computer Vision. Computer. December 1989.

J. Ross Beveridge, Joey Griffith, Ralf R. Kohler, Allen R. Hanson, and Edward M. Riseman. Segmenting Images Using Localized Histograms and Region Merging. International Journal of Computer Vision. January 1989.

J. Ross Beveridge, Rich Weiss, and Edward M. Riseman. Optimization of 2-Dimensional Model Matching. Proceedings: Image Understanding Workshop. June 1989.

1987

J. Ross Beveridge, Joey Griffith, Ralf R. Kohler, Allen R. Hanson, and Edward M. Riseman. Segmenting Images Using Localized Histograms and Region Merging. October 1987.

George Reynolds, and J. Ross Beveridge. Searching for Geometric Structure in Images of Natural Scenes. Proceedings: Image Understanding Workshop. February 1987.

