Abstract:
The brain has the remarkable ability to rapidly and accurately extract meaning from a flood of complex and ever-changing sensory information. While great progress has been made in recent decades in developing computer algorithms that attempt to recreate these abilities in machines, even our most powerful algorithms still lag far behind the performance of biological systems. This performance gap has inspired the CVRL group at Notre Dame to pursue interdisciplinary work at the intersection of neuroscience and computer science, aimed at uncovering the algorithmic underpinnings of sensory processing in the brain, with the dual goals of advancing understanding in biology and building more robust and powerful artificial information processing systems. This talk will highlight some of our recent progress in these areas.
At the core of our approach is the idea that data from biological visual systems experiments can be incorporated directly into the process of building biologically inspired computer vision systems, both at the level of model selection and at the level of machine learning. The key insight is that while most computer vision and machine learning approaches focus solely on optimizing task performance (e.g., face recognition accuracy), we can also incorporate biological knowledge as a powerful regularizer on the space of possible solutions. We have demonstrated an initial proof of concept for this approach, incorporating detailed psychometric data from large-scale human psychophysics experiments into a kernel machine formulation to produce state-of-the-art performance on real-world face detection and social attribute assignment problems.
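One way to read this regularization idea, stated only schematically here (the symbols below are illustrative and not the group's own notation), is as an augmented kernel-machine objective in which a biologically derived penalty term constrains the learned function:

\[
\min_{f \in \mathcal{H}} \;\; \frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i, f(x_i)\big) \;+\; \lambda \, \|f\|_{\mathcal{H}}^{2} \;+\; \mu \, \Omega_{\mathrm{bio}}(f)
\]

where \(\ell\) is a standard classification loss, \(\|f\|_{\mathcal{H}}^{2}\) is the usual kernel-machine smoothness penalty, and \(\Omega_{\mathrm{bio}}\) penalizes solutions that disagree with measured biological behavior, with \(\mu\) controlling how strongly that prior narrows the space of possible solutions.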
For many problems in computer vision, human learners are considerably better than machines. Humans possess highly accurate internal recognition and learning mechanisms that are not yet understood, and they frequently have access to more extensive training data through a lifetime of unbiased experience with the visual world. In this talk, an advanced online psychometric testing platform will be described that makes new kinds of annotation data available for learning. Subsequently, a new technique for harnessing this information - "perceptual annotations" - for support vector machines will be introduced. A key intuition for this approach is that while it may remain infeasible to dramatically increase the amount of data and high-quality labels available for training a given system, measuring the exemplar-by-exemplar difficulty and pattern of errors of human annotators can provide important information for regularizing the solution of the system at hand.
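As a loose illustration of how such per-exemplar human measurements might enter training, the following Python sketch (using scikit-learn, with synthetic placeholder data rather than the actual psychophysics measurements from the talk) weights each training example's misclassification penalty by how easily humans handle it, so human-derived difficulty acts as a regularizer on the SVM solution:

    # A minimal sketch of "perceptual annotation"-style training: examples that
    # humans classify easily receive larger misclassification penalties, so
    # human behavior regularizes the SVM solution. All data below are synthetic
    # placeholders, not measurements from the experiments described in the talk.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy two-class data standing in for face / non-face feature vectors.
    X = rng.normal(size=(200, 16))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

    # Hypothetical per-exemplar human accuracy (fraction of annotators correct),
    # e.g. as aggregated from an online psychometric testing platform.
    human_accuracy = rng.uniform(0.5, 1.0, size=200)

    # Map human accuracy to a per-sample penalty weight: items that are easy
    # for humans cost more when the machine misclassifies them.
    sample_weight = 0.5 + human_accuracy

    clf = SVC(kernel="rbf", C=1.0)
    # Per-sample weights scale the slack penalty for each training example.
    clf.fit(X, y, sample_weight=sample_weight)
    print("training accuracy:", clf.score(X, y))

The mapping from human accuracy to sample weight here is one arbitrary monotone choice; the point is only that examples humans find easy incur a larger penalty when the machine gets them wrong, while genuinely ambiguous examples are allowed more slack.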
Biography:
Walter J. Scheirer, Ph.D. is an Assistant Professor in the Department of Computer Science and Engineering at the University of Notre Dame. Previously, he was a postdoctoral fellow at Harvard University, with affiliations in the School of Engineering and Applied Sciences, the Department of Molecular and Cellular Biology, and the Center for Brain Science, and he served as the director of research & development at Securics, Inc., an early-stage company producing innovative computer vision-based solutions. He received his Ph.D. from the University of Colorado and his M.S. and B.A. degrees from Lehigh University. Dr. Scheirer has extensive experience in the areas of computer vision and human biometrics, with an emphasis on advanced learning techniques. His overarching research interest is the fundamental problem of recognition, including the representations and algorithms supporting solutions to it.
Host:
Dr. Arun Ross