2011-2012 Colloquium Series: Tong Zhang

Speaker: Tong Zhang

 

Host: Rong Jin

 

Time: 2 - 3pm, Nov. 18

 

Location: A405 Wells Hall

 

Title: Some statistical models for nonlinear feature learning.

 

Abstract:

 

Feature learning is becoming an important topic in machine learning, but most studies have focused on developing heuristic methods without proper justification. In this talk I will discuss two application problems we investigated recently that led to simple and effective feature learning algorithms based on sound statistical models.

 

The first model is unsupervised learning of super-vector coding for image classification, and the second model is supervised learning of regularized decision forests for generic nonlinear prediction problems.

Both methods can be regarded as learning nonlinear basis functions for statistical additive models. The first method is mathematically motivated by the computation of the Hellinger distance between two empirical probability distributions in high dimensions, and our solution achieves state-of-the-art performance in image classification. The second method is motivated by boosted decision trees, where we apply the modern concept of structured sparsity regularization to design superior algorithms.
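For readers unfamiliar with the Hellinger distance mentioned above, the sketch below shows the standard computation between two discrete empirical distributions; the function name, the NumPy usage, and the toy histograms are illustrative assumptions and are not taken from the talk itself.

import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions p and q.

    H(p, q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2
    Both inputs are normalized to sum to 1 before comparison.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Example: two hypothetical empirical histograms (e.g. quantized local descriptors).
hist_a = [4, 1, 0, 5]
hist_b = [2, 2, 1, 5]
print(hellinger_distance(hist_a, hist_b))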

 

Bio:

 

Tong Zhang received a B.A. in mathematics and computer science from Cornell University in 1994 and a Ph.D. in computer science from Stanford University in 1998. After graduation, he worked at the IBM T.J. Watson Research Center in Yorktown Heights, New York, and at Yahoo Research in New York City. He is currently a professor of statistics at Rutgers University. His research interests include machine learning, algorithms for statistical computation, their mathematical analysis, and applications.