150 Western Avenue

EE Seminar Series

In this talk, I will review recent work on the use of low-rank tensor models in multivariate probability, density estimation, supervised learning, and combinatorial optimization. We have recently shown that it is possible to learn high-order but low-rank multivariate distributions from low-order marginals, and that every multivariate categorical distribution can be generated by a (so-called) "naive" Bayes model. As it turns out, many real-life datasets can be fitted by distributions of very low rank. We have also proposed viewing sampling and supervised learning / system identification problems through the lens of low-rank tensor completion, which affords parsimonious modeling and sample-efficient learning with identification guarantees. Our most recent work explores the interplay between tensors and combinatorial optimization: it shows that every NP-complete problem can be cast as an instance of computing the minimum element of a tensor from its (two) rank-one factors. This exemplifies the modeling power of very low-rank tensors, and it also opens the door to a continuous multilinear relaxation whose empirical performance on the classic partition problem and other combinatorial optimization problems appears promising.
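To make the "naive Bayes as a low-rank tensor" viewpoint concrete, here is a minimal sketch (not from the talk; all sizes and variable names are illustrative assumptions): a joint distribution over N categorical variables generated by a latent class model is exactly a nonnegative CP decomposition, and low-order marginals can be computed directly from the factors without forming the full joint tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N = 3 categorical variables with I = 4 states each,
# and R = 2 latent classes (the CP rank).
N, I, R = 3, 4, 2

# Naive Bayes / nonnegative CP model:
#   P(x1, x2, x3) = sum_h p(h) * prod_n p(x_n | h)
p_h = rng.dirichlet(np.ones(R))  # prior over the latent class h
# Each conditional is an I x R matrix whose columns sum to 1.
cond = [rng.dirichlet(np.ones(I), size=R).T for _ in range(N)]

# Materialize the full joint tensor (feasible only for tiny examples).
joint = np.einsum("h,ah,bh,ch->abc", p_h, cond[0], cond[1], cond[2])
assert np.isclose(joint.sum(), 1.0)  # it is a valid distribution

# A pairwise marginal computed two ways: by summing out the joint,
# and directly from the low-rank factors.
marg_12 = joint.sum(axis=2)
marg_12_factors = np.einsum("h,ah,bh->ab", p_h, cond[0], cond[1])
assert np.allclose(marg_12, marg_12_factors)
```

The second assertion is the key point: low-order marginals are themselves low-rank objects in the same factors, which is what makes learning the high-order model from low-order marginals plausible.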

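The min-element problem mentioned above can also be illustrated with a toy sketch (an assumed setup, not the reduction from the talk): a rank-2 tensor given only by two sets of rank-one factors can be evaluated entry-by-entry from those factors, but finding its minimum entry by brute force costs exponential time in the order N, which is consistent with the stated NP-hardness.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instance: an order-N tensor given implicitly by two rank-one terms,
#   T[i1, ..., iN] = prod_n A[n, i_n] + prod_n B[n, i_n]
N, I = 4, 3
A = rng.standard_normal((N, I))
B = rng.standard_normal((N, I))

def entry(idx):
    """Evaluate one tensor entry directly from the factors, without forming T."""
    rows = np.arange(N)
    return np.prod(A[rows, idx]) + np.prod(B[rows, idx])

# Brute-force search over all I**N entries -- exponential in N, which is why
# a polynomial algorithm for this problem would be surprising.
best_idx = min(itertools.product(range(I), repeat=N), key=entry)
best_val = entry(best_idx)

# Cross-check against the explicitly materialized tensor (tiny case only).
T = np.einsum("a,b,c,d->abcd", *A) + np.einsum("a,b,c,d->abcd", *B)
assert np.isclose(best_val, T.min())
```

The continuous multilinear relaxation mentioned in the abstract replaces this discrete index search with optimization over probability vectors on each mode, which is what makes the approach tractable in practice.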