Statistics and Data Science Seminar

Prof. Ryan Martin
UIC
Bayesian estimation of sparse high-dimensional normal means
Abstract: In high-dimensional problems, the parameter of interest is called sparse if most of its components are zero. An important example is high-dimensional regression, where it is believed that only a few of the many predictor variables explain variation in the response. Since we don't know beforehand which coordinates of the parameter vector are zero, the challenge is to simultaneously identify those which are zero and accurately estimate those which are non-zero. From a Bayesian point of view, an intuitive strategy to accommodate sparsity is to consider a discrete-continuous mixture prior which allows coordinates to be exactly zero with positive probability. The relevant asymptotic theory looks for conditions under which the posterior distribution concentrates around the true signal at the best rate. In the first part of the talk, I will discuss some results along these lines from the very recent literature. One drawback of the discrete-continuous mixture priors is that computation can be very difficult, so it is natural to ask whether similar posterior concentration results can be achieved with computationally simpler non-mixture priors. This question is almost completely open, and the second part of the talk will discuss some aspects of this problem and what I think can be done.
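For context, a minimal sketch of the sparse normal means setup and a generic discrete-continuous (spike-and-slab) prior is given below; the particular priors, slab densities, and rate conditions treated in the talk may differ.

X_i \mid \theta_i \sim \mathsf{N}(\theta_i, 1), \quad i = 1, \dots, n, \qquad \theta_i \overset{\text{iid}}{\sim} (1 - \pi)\,\delta_0 + \pi\, g,

where \delta_0 is a point mass at zero, g is a continuous slab density, and \pi \in (0,1) controls the expected proportion of non-zero coordinates. If \theta_0 has s_n non-zero components, posterior concentration at the best (minimax) rate means

\mathbb{E}_{\theta_0}\,\Pi\bigl(\|\theta - \theta_0\|^2 > M\,\epsilon_n^2 \mid X\bigr) \to 0, \qquad \epsilon_n^2 \asymp s_n \log(n/s_n),

for some constant M > 0.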
Wednesday February 6, 2013 at 4:00 PM in SEO 636