Variational Methods

  • Streaming variational Bayes. T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson and M. I. Jordan. arXiv:1307.6769, 2013.

  • Variational Bayesian inference with stochastic search. J. Paisley, D. Blei, and M. I. Jordan. In J. Langford and J. Pineau (Eds.), Proceedings of the 29th International Conference on Machine Learning (ICML), Edinburgh, UK, 2012.

  • Variational inference over combinatorial spaces. A. Bouchard-Côté and M. I. Jordan. In J. Shawe-Taylor, R. Zemel, J. Lafferty, and C. Williams (Eds.), Advances in Neural Information Processing Systems (NIPS) 23, 2010.

  • Optimization of structured mean field objectives. A. Bouchard-Côté and M. I. Jordan. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twenty-Fifth Conference, Montreal, Canada, 2009.

  • Graphical models, exponential families, and variational inference. M. J. Wainwright and M. I. Jordan. Foundations and Trends in Machine Learning, 1, 1-305, 2008. [Substantially revised and expanded version of a 2003 technical report.]

  • Shared segmentation of natural scenes using dependent Pitman-Yor processes. E. Sudderth and M. I. Jordan. In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • Estimating divergence functionals and the likelihood ratio by convex risk minimization. X. Nguyen, M. J. Wainwright and M. I. Jordan. IEEE Transactions on Information Theory, 56, 5847-5861, 2010.

  • Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. X. Nguyen, M. J. Wainwright and M. I. Jordan. In J. Platt, D. Koller, Y. Singer and A. McCallum (Eds.), Advances in Neural Information Processing Systems (NIPS) 20, 2008.

  • Nonparametric estimation of the likelihood ratio and divergence functionals. X. Nguyen, M. J. Wainwright and M. I. Jordan. International Symposium on Information Theory (ISIT), Nice, France, 2007.

  • Log-determinant relaxation for approximate inference in discrete Markov random fields. M. J. Wainwright and M. I. Jordan. IEEE Transactions on Signal Processing, 54, 2099-2109, 2006.

  • A variational principle for graphical models. M. J. Wainwright and M. I. Jordan. New Directions in Statistical Signal Processing: From Systems to Brain. Cambridge, MA: MIT Press, 2005.

  • Variational inference for Dirichlet process mixtures. D. M. Blei and M. I. Jordan. Technical Report 674, Department of Statistics, University of California, Berkeley, 2004.

  • Semidefinite relaxations for approximate inference on graphs with cycles. M. J. Wainwright and M. I. Jordan. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • Learning spectral clustering. F. R. Bach and M. I. Jordan. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • Graphical models. M. I. Jordan. Statistical Science (Special Issue on Bayesian Statistics), 19, 140-155, 2004.

  • On the concentration of expectation and approximate inference in layered Bayesian networks. X. Nguyen and M. I. Jordan. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • Graph partition strategies for generalized mean field inference. E. P. Xing, M. I. Jordan, and S. Russell. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twentieth Conference, 2004.

  • A generalized mean field algorithm for variational inference in exponential families. E. P. Xing, M. I. Jordan, and S. Russell. In C. Meek and U. Kjaerulff (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Nineteenth Conference, 2003.

  • Variational inference in graphical models: The view from the marginal polytope. M. J. Wainwright and M. I. Jordan. Forty-first Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, 2003.

  • Graphical models: Probabilistic inference. M. I. Jordan and Y. Weiss. In M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, 2nd edition. Cambridge, MA: MIT Press, 2002.

  • Loopy belief propagation and Gibbs measures. S. Tatikonda and M. I. Jordan. In D. Koller and A. Darwiche (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Eighteenth Conference, 2002.

  • Variational MCMC. N. de Freitas, P. Højen-Sørensen, M. I. Jordan, and S. Russell. In J. Breese and D. Koller (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Seventeenth Conference, 2001.

  • Attractor dynamics for feedforward neural networks. L. K. Saul and M. I. Jordan. Neural Computation, 12, 1313-1335, 2000.

  • Bayesian logistic regression: a variational approach. T. S. Jaakkola and M. I. Jordan. Statistics and Computing, 10, 25-37, 2000.

  • Approximate inference algorithms for two-layer Bayesian networks. A. Y. Ng and M. I. Jordan. Advances in Neural Information Processing Systems (NIPS) 12, Cambridge, MA: MIT Press, 2000.

  • Loopy belief-propagation for approximate inference: An empirical study. K. Murphy, Y. Weiss, and M. I. Jordan. In K. B. Laskey and H. Prade (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Fifteenth Conference, San Mateo, CA: Morgan Kaufmann, 1999.

  • Variational probabilistic inference and the QMR-DT network. T. S. Jaakkola and M. I. Jordan. Journal of Artificial Intelligence Research, 10, 291-322, 1999.

  • An introduction to variational methods for graphical models. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. In M. I. Jordan (Ed.), Learning in Graphical Models, Cambridge, MA: MIT Press, 1999.

  • Improving the mean field approximation via the use of mixture distributions. T. S. Jaakkola and M. I. Jordan. In M. I. Jordan (Ed.), Learning in Graphical Models, Cambridge, MA: MIT Press, 1999.

  • Learning in graphical models. M. I. Jordan (Ed.), Cambridge, MA: MIT Press, 1999.

  • Approximating posterior distributions in belief networks using mixtures. C. M. Bishop, N. Lawrence, T. S. Jaakkola, and M. I. Jordan. In M. I. Jordan, M. J. Kearns, and S. A. Solla (Eds.), Advances in Neural Information Processing Systems (NIPS) 10, Cambridge, MA: MIT Press, 1998.

  • Mixture representations for inference and learning in Boltzmann machines. N. D. Lawrence, C. M. Bishop and M. I. Jordan. In G. F. Cooper and S. Moral (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Fourteenth Conference, San Mateo, CA: Morgan Kaufmann, 1998.

  • A variational principle for model-based interpolation. L. K. Saul and M. I. Jordan. In M. C. Mozer, M. I. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge, MA: MIT Press, 1997.

  • Recursive algorithms for approximating probabilities in graphical models. T. S. Jaakkola and M. I. Jordan. In M. C. Mozer, M. I. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge, MA: MIT Press, 1997.

  • Mean field theory for sigmoid belief networks. L. K. Saul, T. Jaakkola, and M. I. Jordan. Journal of Artificial Intelligence Research, 4, 61-76, 1996.

  • Fast learning by bounding likelihoods in sigmoid belief networks. T. S. Jaakkola, L. K. Saul, and M. I. Jordan. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems (NIPS) 8, Cambridge, MA: MIT Press, 1996.

  • Computing upper and lower bounds on likelihoods in intractable networks. T. S. Jaakkola and M. I. Jordan. In E. Horvitz (Ed.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twelfth Conference, Portland, Oregon, 1996.

  • Exploiting tractable substructures in intractable networks. L. K. Saul and M. I. Jordan. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems (NIPS) 8, Cambridge, MA: MIT Press, 1996.