Graphical Models

  • Joint modeling of multiple time series via the beta process with application to motion capture segmentation. E. Fox, M. Hughes, E. Sudderth, and M. I. Jordan. Annals of Applied Statistics, to appear.

  • Learning dependency-based compositional semantics. P. Liang, M. I. Jordan, and D. Klein. Computational Linguistics, 39, 389-446, 2013.

  • Phylogenetic inference via sequential Monte Carlo. A. Bouchard-Côté, S. Sankararaman, and M. I. Jordan. Systematic Biology, 61, 579-593, 2012.

  • A sticky HDP-HMM with application to speaker diarization. E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky. Annals of Applied Statistics, 5, 1020-1056, 2011.

  • Bayesian nonparametric inference of switching linear dynamical models. E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky. IEEE Transactions on Signal Processing, 59, 1569-1585, 2011.

  • Bayesian nonparametric learning: Expressive priors for intelligent systems. M. I. Jordan. In R. Dechter, H. Geffner, and J. Halpern (Eds.), Heuristics, Probability and Causality: A Tribute to Judea Pearl, College Publications, 2010.

  • Sharing features among dynamical systems with beta processes. E. Fox, E. Sudderth, M. I. Jordan, and A. S. Willsky. In Y. Bengio, D. Schuurmans, J. Lafferty and C. Williams (Eds.), Advances in Neural Information Processing Systems (NIPS) 22, 2010.

  • The nested Chinese restaurant process and Bayesian inference of topic hierarchies. D. M. Blei, T. Griffiths, and M. I. Jordan. Journal of the ACM, 57, 1-30, 2010. [Software].

  • Bayesian nonparametric methods for learning Markov switching processes. E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky. IEEE Signal Processing Magazine, 27, 43-54, 2010.

  • Optimization of structured mean field objectives. A. Bouchard-Côté and M. I. Jordan. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twenty-Fifth Conference, Montreal, Canada, 2009.

  • Nonparametric Bayesian identification of jump systems with sparse dependencies. E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky. 15th IFAC Symposium on System Identification (SYSID), St. Malo, France, 2009.

  • Graphical models, exponential families, and variational inference. M. J. Wainwright and M. I. Jordan. Foundations and Trends in Machine Learning, 1, 1-305, 2008. [Substantially revised and expanded version of a 2003 technical report.]

  • Shared segmentation of natural scenes using dependent Pitman-Yor processes. E. Sudderth and M. I. Jordan. In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • Efficient inference in phylogenetic InDel trees. A. Bouchard-Côté, M. I. Jordan, and D. Klein. In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • Nonparametric Bayesian learning of switching linear dynamical systems. E. B. Fox, E. Sudderth, M. I. Jordan, and A. S. Willsky. In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • An HDP-HMM for systems with state persistence. E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky. Proceedings of the 25th International Conference on Machine Learning (ICML), 2008. [Long version].

  • The phylogenetic Indian buffet process: A non-exchangeable nonparametric prior for latent features. K. Miller, T. Griffiths and M. I. Jordan. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twenty-Fourth Conference, 2008.

  • Hierarchical beta processes and the Indian buffet process. R. Thibaux and M. I. Jordan. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.

  • Hierarchical Dirichlet processes. Y. W. Teh, M. I. Jordan, M. J. Beal and D. M. Blei. Journal of the American Statistical Association, 101, 1566-1581, 2006. [Software]

  • Log-determinant relaxation for approximate inference in discrete Markov random fields. M. J. Wainwright and M. I. Jordan. IEEE Transactions on Signal Processing, 54, 2099-2109, 2006.

  • The DLR hierarchy of approximate inference. M. Rosen-Zvi, M. I. Jordan, and A. Yuille. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twenty-First Conference, 2005.

  • A variational principle for graphical models. M. J. Wainwright and M. I. Jordan. New Directions in Statistical Signal Processing: From Systems to Brains. Cambridge, MA: MIT Press, 2005.

  • Graphical models. M. I. Jordan. Statistical Science (Special Issue on Bayesian Statistics), 19, 140-155, 2004.

  • Multiple-sequence functional annotation and the generalized hidden Markov phylogeny. J. D. McAuliffe, L. Pachter, and M. I. Jordan. Bioinformatics, 20, 1850-1860, 2004.

  • Learning graphical models for stationary time series. F. R. Bach and M. I. Jordan. IEEE Transactions on Signal Processing, 52, 2189-2199, 2004.

  • Kalman filtering with intermittent observations. B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. Sastry. IEEE Transactions on Automatic Control, 49, 1453-1464, 2004.

  • Hierarchical topic models and the nested Chinese restaurant process. D. M. Blei, T. Griffiths, M. I. Jordan, and J. Tenenbaum. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • LOGOS: A modular Bayesian model for de novo motif detection. E. P. Xing, W. Wu, M. I. Jordan, and R. M. Karp. Journal of Bioinformatics and Computational Biology, 2, 127-154, 2004.

  • Latent Dirichlet allocation. D. M. Blei, A. Y. Ng, and M. I. Jordan. Journal of Machine Learning Research, 3, 993-1022, 2003.

  • Beyond independent components: Trees and clusters. F. R. Bach and M. I. Jordan. Journal of Machine Learning Research, 4, 1205-1233, 2003. [Matlab code]

  • Modeling annotated data. D. M. Blei and M. I. Jordan. 26th International Conference on Research and Development in Information Retrieval (SIGIR), New York: ACM Press, 2003.

  • Variational inference in graphical models: The view from the marginal polytope. M. J. Wainwright and M. I. Jordan. Forty-first Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, 2003.

  • Hierarchical Bayesian models for applications in information retrieval. D. M. Blei, M. I. Jordan and A. Y. Ng. In J. M. Bernardo, M. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith, and M. West (Eds.), Bayesian Statistics 7, 2003.

  • Kalman filtering with intermittent observations. B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. Sastry. 42nd IEEE Conference on Decision and Control (CDC), 2003.

  • Learning graphical models with Mercer kernels. F. R. Bach and M. I. Jordan. In S. Becker, S. Thrun, and K. Obermayer (Eds.), Advances in Neural Information Processing Systems (NIPS) 15, 2003.

  • A hierarchical Bayesian Markovian model for motifs in biopolymer sequences. E. P. Xing, M. I. Jordan, R. M. Karp and S. Russell. In S. Becker, S. Thrun, and K. Obermayer (Eds.), Advances in Neural Information Processing Systems (NIPS) 15, 2003.

  • Graphical models: Probabilistic inference. M. I. Jordan and Y. Weiss. In M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, 2nd edition. Cambridge, MA: MIT Press, 2002.

  • Tree-dependent component analysis. F. R. Bach and M. I. Jordan. In D. Koller and A. Darwiche (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Eighteenth Conference, 2002. [Matlab code]

  • Random sampling of a continuous-time stochastic dynamical system. M. Micheli and M. I. Jordan. Proceedings of the Fifteenth International Symposium on Mathematical Theory of Networks and Systems, 2002.

  • Thin junction trees. F. R. Bach and M. I. Jordan. In T. Dietterich, S. Becker and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2002.

  • Efficient stepwise selection in decomposable models. A. Deshpande, M. N. Garofalakis, and M. I. Jordan. In J. Breese and D. Koller (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Seventeenth Conference, 2001.

  • Learning with mixtures of trees. M. Meila and M. I. Jordan. Journal of Machine Learning Research, 1, 1-48, 2000.

  • Attractor dynamics for feedforward neural networks. L. K. Saul and M. I. Jordan. Neural Computation, 12, 1313-1335, 2000.

  • Loopy belief-propagation for approximate inference: An empirical study. K. Murphy, Y. Weiss, and M. I. Jordan. In K. B. Laskey and H. Prade (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Fifteenth Conference, San Mateo, CA: Morgan Kaufmann, 1999.

  • Variational probabilistic inference and the QMR-DT network. T. S. Jaakkola and M. I. Jordan. Journal of Artificial Intelligence Research, 10, 291-322, 1999.

  • An introduction to variational methods for graphical models. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. In M. I. Jordan (Ed.), Learning in Graphical Models, Cambridge: MIT Press, 1999.

  • Learning in graphical models. M. I. Jordan (Ed.), Cambridge MA: MIT Press, 1999.

  • Factorial hidden Markov models. Z. Ghahramani and M. I. Jordan. Machine Learning, 29, 245-273, 1997.

  • Optimal triangulation with continuous cost functions. M. Meila and M. I. Jordan. In M. C. Mozer, M. I. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge MA: MIT Press, 1997.

  • Hidden Markov decision trees. M. I. Jordan, Z. Ghahramani, and L. K. Saul. In M. C. Mozer, M. I. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge MA: MIT Press, 1997.

  • Neural networks. M. I. Jordan and C. Bishop. In A. B. Tucker (Ed.), CRC Handbook of Computer Science, Boca Raton, FL: CRC Press, 1997.

  • Probabilistic independence networks for hidden Markov probability models. P. Smyth, D. Heckerman, and M. I. Jordan. Neural Computation, 9, 227-270, 1997.

  • Markov mixtures of experts. M. Meila and M. I. Jordan. In R. Murray-Smith and T. A. Johansen (Eds.), Multiple Model Approaches to Modelling and Control, London: Taylor and Francis, 1997.

  • Mean field theory for sigmoid belief networks. L. K. Saul, T. Jaakkola, and M. I. Jordan. Journal of Artificial Intelligence Research, 4, 61-76, 1996.

  • Boltzmann chains and hidden Markov models. L. K. Saul and M. I. Jordan. In G. Tesauro, D. S. Touretzky and T. K. Leen (Eds.), Advances in Neural Information Processing Systems (NIPS) 7, MIT Press, 1995.

  • Learning in Boltzmann trees. L. K. Saul and M. I. Jordan. Neural Computation, 6, 1173-1183, 1994.