Classification and Regression

  • Lower bounds on the performance of polynomial-time algorithms for sparse linear regression. Y. Zhang, M. Wainwright, and M. I. Jordan. Proceedings of the Conference on Computational Learning Theory (COLT), Barcelona, Spain, 2014.

  • EP-GIG priors and applications in Bayesian sparse learning. Z. Zhang, S. Wang, D. Liu, and M. I. Jordan. Journal of Machine Learning Research, 13, 2031-2061, 2012.

  • Coherence functions with applications in large-margin classification methods. Z. Zhang, D. Liu, G. Dai, and M. I. Jordan. Journal of Machine Learning Research, 13, 2705-2734, 2012.

  • Union support recovery in high-dimensional multivariate regression. G. Obozinski, M. J. Wainwright, and M. I. Jordan. Annals of Statistics, 39, 1-47, 2011.

  • Heavy-tailed processes for selective shrinkage. F. Wauthier and M. I. Jordan. In J. Shawe-Taylor, R. Zemel, J. Lafferty, and C. Williams (Eds.) Advances in Neural Information Processing Systems (NIPS) 23, 2011.

  • Joint covariate selection and joint subspace selection for multiple classification problems. G. Obozinski, B. Taskar, and M. I. Jordan. Statistics and Computing, 20, 231-252, 2010.

  • Regularized discriminant analysis, ridge regression and beyond. Z. Zhang, G. Dai, C. Xu, and M. I. Jordan. Journal of Machine Learning Research, 11, 2141-2170, 2010.

  • An asymptotic analysis of smooth regularizers. P. Liang, F. Bach, G. Bouchard, and M. I. Jordan. In Y. Bengio, D. Schuurmans, J. Lafferty, and C. Williams (Eds.), Advances in Neural Information Processing Systems (NIPS) 22, 2010.

  • On surrogate loss functions and f-divergences. X. Nguyen, M. J. Wainwright, and M. I. Jordan. Annals of Statistics, 37, 876-904, 2009.

  • Kernel dimension reduction in regression. K. Fukumizu, F. R. Bach, and M. I. Jordan. Annals of Statistics, 37, 1871-1905, 2009.

  • A flexible and efficient algorithm for regularized Fisher discriminant analysis. Z. Zhang, G. Dai, and M. I. Jordan. Machine Learning and Knowledge Discovery in Databases: European Conference (ECML PKDD), Bled, Slovenia, 2009.

  • Learning from measurements in exponential families. P. Liang, M. I. Jordan, and D. Klein. Proceedings of the 26th International Conference on Machine Learning (ICML), Montreal, Canada, 2009.

  • Coherence functions for multicategory margin-based classification methods. Z. Zhang, M. I. Jordan, W-J. Li, and D-Y. Yeung. Proceedings of the Twelfth Conference on Artificial Intelligence and Statistics (AISTATS), Clearwater Beach, FL, 2009.

  • Posterior consistency of the Silverman g-prior in Bayesian model choice. Z. Zhang and M. I. Jordan. In D. Koller, Y. Bengio, D. Schuurmans, and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • DiscLDA: Discriminative learning for dimensionality reduction and classification. S. Lacoste-Julien, F. Sha, and M. I. Jordan. In D. Koller, Y. Bengio, D. Schuurmans, and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009.

  • High-dimensional union support recovery in multivariate regression. G. Obozinski, M. J. Wainwright, and M. I. Jordan. In D. Koller, Y. Bengio, D. Schuurmans, and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 21, 2009. [Appendix].

  • Multiway spectral clustering: A maximum margin perspective. Z. Zhang and M. I. Jordan. Statistical Science, 23, 383-403, 2008.

  • An analysis of generative, discriminative, and pseudolikelihood estimators. P. Liang and M. I. Jordan. Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.

  • On optimal quantization rules for some sequential decision problems. X. Nguyen, M. J. Wainwright, and M. I. Jordan. IEEE Transactions on Information Theory, 54, 3285-3295, 2008.

  • Structured prediction, dual extragradient and Bregman projections. B. Taskar, S. Lacoste-Julien, and M. I. Jordan. Journal of Machine Learning Research, 7, 1627-1653, 2006.

  • Bayesian multicategory support vector machines. Z. Zhang and M. I. Jordan. In Uncertainty in Artificial Intelligence (UAI), Proceedings of the Twenty-Second Conference, 2006.

  • Convexity, classification, and risk bounds. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Journal of the American Statistical Association, 101, 138-156, 2006.

  • On optimal quantization rules for sequential decision problems. X. Nguyen, M. J. Wainwright, and M. I. Jordan. International Symposium on Information Theory (ISIT), Seattle, WA, 2006. [Long version].

  • Structured prediction via the extragradient method. B. Taskar, S. Lacoste-Julien, and M. I. Jordan. In Y. Weiss, B. Schoelkopf, and J. Platt (Eds.), Advances in Neural Information Processing Systems (NIPS) 18, 2006.

  • Comment on 'Support vector machines with applications'. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Statistical Science, 21, 341-346, 2006.

  • Divergences, surrogate loss functions and experimental design. X. Nguyen, M. J. Wainwright, and M. I. Jordan. In Y. Weiss, B. Schoelkopf, and J. Platt (Eds.), Advances in Neural Information Processing Systems (NIPS) 18, 2006. [Long version].

  • Gaussian processes and the null-category noise model. N. D. Lawrence and M. I. Jordan. In O. Chapelle, B. Schoelkopf, and A. Zien (Eds.), Semi-Supervised Learning, Cambridge, MA: MIT Press, 2005.

  • A kernel-based learning approach to ad hoc sensor network localization. X. Nguyen, M. I. Jordan, and B. Sinopoli. ACM Transactions on Sensor Networks, 1, 134-152, 2005.

  • Computing regularization paths for learning multiple kernels. F. R. Bach, R. Thibaux, and M. I. Jordan. In L. Saul, Y. Weiss, and L. Bottou (Eds.), Advances in Neural Information Processing Systems (NIPS) 17, 2005. [Matlab code]

  • Extensions of the informative vector machine. N. D. Lawrence, J. C. Platt, and M. I. Jordan. In J. Winkler, N. D. Lawrence, and M. Niranjan (Eds.), Proceedings of the Sheffield Machine Learning Workshop, Lecture Notes in Computer Science, New York: Springer, 2005.

  • On information divergence measures, surrogate loss functions and decentralized hypothesis testing. X. Nguyen, M. J. Wainwright, and M. I. Jordan. Forty-third Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, 2005.

  • Multiple kernel learning, conic duality, and the SMO algorithm. F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Proceedings of the 21st International Conference on Machine Learning (ICML), 2004. [Long version]. [Software].

  • Semi-supervised learning via Gaussian processes. N. D. Lawrence and M. I. Jordan. Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • Decentralized detection and classification using kernel methods. X. Nguyen, M. J. Wainwright, and M. I. Jordan. Proceedings of the 21st International Conference on Machine Learning (ICML), 2004. [Long version].

  • Sparse Gaussian process classification with multiple classes. M. Seeger and M. I. Jordan. Technical Report 661, Department of Statistics, University of California, Berkeley, 2004.

  • Learning the kernel matrix with semidefinite programming. G. R. G. Lanckriet, N. Cristianini, L. El Ghaoui, P. L. Bartlett, and M. I. Jordan. Journal of Machine Learning Research, 5, 27-72, 2004.

  • Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. K. Fukumizu, F. R. Bach, and M. I. Jordan. Journal of Machine Learning Research, 5, 73-99, 2004.

  • Discussion of boosting papers. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Annals of Statistics, 32, 2004.

  • Large margin classifiers: convex loss, low noise, and convergence rates. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS) 16, 2004.

  • A robust minimax approach to classification. G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. Journal of Machine Learning Research, 3, 555-582, 2002. [Matlab code]

  • Thin junction trees. F. R. Bach and M. I. Jordan. In T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2002.

  • Minimax probability machine. G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. In T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2002.

  • On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. A. Y. Ng and M. I. Jordan. In T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2002.

  • Convergence rates of the Voting Gibbs classifier, with application to Bayesian feature selection. A. Y. Ng and M. I. Jordan. Machine Learning: Proceedings of the Eighteenth International Conference, San Mateo, CA: Morgan Kaufmann, 2001.

  • Learning with mixtures of trees. M. Meila and M. I. Jordan. Journal of Machine Learning Research, 1, 1-48, 2000.

  • Bayesian logistic regression: a variational approach. T. S. Jaakkola and M. I. Jordan. Statistics and Computing, 10, 25-37, 2000.

  • Local linear perceptrons for classification. E. Alpaydin and M. I. Jordan. IEEE Transactions on Neural Networks, 7, 788-792, 1996.

  • A statistical approach to decision tree modeling. M. I. Jordan. In M. Warmuth (Ed.), Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, New York: ACM Press, 1994.