Neural Networks

  • Boundary Attack++: Query-efficient decision-based adversarial attack. J. Chen and M. I. Jordan. arxiv.org/abs/1904.02144, 2019.

  • L-Shapley and C-Shapley: Efficient model interpretation for structured data. J. Chen, L. Song, M. Wainwright, and M. I. Jordan. arxiv.org/abs/1808.02610, 2018.

  • A Swiss army infinitesimal jackknife. R. Giordano, W. Stephenson, R. Liu, M. I. Jordan, and T. Broderick. arxiv.org/abs/1806.00550, 2018.

  • Greedy Attack and Gumbel Attack: Generating adversarial examples for discrete data. P. Yang, J. Chen, C.-J. Hsieh, J.-L. Wang, and M. I. Jordan. arxiv.org/abs/1805.12316, 2018.

  • Neural rendering model: Joint generation and prediction for semi-supervised learning. N. Ho, T. Nguyen, A. Patel, A. Anandkumar, M. I. Jordan, and R. Baraniuk. arxiv.org/abs/1811.02657, 2018.

  • Partial transfer learning with selective adversarial networks. Z. Cao, M. Long, J. Wang, and M. I. Jordan. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, 2018.

  • On nonlinear dimensionality reduction, linear smoothing and autoencoding. D. Ting and M. I. Jordan. arxiv.org/abs/1803.02432, 2018.

  • Domain adaptation with randomized multilinear adversarial networks. M. Long, Z. Cao, J. Wang, and M. I. Jordan. arxiv.org/abs/1705.10667, 2017.

  • How to escape saddle points efficiently. C. Jin, R. Ge, P. Netrapalli, S. Kakade, and M. I. Jordan. arxiv.org/abs/1703.00877, 2017.

  • Deep transfer learning with joint adaptation networks. M. Long, H. Zhu, J. Wang, and M. I. Jordan. Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 2017.

  • Unsupervised domain adaptation with residual transfer networks. M. Long, H. Zhu, J. Wang, and M. I. Jordan. In U. von Luxburg, I. Guyon, D. Lee, M. Sugiyama (Eds.), Advances in Neural Information Processing Systems (NIPS) 29, 2016.

  • On the learnability of fully-connected neural networks. Y. Zhang, J. Lee, M. Wainwright, and M. I. Jordan. In A. Singh and J. Zhu (Eds.), Proceedings of the Twentieth Conference on Artificial Intelligence and Statistics (AISTATS), 2017.

  • SparkNet: Training deep networks in Spark. P. Moritz, R. Nishihara, I. Stoica, and M. I. Jordan. International Conference on Learning Representations (ICLR), Puerto Rico, 2016.

  • l_1-regularized neural networks are improperly learnable in polynomial time. Y. Zhang, J. Lee, and M. I. Jordan. Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY, 2016.

  • Learning halfspaces and neural networks with random initialization. Y. Zhang, J. Lee, M. Wainwright, and M. I. Jordan. arxiv.org/abs/1511.07948, 2015.

  • Learning transferable features with deep adaptation networks. M. Long, Y. Cao, J. Wang, and M. I. Jordan. In F. Bach and D. Blei (Eds.), Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 2015.

  • Attractor dynamics for feedforward neural networks. L. K. Saul and M. I. Jordan. Neural Computation, 12, 1313-1335, 2000.

  • Recurrent networks. M. I. Jordan. In R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA: MIT Press, 1999.

  • Neural networks. M. I. Jordan. In R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA: MIT Press, 1999.

  • Mixture representations for inference and learning in Boltzmann machines. N. D. Lawrence, C. M. Bishop, and M. I. Jordan. In G. F. Cooper and S. Moral (Eds.), Uncertainty in Artificial Intelligence (UAI), Proceedings of the Fourteenth Conference, San Mateo, CA: Morgan Kaufmann, 1998.

  • Neural networks. M. I. Jordan and C. Bishop. In A. B. Tucker (Ed.), CRC Handbook of Computer Science, Boca Raton, FL: CRC Press, 1997.

  • Serial order: A parallel, distributed processing approach. M. I. Jordan. In J. W. Donahoe and V. P. Dorsel (Eds.), Neural-network Models of Cognition: Biobehavioral Foundations, Amsterdam: Elsevier Science Press, 1997.

  • Mean field theory for sigmoid belief networks. L. K. Saul, T. Jaakkola, and M. I. Jordan. Journal of Artificial Intelligence Research, 4, 61-76, 1996.

  • Why the logistic function? A tutorial discussion on probabilities and neural networks. M. I. Jordan. MIT Computational Cognitive Science Report 9503, August 1995.

  • Boltzmann chains and hidden Markov models. L. K. Saul and M. I. Jordan. In G. Tesauro, D. S. Touretzky, and T. K. Leen (Eds.), Advances in Neural Information Processing Systems (NIPS) 7, MIT Press, 1995.

  • Learning in Boltzmann trees. L. K. Saul and M. I. Jordan. Neural Computation, 6, 1173-1183, 1994.

  • Learning piecewise control strategies in a modular neural network architecture. R. A. Jacobs and M. I. Jordan. IEEE Transactions on Systems, Man, and Cybernetics, 23, 337-345, 1993.

  • A dynamical model of priming and repetition blindness. D. Bavelier and M. I. Jordan. In S. J. Hanson, J. D. Cowan, and C. L. Giles (Eds.), Advances in Neural Information Processing Systems (NIPS) 5, San Mateo, CA: Morgan Kaufmann, 1993.

  • The cascade neural network model and a speed-accuracy tradeoff of arm movement. M. Hirayama, M. Kawato, and M. I. Jordan. Journal of Motor Behavior, 25, 162-175, 1993.

  • Constrained supervised learning. M. I. Jordan. Journal of Mathematical Psychology, 36, 396-425, 1992.

  • Computational consequences of a bias towards short connections. R. A. Jacobs and M. I. Jordan. Journal of Cognitive Neuroscience, 4, 331-344, 1992.

  • A more biologically plausible learning rule for neural networks. P. Mazzoni, R. Andersen, and M. I. Jordan. Proceedings of the National Academy of Sciences, 88, 4433-4437, 1991.

  • Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. R. A. Jacobs, M. I. Jordan, and A. G. Barto. Cognitive Science, 15, 219-250, 1991.

  • A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a. P. Mazzoni, R. Andersen, and M. I. Jordan. Cerebral Cortex, 1, 293-307, 1991.

  • A non-empiricist perspective on learning in layered networks. M. I. Jordan. Behavioral and Brain Sciences, 13, 497-498, 1990.

  • AR-P learning applied to a network model of cortical area 7a. P. Mazzoni, R. Andersen, and M. I. Jordan. Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, pp. 373-379, 1990.

  • Gradient following without backpropagation in layered networks. A. G. Barto and M. I. Jordan. Proceedings of the IEEE First Annual International Conference on Neural Networks, New York: IEEE Publishing Services, 1987.

  • Attractor dynamics and parallelism in a connectionist sequential machine. M. I. Jordan. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, Englewood Cliffs, NJ: Erlbaum, pp. 531-546, 1986. [Reprinted in IEEE Tutorials Series, New York: IEEE Publishing Services, 1990.]