Yuzhe Ma, Xiaojin Zhu, and Justin Hsu. Data Poisoning against Differentially-Private Learners: Attacks and Defenses, submitted, March, 2019.
O’Neill, M. and Wright, S. J., A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees, submitted, April, 2019.
Glendening, E., Wright, S. J., and Weinhold, F., Efficient optimization of natural resonance theory weightings and bond orders by Gram-based convex programming, to appear in Journal of Computational Chemistry, 2019.
Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang, Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers, https://arxiv.org/abs/1811.04918.
Hongyang Zhang, Vatsal Sharan, Moses Charikar, Yingyu Liang, Recovery Guarantees for Quadratic Tensors with Limited Observations, International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Yuanzhi Li, Yingyu Liang, Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data, Neural Information Processing Systems (NeurIPS), 2018.
H. Wang, Z. Charles, D. Papailiopoulos, ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding.
S. Rajput, Z. Feng, Z. Charles, P.-L. Loh, D. Papailiopoulos, Does Data Augmentation Lead to Positive Margin? ICML 2019.
Z. Charles, H. Rosenberg, D. Papailiopoulos, A Geometric Perspective on the Transferability of Adversarial Directions, AISTATS 2019.
L. Chen, H. Wang, J. Zhao, P. Koutris, D. Papailiopoulos, The Effect of Network Width on the Performance of Large-batch Training, NeurIPS 2018.
Z. Charles, D. Papailiopoulos, Gradient Coding Using the Stochastic Block Model, ISIT 2018.
Anru Zhang, Cross: efficient tensor completion. Annals of Statistics, 47, 936-964, 2019.
Anru Zhang and Dong Xia. Tensor SVD: statistical and computational limits. IEEE Transactions on Information Theory, 64, 7311-7338, 2018.
Anru Zhang and Rungang Han, Optimal sparse singular value decomposition for high-dimensional high-order data. Journal of the American Statistical Association, to appear, 2019.
Anru Zhang and Kehui Chen. Nonparametric covariance estimation for mixed longitudinal studies, with applications in midlife women’s health, under revision, October, 2018.
Anru Zhang and Yuchen Zhou. On the non-asymptotic and sharp lower tail bounds of random variables, under revision, November, 2018.
Pixu Shi, Yuchen Zhou, and Anru Zhang. High-dimensional log-error-in-variable regression with applications to microbial compositional data analysis, submitted, December, 2018.
Anru Zhang and Mengdi Wang. Spectral state compression of Markov processes, submitted, September, 2018.
Botao Hao, Anru Zhang, and Guang Cheng. Sparse and low-rank tensor estimation via cubic sketchings, under revision, January, 2019.
Anru Zhang, T. Tony Cai, and Yihong Wu. Heteroskedastic PCA: algorithm, optimality, and applications, submitted, October, 2018.
Anru Zhang, Yuetian Luo, Garvesh Raskutti, Ming Yuan, ISLET: Fast and optimal low-rank tensor regression via importance sketching, submitted, December, 2018.
Lili Zheng, Garvesh Raskutti, Testing for high-dimensional network parameters in autoregressive models, submitted, December, 2018.
Benjamin Mark, Garvesh Raskutti, Rebecca Willett, Estimating Network Structure from Incomplete Event Data, AISTATS, 2019.
K.-S. Jun, R. Willett, S. Wright, and R. Nowak. Bilinear Bandits with Low-Rank Structure, ICML, 2019.
D. Gilton, G. Ongie, and R. Willett. Neumann Networks for Inverse Problems in Imaging, submitted, January 2019.
G. Ongie, L. Balzano, D. Pimentel-Alarcón, R. Willett, and R. D. Nowak, Tensor Methods for Nonlinear Matrix Completion, 2018.
Y. Li, G. Raskutti, and R. Willett, Graph-based regularization for regression problems with highly-correlated designs, 2018.
Z. Kuang, Y. Bao, J. Thomson, M. Caldwell, P. Peissig, R. Stewart, R. Willett, and D. Page. A Machine-Learning Based Drug Repurposing Approach Using Baseline Regularization. Invited book chapter. In Silico Repurposing. Methods in Molecular Biology Series. Springer, 2018.
Z. Charles, A. Jalali, and R. Willett, Sparse Subspace Clustering with Missing and Corrupted Data, IEEE Data Science Workshop, 2018. https://arxiv.org/abs/1707.02461
A. Jalali and R. Willett, Missing data in sparse transition matrix estimation for sub-gaussian vector autoregressive processes, accepted to American Control Conference, 2018.
Anne Gelb, Xiao Hou, Qin Li, Numerical analysis for conservation laws using $l_1$ minimization, submitted, December, 2018.
Kit Newton, Qin Li, Diffusion equation assisted Markov Chain Monte Carlo methods for the inverse radiative transfer equation, submitted, December, 2018.
Jun, K.-S., Willett, R., Wright, S. J. and Nowak, R., “Bilinear bandits with low-rank structure,” ICML, 2019. https://arxiv.org/abs/1901.02470
Lee, C.-p. and Wright, S. J., “First-order algorithms converge faster than O(1/k) on convex problems,” ICML, 2019.
Lim, C.-H., Linderoth, J. T., Luedtke, J. R., and Wright, S. J., “Parallelizing subgradient methods for the Lagrangian dual in stochastic mixed-integer programming,” submitted, October, 2018.
O’Neill, M. and Wright, S. J., Behavior of accelerated gradient methods near critical points of nonconvex functions, https://arxiv.org/abs/1706.07993, to appear in Mathematical Programming, Series B.
Lee, C.-p. and Wright, S. J., “Inexact successive quadratic approximation for regularized optimization,” to appear in Computational Optimization and Applications, 2019.
Laurent Lessard, Xuezhou Zhang, and Xiaojin Zhu. An optimal control approach to sequential machine teaching. In The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Xiaojin Zhu. An optimal control view of adversarial machine learning. arXiv:1811.04422, 2018. [link]
Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Xiaojin Zhu. Adversarial attacks on stochastic bandits. In Advances in Neural Information Processing Systems (NIPS), 2018. [pdf]
Xiaojin Zhu, Adish Singla, Sandra Zilles, Anna N. Rafferty. An Overview of Machine Teaching. arXiv:1801.05927, 2018.
Yuzhe Ma, Robert Nowak, Philippe Rigollet, Xuezhou Zhang, and Xiaojin Zhu. Teacher improves learning by selecting a training subset. In The 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018. [pdf | AMPL code]
Ayon Sen, Scott Alfeld, Xuezhou Zhang, Ara Vartanian, Yuzhe Ma, and Xiaojin Zhu. Training set camouflage. In Conference on Decision and Game Theory for Security (GameSec), 2018 [pdf]
Yuzhe Ma, Kwang-Sung Jun, Lihong Li, and Xiaojin Zhu. Data poisoning attacks in contextual bandits. In Conference on Decision and Game Theory for Security (GameSec), 2018 [arXiv]
Evan Hernandez, Ara Vartanian, and Xiaojin Zhu. Program synthesis with visual specification. arXiv:1806.00938, 2018. [arXiv | visual specification corpus data set | speech programming game for Chrome | HAMLET talk | CS NEST talk | whitepaper]
Ayon Sen, Purav Patel, Martina A. Rau, Blake Mason, Robert Nowak, Timothy T. Rogers, and Xiaojin Zhu. Machine beats human at sequencing visuals for perceptual-fluency practice. In Educational Data Mining, 2018. [pdf]
Lee, C.-p. and Wright, S. J., “Inexact variable metric stochastic block-coordinate descent for regularized optimization,” http://www.optimization-online.org/DB_HTML/2018/08/6753.html, August, 2018.
Oswal, Urvashi, and Robert Nowak. “Scalable Sparse Subspace Clustering via Ordered Weighted l1 Regression.” In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 305-312. IEEE, 2018.
Katariya, Sumeet, Lalit Jain, Nandana Sengupta, James Evans, and Robert Nowak. “Adaptive Sampling for Coarse Ranking.” In International Conference on Artificial Intelligence and Statistics, pp. 1839-1848. 2018.
Jun, Kwang-Sung, and Robert Nowak. “Bayesian Active Learning on Graphs.” In Cooperative and Graph Signal Processing, pp. 283-297. Academic Press, 2018.
Sebastien Roch, On the Variance of Internode Distance Under the Multispecies Coalescent, Proceedings of RECOMB-CG 2018, 196-206.
Yuling Yan, Bret Hanlon, Sebastien Roch, Karl Rohe. Asymptotic seed bias in respondent-driven sampling, Preprint, 2018. https://arxiv.org/abs/1808.10593
Yilin Zhang, Karl Rohe, Sebastien Roch, Reducing Seed Bias in Respondent-Driven Sampling by Estimating Block Transition Probabilities, Preprint, 2018. https://arxiv.org/abs/1812.01188
Ke Chen, Qin Li, Jianfeng Lu, Stephen J. Wright, Random Sampling and Efficient Algorithms for Multiscale PDEs, https://arxiv.org/abs/1807.08848, Submitted, July 2018
Yuanzhi Li, Yingyu Liang, Learning Mixtures of Linear Regressions with Nearly Optimal Complexity, https://arxiv.org/abs/1802.07895, Submitted, Conference on Learning Theory, 2018.
Hongyi Wang*, Scott Sievert*, Zachary Charles, Dimitris Papailiopoulos, Stephen Wright (* denotes equal contribution), ATOMO: Communication-efficient Learning via Atomic Sparsification, NeurIPS 2018.
Jared D. Martin, Adrienne Wood, William T. L. Cox, Scott Sievert, Robert Nowak, Eva Gilboa-Schechtman, & Paula M. Niedenthal, Validating a social-functional smile typology: Insights from machine learning and computer-assisted facial expression analysis, Cognitive Science, Submitted, May 2018.
Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch, Coalescent-based species tree estimation: a stochastic Farris transform, https://arxiv.org/abs/1707.04300, submitted, 2017.
Sebastien Roch, Karl Rohe, Generalized least squares can overcome the critical threshold in respondent-driven sampling, https://arxiv.org/abs/1708.04999, Proceedings of the National Academy of Sciences, 115(41):10299-10304, 2018.
Sebastien Roch, Kun-Chieh Wang, Circular Networks from Distorted Metrics, https://arxiv.org/abs/1707.05722, Proceedings of the 22nd Annual International Conference on Research in Computational Molecular Biology (RECOMB) 2018, 167–176.
Zachary Charles, Dimitris Papailiopoulos and Jordan Ellenberg, Approximate Gradient Coding via Sparse Random Graphs, https://arxiv.org/pdf/1711.06771.pdf, November 2017.
Zachary Charles and Dimitris Papailiopoulos, Stability and Generalization of Learning Algorithms that Converge to Global Optima, https://arxiv.org/pdf/1710.08402.pdf, accepted at ICML, October 2017.
Lingjiao Chen, Hongyi Wang, Zachary Charles, Dimitris Papailiopoulos, DRACO: Robust Distributed Training via Redundant Gradients, https://arxiv.org/pdf/1803.09877.pdf, accepted at ICML, April 2018.
Zachary Charles and Dimitris Papailiopoulos, Gradient coding using the stochastic block model, accepted at ISIT, 2018.
Ke Chen, Qin Li, Jianfeng Lu, and Stephen J. Wright, Randomized sampling for basis functions construction in generalized finite element methods, https://arxiv.org/abs/1801.06938, submitted to SIAM Multiscale Modeling and Simulation, 2018.
Ke Chen, Qin Li, and Jian-Guo Liu, Online learning in optical tomography: a stochastic approach, http://iopscience.iop.org/article/10.1088/1361-6420/aac220/pdf, accepted at Inverse Problems, 2018.
Sebastien Roch, Michael Nute, Tandy Warnow, Long-branch attraction in species tree estimation: inconsistency of partitioned likelihood and topology-based summary methods, https://arxiv.org/abs/1803.02800, to appear in Systematic Biology, 2018.
Zvi Rosen, Anand Bhaskar, Sebastien Roch, Yun S. Song, Geometry of the sample frequency spectrum and the perils of demographic inference, https://arxiv.org/abs/1712.05035, Genetics, 210(2):665-682, 2018.
Wai-Tong Fan, Sebastien Roch, Necessary and sufficient conditions for consistent root reconstruction in Markov models on trees, https://arxiv.org/abs/1707.05702, Electronic Journal of Probability, Volume 23, paper no. 47, 24 pp., 2018.
Sumeet Katariya, Lalit Jain, Nandana Sengupta, James Evans, Robert Nowak, Adaptive Sampling for Coarse Ranking, https://arxiv.org/abs/1802.07176, AISTATS, February 2018.
Ching-pei Lee, Cong Han Lim, and Stephen J. Wright, A distributed quasi-Newton algorithm for empirical risk minimization with nonsmooth regularization, https://arxiv.org/abs/1803.01370, KDD, 2018.
Bin Hu, Laurent Lessard, and Stephen J. Wright, Dissipativity theory for accelerating stochastic variance reduction: a unified analysis of SVRG and Katyusha using semidefinite programs, ICML, 2018.
Gábor Braun, Sebastian Pokutta, Dan Tu, and Stephen Wright, Blended Conditional Gradients: the unconditioning of conditional gradients, https://arxiv.org/abs/1805.07311.
Greg Ongie, Laura Balzano, Daniel Pimentel-Alarcón, Rebecca Willett, Robert D. Nowak, Tensor Methods for Nonlinear Matrix Completion, https://arxiv.org/abs/1804.10266, April 2018, submitted.
Yuan Li, Garvesh Raskutti, Rebecca Willett, Graph-based regularization for regression problems with highly-correlated designs, https://arxiv.org/abs/1803.07658, March, 2018, submitted.
Clément W. Royer, Michael O’Neill, Stephen J. Wright, A Newton-CG Algorithm with Complexity Guarantees for Smooth Unconstrained Optimization, https://arxiv.org/abs/1803.02924, March, 2018, to appear in Mathematical Programming, Series A, 2019.
Benjamin Mark, Garvesh Raskutti, Rebecca Willett, Network Estimation from Point Process Data, https://arxiv.org/abs/1802.04838, February 2018, submitted to IEEE Transactions on Information Theory.
Xiaomin Zhang, Xuezhou Zhang, Xiaojin Zhu, Po-Ling Loh, Theoretical support of machine learning debugging via weighted M-estimation, January 2018, submitted.
Xuezhou Zhang, Xiaojin Zhu, Stephen J. Wright, Training set debugging using trusted items, January, 2018, https://arxiv.org/abs/1801.08019, AAAI 2018.
Kwang-Sung Jun, Francesco Orabona, Stephen Wright, Rebecca Willett, Online Learning for Changing Environments using Coin Betting, November, 2017, https://arxiv.org/abs/1711.02545, to appear in Electronic Journal of Statistics.
Royer, C. and Wright, S. J., Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization, November, 2017, https://arxiv.org/abs/1706.03131, SIAM Journal on Optimization.