Tech Reports

H Wang, M Yurochkin, Y Sun, D Papailiopoulos, Y Khazaeni, Federated Learning with Matched Averaging, ICLR 2020

S Rajput, H Wang, Z Charles, D Papailiopoulos, DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation, NeurIPS 2019

S Liu, D Papailiopoulos, D Achlioptas, Bad Global Minima Exist and SGD Can Reach Them, ICML 2019, Workshop on Deep Phenomena, https://arxiv.org/abs/1906.02613

Shashank Rajput, Anant Gupta, Dimitris Papailiopoulos, Closing the convergence gap of SGD without replacement, February 2020, https://arxiv.org/abs/2002.10400

Z Charles, S Rajput, S Wright, D Papailiopoulos, Convergence and Margin of Adversarial Training on Separable Data, May 2019, https://arxiv.org/abs/1905.09209

Ng, T. L. and Newton, M. A., Random weighting to approximate posterior inference in LASSO regression, submitted, February, 2020, https://arxiv.org/abs/2002.02629

Ke Chen, Qin Li, Kit Newton, Steve Wright, Structured random sketching for PDE inverse problems, submitted

Ke Chen, Qin Li, Jianfeng Lu, Steve Wright, Randomized sampling for basis functions construction in generalized finite element methods, accepted, SIAM-MMS, 2019

Zhiyan Ding, Lukas Einkemmer, and Qin Li, Error analysis of an asymptotic preserving dynamical low-rank integrator for the multi-scale radiative transfer equation, submitted

Wright, S. J. and Lee, C.-p., Analyzing random permutations for cyclic coordinate descent, to appear in Mathematics of Computation, 2020.

Curtis, F. E., Robinson, D. P., Royer, C. W., and Wright, S. J., Trust-region Newton-CG with strong second-order complexity guarantees for nonconvex optimization, submitted, December, 2019. https://arxiv.org/abs/1912.04365

Xie, Y. and Wright, S. J., Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints, September, 2019. https://arxiv.org/abs/1908.00131

Han, R., Willett, R., and Zhang, A., An optimal statistical and computational framework for generalized tensor estimation, submitted, February, 2020. https://arxiv.org/abs/2002.11255

Zhu, Z., Li, X., Wang, M., and Zhang, A., Learning Markov models via low-rank optimization, under revision, October, 2019. https://arxiv.org/abs/1907.00113

Cai, T. T., Zhang, A., and Zhou, Y., Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference, submitted, September, 2019. https://arxiv.org/abs/1909.09851

Anru Zhang, Yuetian Luo, Garvesh Raskutti, and Ming Yuan, ISLET: Fast and optimal low-rank tensor regression via importance sketching, SIAM Journal on Mathematics of Data Science, to appear, 2020. https://arxiv.org/abs/1911.03804

Yiding Chen and Xiaojin Zhu, Optimal attack against autoregressive models by manipulating the environment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020.

Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, and Cho-Jui Hsieh, A unified framework for data poisoning attack to graph-based semi-supervised learning, In Advances in Neural Information Processing Systems (NeurIPS), 2019.

Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, and Adish Singla, Preference-based batch and sequential teaching: Towards a unified view of models, In Advances in Neural Information Processing Systems (NeurIPS), 2019.

Ke Chen, Qin Li, Stephen J. Wright, Schwarz iteration method for elliptic equation with rough media based on random sampling, submitted August 2019; accepted, ICCM 2018 proceedings

Ke Chen, Qin Li, Jianfeng Lu, Stephen J. Wright, A low-rank Schwarz method for radiative transport equation with heterogeneous scattering coefficient. arXiv:1906.02176. 2019.

Ru-Yu Lai and Qin Li, Parameter Reconstruction for general transport equation, arXiv:1904.10049, 2019.

Kit Newton, Qin Li and Andrew Stuart, Diffusive optical tomography in the Bayesian framework, arXiv:1902.10317, 2019, accepted, SIAM-MMS

Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert Nowak, Should Adversarial Attacks Use Pixel p-Norm?, arXiv:1906.02439. 2019.

Sanjoy Dasgupta, Daniel Hsu, Stefanos Poulis, Xiaojin Zhu, Teaching a black-box learner, In The 36th International Conference on Machine Learning (ICML), 2019.

Yuzhe Ma, Xuezhou Zhang, Wen Sun, Xiaojin Zhu, Policy Poisoning in Batch Reinforcement Learning and Control, In Advances in Neural Information Processing Systems (NeurIPS), 2019.

Xuezhou Zhang, Xiaojin Zhu, and Laurent Lessard, Online Data Poisoning Attacks, arXiv:1903.01666. 2019.

Yiding Chen and Xiaojin Zhu, Optimal Adversarial Attack on Autoregressive Models, arXiv:1902.00202, 2019.

Owen Levin, Zihang Meng, Vikas Singh, Xiaojin Zhu, Fooling Computer Vision into Inferring the Wrong Body Mass Index, arXiv:1905.06916, 2019.

Zhang H, Ericksen SS, Lee C-p, Ananiev GE, Wlodarchak N, Yu P, Mitchell JC, Gitter A, Wright SJ, Hoffman FM, Wildman SA, Newton MA, Predicting kinase inhibitors using bioactivity matrix derived informer sets, PLoS Comput Biol 15(8): e1006813, 2019.

Yuzhe Ma, Xiaojin Zhu, and Justin Hsu. Data Poisoning against Differentially-Private Learners: Attacks and Defenses. In The 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.

O’Neill, M. and Wright, S. J., A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees, submitted, April, 2019.

Glendening, E., Wright, S. J., and Weinhold, F., Efficient optimization of natural resonance theory weightings and bond orders by Gram-based convex programming, to appear in Journal of Computational Chemistry, 2019.

Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang, Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers, https://arxiv.org/abs/1811.04918

Hongyang Zhang, Vatsal Sharan, Moses Charikar, Yingyu Liang, Recovery Guarantees for Quadratic Tensors with Limited Observations, International Conference on Artificial Intelligence and Statistics (AISTATS), 2019

Yuanzhi Li, Yingyu Liang, Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data, Neural Information Processing Systems (NeurIPS), 2018.

H. Wang, Z. Charles, D. Papailiopoulos, ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding.

S Rajput, Z. Feng, Z. Charles, P.-L. Loh, D. Papailiopoulos, Does Data Augmentation Lead to Positive Margin? ICML 2019

Z. Charles, H. Rosenberg, D. Papailiopoulos, A Geometric Perspective on the Transferability of Adversarial Directions, AISTATS 2019

L. Chen, H. Wang, J. Zhao, P. Koutris, D. Papailiopoulos, The Effect of Network Width on the Performance of Large-batch Training, NeurIPS 2018.

Z. Charles, D. Papailiopoulos, Gradient Coding Using the Stochastic Block Model, ISIT 2018.

Anru Zhang, Cross: efficient tensor completion. Annals of Statistics, 47, 936-964, 2019.

Anru Zhang and Dong Xia. Tensor SVD: statistical and computational limits. IEEE Transactions on Information Theory, 64, 7311-7338, 2018.

Anru Zhang and Rungang Han, Optimal sparse singular value decomposition for high-dimensional high-order data. Journal of the American Statistical Association, to appear, 2019.

Anru Zhang and Kehui Chen. Nonparametric covariance estimation for mixed longitudinal studies, with applications in midlife women’s health, under revision, October, 2018.

Anru Zhang and Yuchen Zhou. On the non-asymptotic and sharp lower tail bounds of random variables, under revision, November, 2018.

Pixu Shi, Yuchen Zhou, and Anru Zhang. High-dimensional log-error-in-variable regression with applications to microbial compositional data analysis, submitted, December, 2018.

Anru Zhang and Mengdi Wang. Spectral state compression of Markov processes, submitted, September, 2018.

Botao Hao, Anru Zhang, and Guang Cheng. Sparse and low-rank tensor estimation via cubic sketchings, under revision, January, 2019.

Anru Zhang, T. Tony Cai, and Yihong Wu, Heteroskedastic PCA: algorithm, optimality, and applications, submitted, October, 2018.

Anru Zhang, Yuetian Luo, Garvesh Raskutti, Ming Yuan, ISLET: Fast and optimal low-rank tensor regression via importance sketching, submitted, December, 2018

Lili Zheng, Garvesh Raskutti, Testing for high-dimensional network parameters in autoregressive models, submitted, December 2018

Benjamin Mark, Garvesh Raskutti, Rebecca Willett, Estimating Network Structure from Incomplete Event Data, (AISTATS), 2019

K.-S. Jun, R. Willett, S. Wright, and R. Nowak, Bilinear Bandits with Low-Rank Structure, submitted, ICML, 2019

D. Gilton, G. Ongie, and R. Willett, Neumann Networks for Inverse Problems in Imaging, submitted, January 2019

G. Ongie, L. Balzano, D. Pimentel-Alarcón, R. Willett, and R. D. Nowak, Tensor Methods for Nonlinear Matrix Completion, 2018.

Y. Li, G. Raskutti, and R. Willett, Graph-based regularization for regression problems with highly-correlated designs, 2018.

Z. Kuang, Y. Bao, J. Thomson, M. Caldwell, P. Peissig, R. Stewart, R. Willett, and D. Page. A Machine-Learning Based Drug Repurposing Approach Using Baseline Regularization. Invited book chapter. In Silico Repurposing. Methods in Molecular Biology Series. Springer, 2018.

Z. Charles, A. Jalali, and R. Willett, Sparse Subspace Clustering with Missing and Corrupted Data, IEEE Data Science Workshop, 2018. https://arxiv.org/abs/1707.02461

A. Jalali and R. Willett, Missing data in sparse transition matrix estimation for sub-gaussian vector autoregressive processes, accepted to American Control Conference, 2018.

Anne Gelb, Xiao Hou, Qin Li, Numerical analysis for conservation laws using $l_1$ minimization, submitted, December, 2018

Kit Newton, Qin Li, Diffusion equation assisted Markov Chain Monte Carlo methods for the inverse radiative transfer equation, submitted, December, 2018

Jun, K.-S., Willett, R., Wright, S. J. and Nowak, R., “Bilinear bandits with low-rank structure,” ICML, 2019. https://arxiv.org/abs/1901.02470

Lee, C.-p. and Wright, S. J., “First-order algorithms converge faster than O(1/k) on convex problems,” ICML, 2019

Lim, C.-H., Linderoth, J. T., Luedtke, J. R., and Wright, S. J., “Parallelizing subgradient methods for the Lagrangian dual in stochastic mixed-integer programming,” submitted, October, 2018.

O’Neill, M. and Wright, S. J., Behavior of accelerated gradient methods near critical points of nonconvex functions, https://arxiv.org/abs/1706.07993, to appear in Mathematical Programming, Series B.

Lee, C.-p. and Wright, S. J., “Inexact successive quadratic approximation for regularized optimization,” to appear in Computational Optimization and Applications, 2019.

Laurent Lessard, Xuezhou Zhang, and Xiaojin Zhu. An optimal control approach to sequential machine teaching. In The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.

Xiaojin Zhu. An optimal control view of adversarial machine learning. arXiv:1811.04422, 2018.

Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Xiaojin Zhu. Adversarial attacks on stochastic bandits. In Advances in Neural Information Processing Systems (NIPS), 2018.

Xiaojin Zhu, Adish Singla, Sandra Zilles, Anna N. Rafferty. An Overview of Machine Teaching. arXiv:1801.05927, 2018.

Yuzhe Ma, Robert Nowak, Philippe Rigollet, Xuezhou Zhang, and Xiaojin Zhu. Teacher improves learning by selecting a training subset. In The 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.

Ayon Sen, Scott Alfeld, Xuezhou Zhang, Ara Vartanian, Yuzhe Ma, and Xiaojin Zhu. Training set camouflage. In Conference on Decision and Game Theory for Security (GameSec), 2018.

Yuzhe Ma, Kwang-Sung Jun, Lihong Li, and Xiaojin Zhu. Data poisoning attacks in contextual bandits. In Conference on Decision and Game Theory for Security (GameSec), 2018.

Evan Hernandez, Ara Vartanian, and Xiaojin Zhu. Program synthesis with visual specification. arXiv:1806.00938, 2018.

Ayon Sen, Purav Patel, Martina A. Rau, Blake Mason, Robert Nowak, Timothy T. Rogers, and Xiaojin Zhu. Machine beats human at sequencing visuals for perceptual-fluency practice. In Educational Data Mining, 2018.

Lee, C.-p. and Wright, S. J., “Inexact variable metric stochastic block-coordinate descent for regularized optimization,” http://www.optimization-online.org/DB_HTML/2018/08/6753.html, August, 2018.

Oswal, Urvashi, and Robert Nowak. “Scalable Sparse Subspace Clustering via Ordered Weighted $l_1$ Regression.” In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 305-312. IEEE, 2018.

Katariya, Sumeet, Lalit Jain, Nandana Sengupta, James Evans, and Robert Nowak. “Adaptive Sampling for Coarse Ranking.” In International Conference on Artificial Intelligence and Statistics, pp. 1839-1848. 2018.

Jun, Kwang-Sung, and Robert Nowak. “Bayesian Active Learning on Graphs.” In Cooperative and Graph Signal Processing, pp. 283-297. Academic Press, 2018.

Sebastien Roch, On the Variance of Internode Distance Under the Multispecies Coalescent, Proceedings of RECOMB-CG 2018, 196-206.

Yuling Yan, Bret Hanlon, Sebastien Roch, Karl Rohe. Asymptotic seed bias in respondent-driven sampling, Preprint, 2018. https://arxiv.org/abs/1808.10593

Yilin Zhang, Karl Rohe, Sebastien Roch, Reducing Seed Bias in Respondent-Driven Sampling by Estimating Block Transition Probabilities, Preprint, 2018. https://arxiv.org/abs/1812.01188

Ke Chen, Qin Li, Jianfeng Lu, Stephen J. Wright, Random Sampling and Efficient Algorithms for Multiscale PDEs, https://arxiv.org/abs/1807.08848, Submitted, July 2018

Yuanzhi Li, Yingyu Liang, Learning Mixtures of Linear Regressions with Nearly Optimal Complexity, https://arxiv.org/abs/1802.07895, Submitted, Conference on Learning Theory, 2018.

Hongyi Wang*, Scott Sievert*, Zachary Charles, Dimitris Papailiopoulos, Stephen Wright (authors with * contributed equally), ATOMO: Communication-efficient Learning via Atomic Sparsification, NeurIPS 2018.

Jared D. Martin, Adrienne Wood, William T. L. Cox, Scott Sievert, Robert Nowak, Eva Gilboa-Schechtman, & Paula M. Niedenthal, Validating a social-functional smile typology: Insights from machine learning and computer-assisted facial expression analysis, Cognitive Science, Submitted, May 2018.

Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch, Coalescent-based species tree estimation: a stochastic Farris transform, https://arxiv.org/abs/1707.04300, Submitted, 2017.

Sebastien Roch, Karl Rohe, Generalized least squares can overcome the critical threshold in respondent-driven sampling, https://arxiv.org/abs/1708.04999, Proceedings of the National Academy of Sciences, 115(41):10299-10304, 2018.

Sebastien Roch, Kun-Chieh Wang, Circular Networks from Distorted Metrics, https://arxiv.org/abs/1707.05722, Proceedings of the 22nd Annual International Conference on Research in Computational Molecular Biology (RECOMB) 2018, 167–176

Zachary Charles, Dimitris Papailiopoulos and Jordan Ellenberg, Approximate Gradient Coding via Sparse Random Graphs, https://arxiv.org/pdf/1711.06771.pdf, November 2017.

Zachary Charles and Dimitris Papailiopoulos, Stability and Generalization of Learning Algorithms that Converge to Global Optima, https://arxiv.org/pdf/1710.08402.pdf, ICML, accepted, October 2017.

Lingjiao Chen, Hongyi Wang, Zachary Charles, Dimitris Papailiopoulos, DRACO: Robust Distributed Training via Redundant Gradients, https://arxiv.org/pdf/1803.09877.pdf, ICML, accepted, April 2018.

Zachary Charles and Dimitris Papailiopoulos, Gradient coding using the stochastic block model, ISIT, Accepted 2018.

Ke Chen, Qin Li, Jianfeng Lu, and Stephen J. Wright, Randomized sampling for basis functions construction in generalized finite element methods, https://arxiv.org/abs/1801.06938, submitted to SIAM-MMS, 2018.

Ke Chen, Qin Li, and Jian-Guo Liu, Online learning in optical tomography: a stochastic approach, http://iopscience.iop.org/article/10.1088/1361-6420/aac220/pdf, Inverse Problems, accepted, 2018.

Sebastien Roch, Michael Nute, Tandy Warnow, Long-branch attraction in species tree estimation: inconsistency of partitioned likelihood and topology-based summary methods, https://arxiv.org/abs/1803.02800, to appear in Systematic Biology, 2018.

Zvi Rosen, Anand Bhaskar, Sebastien Roch, Yun S. Song, Geometry of the sample frequency spectrum and the perils of demographic inference, https://arxiv.org/abs/1712.05035, Genetics, 210(2):665-682, 2018.

Wai-Tong Fan, Sebastien Roch, Necessary and sufficient conditions for consistent root reconstruction in Markov models on trees, https://arxiv.org/abs/1707.05702, Electronic Journal of Probability, Volume 23, paper no. 47, 24 pp., 2018.

Sumeet Katariya, Lalit Jain, Nandana Sengupta, James Evans, Robert Nowak, Adaptive Sampling for Coarse Ranking, https://arxiv.org/abs/1802.07176, AISTATS, February 2018.

Ching-pei Lee, Cong Han Lim, and Stephen J. Wright, A distributed quasi-Newton algorithm for empirical risk minimization with nonsmooth regularization, https://arxiv.org/abs/1803.01370, KDD, 2018.

Bin Hu, Laurent Lessard, and Stephen J Wright, Dissipativity theory for accelerating stochastic variance reduction: A Unified analysis of SVRG and Katyusha using semidefinite programs, ICML, 2018.

Gábor Braun, Sebastian Pokutta, Dan Tu, and Stephen Wright, Blended Conditional Gradients: The unconditioning of conditional gradients, https://arxiv.org/abs/1805.07311, ICML 2019.

Mert Gurbuzbalaban, Asuman Ozdaglar, Nuri Denizcan Vanli, Stephen J. Wright, Randomness and permutations in coordinate descent methods, https://arxiv.org/abs/1803.08200, submitted, March, 2018.

Greg Ongie, Laura Balzano, Daniel Pimentel-Alarcón, Rebecca Willett, Robert D. Nowak, Tensor Methods for Nonlinear Matrix Completion, https://arxiv.org/abs/1804.10266, April 2018, submitted.

Yuan Li, Garvesh Raskutti, Rebecca Willett, Graph-based regularization for regression problems with highly-correlated designs, https://arxiv.org/abs/1803.07658, March, 2018, submitted.

Clément W. Royer, Michael O’Neill, Stephen J. Wright, A Newton-CG Algorithm with Complexity Guarantees for Smooth Unconstrained Optimization, https://arxiv.org/abs/1803.02924, March, 2018, to appear in Mathematical Programming, Series A, 2019.

Benjamin Mark, Garvesh Raskutti, Rebecca Willett, Network Estimation from Point Process Data,  https://arxiv.org/abs/1802.04838, February 2018, submitted to IEEE Transactions on Information Theory.

Xiaomin Zhang, Xuezhou Zhang, Xiaojin Zhu, Po-Ling Loh, Theoretical support of machine learning debugging via weighted M-estimation, January 2018, submitted.

Xuezhou Zhang, Xiaojin Zhu, Stephen J. Wright, Training set debugging using trusted items, January, 2018, https://arxiv.org/abs/1801.08019, AAAI 2018.

Kwang-Sung Jun, Francesco Orabona, Stephen Wright, Rebecca Willett, Online Learning for Changing Environments using Coin Betting, November, 2017, https://arxiv.org/abs/1711.02545. To appear in Electronic Journal of Statistics.

Royer, C. and Wright, S. J., Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization, November, 2017, https://arxiv.org/abs/1706.03131, SIAM Journal on Optimization.