Nonconvex Formulations and Algorithms in Data Sciences

Dates: July 30-August 1, 2018.
Venue: University of Wisconsin-Madison, Discovery Building, Orchard View Room
Sponsored by: the NSF TRIPODS Institutes at the University of Wisconsin-Madison and the University of Washington, and the Machines, Algorithms, and Data Lab, an Air Force University Center of Excellence at UW-Madison.
Description: The aim of this workshop is to explore nonconvex problems that arise in data science and to develop algorithms for solving these problems that are practical, that draw on the full range of technology from optimization, statistics, and computational hardware and systems, and that come with performance guarantees. The workshop brings together people from mathematics, statistics, and computer science who are interested in nonconvex optimization. It will feature invited presentations, open problem sessions, and panel discussions addressing questions such as: What are the most important instances of nonconvex learning problems? What are the key properties of nonconvex optimization problems? What are the current bottlenecks in solving nonconvex formulations? What are the gaps in our understanding of algorithm performance?



Hotel: IFDS has reserved a block of rooms at Hawthorn Suites (Madison/Fitchburg, 5421 Caddis Bend, Fitchburg, Wisconsin 53711-7127).
Please call 608-514-1329 to reserve a room and use the group code “Group IFDS workshop”.


Lectures are available on Vimeo; please see the lecture page.

Monday, July 30, 2018

09:30 – 09:40 Welcome
Stephen Wright (University of Wisconsin-Madison)
09:40 – 10:20 Sketchy decisions: Low‐rank convex matrix optimization with optimal storage
Joel Tropp (California Institute of Technology)
10:20 – 11:00 Accelerated Methods for Non‐Convex Optimization
Aaron Sidford (Stanford University)
11:00 – 11:30 Break
11:30 – 12:10 Stochastic methods for nonsmooth nonconvex optimization
Dmitriy Drusvyatskiy (University of Washington)
12:10 – 1:40 LUNCH (attendees on their own)
1:40 – 3:00 Open Problems: Breakout or Large-Group Discussion
3:00 – 3:30 Break
3:30 – 4:10 Do I Believe My Labels?
Sujay Sanghavi (University of Texas, Austin)
4:10 – 4:50 When Recurrent Models Don’t Need to be Recurrent
Moritz Hardt (University of California, Berkeley)

Tuesday, July 31, 2018

09:30 – 10:10 Towards Theoretical Understanding of Overparametrization in Deep Learning
Jason D. Lee (University of Southern California)
10:10 – 10:50 Learning Regularizers from Data
Venkat Chandrasekaran (California Institute of Technology)
10:50 – 11:20 Break
11:20 – 12:00 Open Problems: Report‐Out, Large-Group Discussion
12:00 – 1:30 LUNCH (attendees on their own)
1:30 – 2:10 Fusion Subspace Clustering: Full & Incomplete Data
Daniel Leonardo Pimentel-Alarcon (Georgia State University)
2:10 – 2:50 Tractable nonconvex optimization via geometry
Suvrit Sra (Massachusetts Institute of Technology)
2:50 – 3:20 Break
3:20 – 4:00 SGD with AdaGrad: near-optimal convergence over nonconvex landscapes, no learning rate tuning required
Rachel Ward (University of Texas, Austin)
4:00 – 5:00 Open Problems

Wednesday, August 1, 2018

09:30 – 10:10 Your dreams may come true with MTP2
Caroline Uhler (Massachusetts Institute of Technology)
10:10 – 10:50 Convergence of Policy Gradient Methods for Linear Quadratic Regulators
Sham Kakade (University of Washington)
10:50 – 11:20 Break
11:20 – 12:00 Integrated Principal Component Analysis (iPCA) via a Kronecker Covariance Model
Genevera Allen (Rice University)
12:00 – 12:10 Closing Remarks
Stephen Wright (University of Wisconsin-Madison)