Workshop on optimization methods got accepted at ICML 2020!
Semi-supervised learning (SSL) - A systematic survey
Mar 2020 Co-authors: Vatsal Shah, John Chen
Solution uniqueness on overparameterized matrix sensing at AISTATS 2020.
An updated overview of recent gradient descent algorithms
Mar 2020 Re-posted from the website of my student, John Chen
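For a flavor of what the overview covers, here is a minimal NumPy sketch (my own illustrative toy, not code from the post) contrasting plain gradient descent with the momentum-style updates that most recent algorithms build on:

```python
import numpy as np

# Toy objective: f(x) = 0.5 * x^T Q x, minimized at x = 0.
Q = np.diag([1.0, 100.0])          # ill-conditioned on purpose
grad = lambda x: Q @ x

def gd(x0, lr=1e-2, iters=500):
    """Plain gradient descent."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def momentum(x0, lr=1e-2, beta=0.9, iters=500):
    """Heavy-ball / momentum update."""
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        v = beta * v + grad(x)     # accumulate a velocity term
        x -= lr * v
    return x

def adam(x0, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8, iters=500):
    """Adam: momentum plus a per-coordinate adaptive step size."""
    x = x0.copy()
    m, v = np.zeros_like(x0), np.zeros_like(x0)
    for t in range(1, iters + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g         # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g ** 2    # second-moment estimate
        m_hat = m / (1 - b1 ** t)         # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x0 = np.array([1.0, 1.0])
for f in (gd, momentum, adam):
    print(f.__name__, f(x0))       # all three should approach the minimizer 0
```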
FourierSAT at AAAI 2020.
Learning Sparse Distributions using Iterative Hard Thresholding at NeurIPS 2019.
Rice’s Data to Knowledge team gives recommendations to Houston City Council
Aug 2019 Links below.
Workshop on optimization methods got accepted at NeurIPS 2019!
Compressing Gradient Optimizers via Count-Sketches at ICML 2019.
Panic attacks during public speaking
Jun 2019 This blog post is about panic attacks during public speaking. I decided to write about it after experiencing a panic attack during one of my lectures for Advanced topics in optimization: From simple to complex ML systems at Rice University. If you suffer from something similar, I believe it is a relief to know that there are people with similar experiences. And there is a way forward (not to overcome it once and for all, but to be bold and not let it affect you too much).
Our algorithm in the news!
Aug 2018 Our algorithm was featured in the Rice news!
Fast and provable algorithms for quantum state tomography (QST) at Nature.
Simple algorithms for low-rank approximations at UAI 2018.
IHT-dies-hard paper got accepted at AISTATS 2018.
Statistical inference using SGD got accepted at AAAI 2018.
Setting up an Nvidia GPU-equipped computer
Oct 2017 Very roughly, there are two trends in deep learning research: one focuses on applications (check the vast volume of new papers in the arXiv RSS feeds for Learning Theory and Machine Learning, and even Optimization). The other focuses on understanding how to train and set up a deep neural network. The latter is exactly what worries, puzzles, and interests me: we have (almost) no idea why these tools work or how we should train them.
Our paper in AISTATS 2017
Non-convex workshop at ICML 2016 - Part I
Sep 2016 With this blog post, I want to summarize the key points presented at the ICML 2016 workshop Advances in non-convex analysis and optimization, hosted by Animashree Anandkumar, Sivaraman Balakrishnan, Srinadh Bhojanapalli, Kamalika Chaudhuri, Yudong Chen, Percy Liang, Praneeth Netrapalli, Sewoong Oh, Zhaoran Wang, and me.
Papers in ICML 2016 & COLT 2016
Workshop on non-convex methods at ICML 2016!
Simons Seminar at UT
Mar 2016 This semester I’m co-organizing the Simons seminar at UT Austin.
Telling a story about IHT using Python (Chapter II)
Jan 2016 In this notebook, $(i)$ we will dive further into the original IHT scheme and note some of its pros and cons in solving the CS problem, and $(ii)$ we will give an overview of more recent developments on constant step size selection for IHT.
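For context, here is a minimal NumPy sketch (illustrative, not the notebook's code) of the IHT iteration with one common constant step size choice, mu = 1/||A||_2^2, which keeps the gradient step well-scaled:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-in-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht_constant_step(A, y, k, iters=200):
    """IHT on 0.5 * ||y - A x||^2 with constant step size 1 / ||A||_2^2."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # inverse squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x
```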
Three papers accepted to AISTATS 2016
Telling a story about IHT using Python (Chapter I)
Dec 2015 The purpose of this notebook is two-fold: $(i)$ since this is my first attempt to “migrate” from a Matlab-style mathematical programming language to Python, this very first notebook serves as a guide for future posts; $(ii)$ this is a loooong post that presents the Iterative Hard Thresholding (IHT) algorithm and its variants, a method that solves Compressive Sensing problems in the non-convex setting.
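As a taste of the post, here is a minimal self-contained sketch (my own toy example, not the notebook's code) of the basic IHT iteration on a synthetic Compressive Sensing instance, assuming the usual ||A||_2 < 1 normalization of the original scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CS instance: y = A @ x_true, with k-sparse x_true.
n, p, k = 80, 200, 5
A = rng.standard_normal((n, p))
A /= 1.01 * np.linalg.norm(A, 2)        # basic IHT assumes ||A||_2 < 1
x_true = np.zeros(p)
x_true[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basic IHT: unit-step gradient step on 0.5 * ||y - A x||^2,
# then keep only the k largest-in-magnitude entries.
x = np.zeros(p)
for _ in range(300):
    x += A.T @ (y - A @ x)
    small = np.argsort(np.abs(x))[:-k]  # indices of all but the top-k entries
    x[small] = 0.0

print(np.linalg.norm(x - x_true))       # should be near zero on this noiseless toy
```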
Seminars at UT
Nov 2015 This semester I’m co-organizing two seminars at UT Austin.