Bayesian Coresets: Revisiting the Nonconvex Optimization Perspective at AISTATS 2021.
Our paper, "Bayesian Coresets: Revisiting the Nonconvex Optimization Perspective", has been accepted at this year's AISTATS conference (virtual).
On Continuous Local BDD-Based Search for Hybrid SAT Solving at AAAI 2021.
Our paper, "On Continuous Local BDD-Based Search for Hybrid SAT Solving", has been accepted at this year's AAAI conference (virtual).
Negative sampling in semi-supervised learning at ICML 2020.
Our workshop on optimization methods was accepted at ICML 2020!
Semi-supervised learning (SSL) - A systematic survey
Co-authors: Vatsal Shah, John Chen
Solution uniqueness on overparameterized matrix sensing at AISTATS 2020.
An updated overview of recent gradient descent algorithms
Re-posted from the website of my student, John Chen.
FourierSAT at AAAI 2020.
Learning Sparse Distributions using Iterative Hard Thresholding at NeurIPS 2019.
Rice’s Data to Knowledge team gives recommendations to Houston City Council
Our workshop on optimization methods was accepted at NeurIPS 2019!
Compressing Gradient Optimizers via Count-Sketches at ICML 2019.
Panic attacks during public speaking
This blog post is about panic attacks during public speaking. I decided to write about this after a panic attack happened during one of my lectures for "Advanced topics in optimization: From simple to complex ML systems" at Rice University. If you suffer from something similar, I believe it is a relief to know that there are people with similar experiences. And there is a way out of it (not to overcome it once and for all, but to be bold and not be affected by it too much).
Our algorithm in the news!
Our algorithm in the Rice news!
Fast and provable algorithms for quantum state tomography (QST) at Nature.
Simple algorithms for low-rank approximations at UAI 2018.
Our IHT-dies-hard paper was accepted at AISTATS 2018.
Our paper on statistical inference using SGD was accepted at AAAI 2018.
Setting up an Nvidia GPU-equipped computer
Very roughly, there are two trends in deep learning research. One focuses on applications (check the vast volume of new papers in the arXiv RSS feeds for Learning Theory and Machine Learning, and even Optimization). The other focuses on understanding how to train and set up a deep neural network. The latter is exactly what puzzles, worries, and interests me: we have (almost) no idea why these tools work and how we should train them.
Our paper in AISTATS 2017
Non-convex workshop at ICML 2016 - Part I
With this blog post, I want to summarize the key points presented at the ICML 2016 workshop Advances in non-convex analysis and optimization, hosted by Animashree Anandkumar, Sivaraman Balakrishnan, Srinadh Bhojanapalli, Kamalika Chaudhuri, Yudong Chen, Percy Liang, Praneeth Netrapalli, Sewoong Oh, Zhaoran Wang, and me.
Papers in ICML 2016 & COLT 2016
Workshop on non-convex methods at ICML 2016!
Simons Seminar at UT
This semester I’m co-organizing the Simons seminar at UT Austin.
Telling a story about IHT using Python (Chapter II)
In this notebook, $(i)$ we will dive deeper into the original IHT scheme and note some of its pros and cons in solving the compressed sensing (CS) problem, and $(ii)$ we will provide an overview of more recent developments on constant step size selection for IHT. For readers new to IHT, a minimal sketch of the basic iteration follows (variable names and the toy instance are illustrative, and a fixed step size $\mu$ is assumed rather than the adaptive choices discussed in the notebook).
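```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def iht(A, y, k, mu=1.0, iters=100):
    """Basic IHT for y ~ A @ x with x assumed k-sparse:
    x_{t+1} = H_k( x_t + mu * A.T @ (y - A @ x_t) ).
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x

# Toy CS instance (hypothetical sizes): 50 Gaussian measurements
# of a 5-sparse signal in R^200.
rng = np.random.default_rng(0)
n, p, k = 50, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```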