I am looking for students! Apply to the Rice CS PhD program!

## Code packages (before moving to GitHub)

• (Bi-) Factored Gradient Descent algorithm for low-rank recovery (Matlab)

This software package is a proof of concept for the $UV^\top$ parameterization in optimization, and focuses on first-order gradient descent solutions for the case of matrix sensing. The proposed algorithm, Bi-Factored Gradient Descent (BFGD), is an efficient first-order method that operates directly on the $U, V$ factors. Subsequent versions will include more involved applications, such as 1-bit matrix completion (logistic regression objective), low-rank image recovery from limited measurements, and quantum state tomography settings.
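
The package itself is in Matlab; as an illustration of the idea only, here is a minimal NumPy sketch of factored gradient descent for matrix sensing with a spectral initialization. All dimensions, the constant step size, and the iteration count below are illustrative choices, not the package's defaults (BFGD uses a more refined step-size rule):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 15, 2, 2000

# Well-conditioned rank-r ground truth X* (illustrative singular values 3, 2)
Q1, _ = np.linalg.qr(rng.standard_normal((n1, r)))
Q2, _ = np.linalg.qr(rng.standard_normal((n2, r)))
Xstar = Q1 @ np.diag([3.0, 2.0]) @ Q2.T

# Gaussian matrix-sensing operator: y_i = <A_i, X*>
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
y = A @ Xstar.ravel()

# Spectral initialization: top-r SVD of the back-projection A^*(y)
M0 = (A.T @ y).reshape(n1, n2)
Uh, s, Vh = np.linalg.svd(M0, full_matrices=False)
U = Uh[:, :r] * np.sqrt(s[:r])
V = Vh[:r, :].T * np.sqrt(s[:r])

eta = 0.2 / s[0]  # conservative constant step, scaled by the top singular value
for _ in range(1000):
    resid = A @ (U @ V.T).ravel() - y
    G = (A.T @ resid).reshape(n1, n2)          # gradient w.r.t. X = U V^T
    U, V = U - eta * (G @ V), V - eta * (G.T @ U)

err = np.linalg.norm(U @ V.T - Xstar) / np.linalg.norm(Xstar)
print(f"relative error: {err:.2e}")
```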

• A simple yet efficient algorithm for Bipartite Correlation Clustering (Matlab)

In this code, we implement a simple yet efficient algorithm for the problem of Bipartite Correlation Clustering (BCC). Our algorithm solves the more general problem of maximizing a bilinear form, a specific instance of which is the BCC problem. In the included demo, we run the algorithm on a small MovieLens dataset to automatically extract clusters of users with similar movie preferences.

• ALgebraic PursuitS (ALPS) for sparse signal recovery (Matlab)

This software package implements the ALgebraic PursuitS class of methods for sparse signal recovery. The ALPS framework includes some of the fastest Iterative Hard Thresholding (IHT) variants, utilizing optimal subspace exploration, cheap and adaptive step size selection, and Nesterov-style acceleration. ALPS methods are also backed by strong theoretical guarantees, based on RIP assumptions in compressed sensing settings.

• Matrix ALgebraic PursuitS (Matrix ALPS) for low rank recovery (Matlab)

This software package implements the Matrix ALgebraic PursuitS class of methods for low-rank recovery. This set of problems includes affine rank minimization, matrix completion, and PCA settings. As in the vector case, the Matrix ALPS framework includes some of the fastest Iterative Hard Thresholding (IHT) variants for low-rank matrix reconstruction, utilizing optimal subspace exploration, cheap and adaptive step size selection, and Nesterov-style acceleration. Matrix ALPS methods are also backed by strong theoretical guarantees, based on RIP assumptions in compressed sensing settings.
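
As with the vector case, the package is in Matlab; the matrix analogue of the hard-thresholding step replaces top-k entry selection with a truncated SVD. A minimal NumPy sketch of a plain rank-r IHT iteration for matrix sensing (unit step size, illustrative sizes; the package's variants are considerably more refined):

```python
import numpy as np

def rank_project(Z, r):
    """Best rank-r approximation of Z via truncated SVD (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

rng = np.random.default_rng(4)
n1, n2, r, m = 10, 10, 2, 3000

Xstar = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)   # Gaussian sensing operator
y = A @ Xstar.ravel()

X = np.zeros((n1, n2))
for _ in range(100):
    G = (A.T @ (y - A @ X.ravel())).reshape(n1, n2)  # back-projected residual
    X = rank_project(X + G, r)                       # gradient step + rank-r projection

err = np.linalg.norm(X - Xstar) / np.linalg.norm(Xstar)
print(f"relative error: {err:.2e}")
```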

• Matrix ALgebraic PursuitS (Matrix ALPS) for low rank + sparse recovery (Matlab)

This software package extends Matrix ALPS to the case of low-rank plus sparse recovery. Applications include video background subtraction and robust PCA, among others.

• Self-Concordant OPTimization (SCOPT) for composite convex problems (Matlab)

SCOPT is a MATLAB implementation of the proximal gradient, proximal quasi-Newton, proximal Newton, and path-following interior-point algorithms for solving composite convex minimization problems involving self-concordant and self-concordant-like functions.
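Of the methods above, the proximal gradient scheme is the simplest to sketch. Here is a minimal NumPy version for sparse logistic regression (logistic loss plus $\ell_1$ penalty, soft-thresholding prox, $1/L$ step size); the data, penalty weight, and iteration count are illustrative, and this is not SCOPT's code:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
n, d, lam = 200, 20, 0.1

X = rng.standard_normal((n, d))
w_true = soft_threshold(rng.standard_normal(d), 1.0)   # sparse ground truth
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

def objective(w):
    return np.logaddexp(0.0, -y * (X @ w)).sum() + lam * np.abs(w).sum()

L = 0.25 * np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the logistic gradient
w = np.zeros(d)
objs = [objective(w)]
for _ in range(300):
    p = 0.5 * (1.0 - np.tanh(0.5 * y * (X @ w)))   # sigmoid(-y * Xw), overflow-safe
    grad = -(X.T @ (y * p))                        # gradient of the logistic loss
    w = soft_threshold(w - grad / L, lam / L)      # proximal (forward-backward) step
    objs.append(objective(w))

print(f"objective: {objs[0]:.3f} -> {objs[-1]:.3f}")
```

With the $1/L$ step, the composite objective decreases monotonically, which is a handy sanity check for any proximal-gradient implementation.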

• CLASH and Normed Pursuits: Hard thresholding with norm constraints (Matlab)

CLASH (Combinatorial selection and Least Absolute SHrinkage) and Normed Pursuits are sparse signal approximation algorithms that leverage prior information beyond sparsity to obtain approximate solutions to the Lasso problem. These algorithms alter the selection process of the Lasso by exploiting additional signal structure, dubbed a combinatorial sparsity model, and introduce approximate combinatorial selection with provable convergence guarantees.

• Sparse projections onto the simplex (Matlab)

In this snippet of code, we compute sparse projections onto the simplex. Our approach is greedy: we sequentially construct the solution that $(i)$ minimizes the Euclidean distance to a given anchor point, $(ii)$ "lives" on the simplex, and $(iii)$ is $k$-sparse. (The code is to be updated with some demo files.)
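
The released snippet is in Matlab; the greedy idea can be sketched in NumPy as "select the $k$ largest entries, then project that subvector onto the simplex," using the standard sort-based simplex projection (function names and the toy vector below are illustrative):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.flatnonzero(u * np.arange(1, len(v) + 1) > css - 1)[-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sparse_simplex_projection(w, k):
    """Greedy k-sparse projection onto the simplex: keep the k largest
    entries of w, then project that subvector onto the simplex."""
    x = np.zeros_like(w)
    idx = np.argsort(w)[-k:]          # k largest entries of w
    x[idx] = project_simplex(w[idx])
    return x

w = np.array([0.9, -0.2, 0.5, 0.4, 0.05])
x = sparse_simplex_projection(w, 2)
print(x)   # 2-sparse, nonnegative, sums to 1
```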

• Provable deterministic leverage scores sampling (Matlab)

Here, we illustrate in practice a curious empirical phenomenon: “Approximating a matrix by deterministically selecting the subset of its columns with the largest leverage scores results in a good low-rank matrix surrogate.” In this demo, we empirically demonstrate the performance of deterministic leverage score sampling, which often matches or outperforms state-of-the-art techniques.

• Generalized Sparse Additive Models (GSPAM) with interactions in high-dimensions (Matlab)

This software package solves a $d$-dimensional GSPAM instance, where the univariate and bivariate sets of indices ($S_1$ and $S_2$, respectively) are unknown. The code follows the steps of the main algorithm described in the conference paper.

• Expanders and group sparse recovery (Matlab)

A proof of concept for why expanders accelerate the execution of convex solvers for group sparse recovery. The problem setting is group sparse convex norm minimization over linear constraints. The experiments include both synthetic and real data settings.

• Finding the M-PSK vector that maximizes a rank-deficient complex quadratic form (Matlab)

Given a rank-deficient positive semidefinite complex matrix as input, this code computes a polynomial-sized set of candidate M-PSK vectors, among which lies the optimal M-PSK vector that maximizes the rank-deficient quadratic form.

• Multiway (tensor) compressed sensing for sparse and low rank tensors (Matlab)

In this contribution, we consider compressed sensing for sparse and low-rank tensors. More specifically, we consider low-rank tensors synthesized as sums of outer products of sparse loading vectors, and a special class of linear dimensionality-reducing transformations that reduce each mode individually. We prove interesting "oracle" properties showing that it is possible to identify the uncompressed sparse loadings directly from the compressed tensor data. This Matlab demo works as a proof of concept for the main ideas in the paper.