This page lists some potential course project ideas. The goal of the project is to review recent developments in statistical computing and to implement and compare the related methods. Teams of three (3) students will work together toward this goal. Each team must choose one paper from the list below (no two teams may choose the same paper) and submit a project proposal by the due date.
In large-scale optimization, often the objective function or its derivatives can only be estimated. In this case, stochastic methods, which work with unbiased estimates of the gradient, come to the rescue; a toy sketch follows the list below. Recent developments include:
Liang, Tengyuan, and Weijie J. Su. “Statistical inference for the population landscape via moment-adjusted stochastic gradients.” Journal of the Royal Statistical Society, Series B (2019): 431-456.
Ya-Ping Hsieh, Chen Liu, Volkan Cevher. “Finding Mixed Nash Equilibria of Generative Adversarial Networks.” Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2810-2819, 2019. URL: http://proceedings.mlr.press/v97/hsieh19b.html
Bo Dai, Niao He, Hanjun Dai, Le Song. “Provable Bayesian Inference via Particle Mirror Descent.” Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:985-994, 2016. URL: http://proceedings.mlr.press/v51/dai16.html
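To make “estimated derivatives” concrete, here is a minimal sketch of mini-batch stochastic gradient descent for least squares, where each step uses an unbiased gradient estimate from a random mini-batch. The data-generating model, batch size, and step-size schedule are illustrative assumptions, not values from the papers above.

```python
# A minimal sketch of mini-batch stochastic gradient descent (SGD) for
# least squares, where each step uses an unbiased estimate of the gradient
# computed from a random mini-batch. All settings below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a linear model: y = X @ beta_true + noise
n, p = 10_000, 5
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta = np.zeros(p)
batch_size = 32
for t in range(1, 2001):
    idx = rng.integers(0, n, size=batch_size)                # random mini-batch
    grad = X[idx].T @ (X[idx] @ beta - y[idx]) / batch_size  # unbiased gradient estimate
    beta -= grad / np.sqrt(t)                                # decaying step size ~ t^(-1/2)

print(np.linalg.norm(beta - beta_true))  # small: SGD approximately recovers beta_true
```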
In the design of experiments, optimal designs allocate experimental effort so as to optimize a statistical criterion, such as the determinant of the information matrix (D-optimality); a toy illustration follows the list below. Recent algorithmic developments include:
Harman, Radoslav, Lenka Filová, and Peter Richtárik. “A randomized exchange algorithm for computing optimal approximate designs of experiments.” Journal of the American Statistical Association (2019): 1-30.
Yang, Min, Stefanie Biedermann, and Elina Tang. “On optimal designs for nonlinear models: a general and efficient algorithm.” Journal of the American Statistical Association 108.504 (2013): 1411-1420.
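For concreteness, here is a minimal sketch of the classical multiplicative algorithm for computing a D-optimal approximate design on a finite candidate set; exchange-type algorithms such as the one in Harman et al. refine this basic idea. The quadratic-regression model and the grid of candidate points are illustrative assumptions.

```python
# A minimal sketch of the classical multiplicative algorithm for a D-optimal
# approximate design on a finite candidate set. The quadratic-regression
# model and the candidate grid are illustrative assumptions.
import numpy as np

# Candidate regressors f(x) = (1, x, x^2) on a grid over [-1, 1]
xs = np.linspace(-1.0, 1.0, 21)
F = np.column_stack([np.ones_like(xs), xs, xs**2])  # (m, p) candidate matrix
m, p = F.shape

w = np.full(m, 1.0 / m)  # start from the uniform design
for _ in range(1000):
    M = F.T @ (w[:, None] * F)                            # information matrix M(w)
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)  # variance function d(x, w)
    w *= d / p  # multiplicative update; sum(w) stays 1 since sum(w * d) = p

# By the Kiefer-Wolfowitz equivalence theorem, w is D-optimal when max(d) <= p.
# Mass concentrates near x = -1, 0, 1, the known D-optimal support.
print(xs[w > 0.05], np.round(w[w > 0.05], 3))
```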
The total variation (TV) penalty has been popular in image processing since the work of Rudin, L. I., Osher, S., and Fatemi, E. (1992), “Nonlinear total variation based noise removal algorithms,” Physica D, 60, 259–268. It was popularized in statistics under the name “fused lasso” by Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K. (2005), “Sparsity and Smoothness via the Fused Lasso,” Journal of the Royal Statistical Society, Series B, 67, 91–108; and Tibshirani, R. J., and Taylor, J. (2011), “The Solution Path of the Generalized Lasso,” Annals of Statistics, 39, 1335–1371. The TV penalty is becoming increasingly popular in estimation problems beyond regression (a toy denoising sketch follows the list below):
Peter Radchenko, and Gourab Mukherjee. “Convex clustering via $\ell_1$ fusion penalization.” Journal of the Royal Statistical Society, Series B (2017): 1527-1546.
Unser, Michael, Julien Fageot, and John Paul Ward. “Splines are universal solutions of linear inverse problems with generalized TV regularization.” SIAM Review 59.4 (2017): 769-793.
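To make the penalty concrete, here is a minimal ADMM sketch of one-dimensional TV denoising (the fused-lasso signal approximator), which minimizes $\tfrac12\|y-\beta\|_2^2 + \lambda\|D\beta\|_1$ where $D$ takes first differences. The piecewise-constant signal and the values of $\lambda$ and the ADMM parameter $\rho$ are illustrative assumptions, not taken from the papers above.

```python
# A minimal ADMM sketch for one-dimensional TV denoising (the fused-lasso
# signal approximator): minimize 0.5 * ||y - b||^2 + lam * ||D b||_1, where
# D takes first differences. The signal, lam, and rho are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200
truth = np.repeat([0.0, 2.0, -1.0, 1.0], n // 4)  # piecewise-constant signal
y = truth + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)  # first-difference operator, shape (n-1, n)
lam, rho = 1.0, 1.0

b, z, u = y.copy(), D @ y, np.zeros(n - 1)
A = np.eye(n) + rho * D.T @ D   # fixed system matrix for the b-update
for _ in range(300):
    b = np.linalg.solve(A, y + rho * D.T @ (z - u))
    Db = D @ b
    z = np.sign(Db + u) * np.maximum(np.abs(Db + u) - lam / rho, 0.0)  # soft-threshold
    u += Db - z  # scaled dual update

print(np.round(b[25::50], 2))  # approximately the four segment levels 0, 2, -1, 1
```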