Parametric Simplex Method for Sparse Learning
Authors
Abstract
High dimensional sparse learning has imposed a great computational challenge to
large scale data analysis. In this paper, we are interested in a broad class of sparse
learning approaches formulated as linear programs parametrized by a regularization
factor, and solve them by the parametric simplex method (PSM). Our parametric
simplex method offers significant advantages over other competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high precision dual certificate stopping criterion; (3) PSM yields sparse solutions through very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. Particularly, we demonstrate the superiority of PSM over various sparse learning approaches, including Dantzig selector for sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME for sparse precision matrix estimation, sparse differential network estimation, and sparse Linear Programming Discriminant (LPD) analysis. We then provide sufficient conditions under which PSM always outputs sparse solutions such that its computational performance can be significantly boosted. Thorough numerical experiments are provided to demonstrate the outstanding performance of the PSM method.
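For reference, the abstract's first example, the Dantzig selector, has a well-known formulation as a linear program (this standard form is from Candès and Tao's original paper, not spelled out in the abstract above):

```latex
% Dantzig selector: an l1-minimization problem with an l-infinity constraint
\min_{\beta \in \mathbb{R}^p} \|\beta\|_1
\quad \text{subject to} \quad
\|X^\top (y - X\beta)\|_\infty \le \lambda
% Splitting \beta = \beta^+ - \beta^- with \beta^+, \beta^- \ge 0 turns this
% into a linear program in 2p nonnegative variables, with \lambda entering
% only the right-hand side -- the parametric structure PSM exploits.
```

Because the regularization parameter appears only on the right-hand side of the constraints, varying it traces a piecewise-linear family of LPs, which is what makes a parametric simplex treatment natural.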
Notes
Drawbacks of conventional methods
Methods based on a single regularization value are popular, but insufficient, since an appropriate choice is usually not known in advance
One must tune over a reasonable range of candidate values
This becomes very inefficient in high dimensions
Proposed method
Propose a variant of the parametric simplex method (PSM)
Treat the regularization factor as a parameter
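To make the contrast concrete: the conventional approach solves one LP per candidate regularization value, whereas PSM traces all values in one pass. Below is a minimal sketch of a single fixed-lambda solve of the Dantzig selector LP using an off-the-shelf solver (assumes numpy and scipy are available; this is an illustration of the per-value cost, not the paper's PSM implementation):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam  as an LP.

    Split b = b_plus - b_minus with b_plus, b_minus >= 0, so the
    objective sum(b_plus + b_minus) equals ||b||_1.
    """
    n, p = X.shape
    G = X.T @ X            # Gram matrix, p x p
    r = X.T @ y            # correlation vector, length p
    c = np.ones(2 * p)     # l1-norm objective over the split variables
    # |G(b+ - b-) - r| <= lam, written as two stacked inequality blocks.
    A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
    b_ub = np.concatenate([lam + r, lam - r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:]

# Tiny sanity check: with noiseless data and lambda near 0, the
# constraint pins down the least-squares solution, i.e. the true beta.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true
beta_hat = dantzig_selector(X, y, lam=1e-8)
```

A grid search repeats this full solve for every candidate lambda; PSM instead pivots between the optimal bases of neighboring lambda values, which is why it recovers the whole path at roughly the cost of a few simplex iterations.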