Infinitely divisible (无穷可分) | http://blog.sciencenet.cn/u/a3141592653589 | Probability and mathematical statistics, stochastic processes, financial mathematics, actuarial science, big data, machine learning, high-dimensional statistics, financial statistics, mathematical modeling, academic news, book lists

Blog post

A list of highly cited statistics papers on Lasso variable selection (continued)

Viewed 7021 times | 2015-8-14 09:16 | Category: paper exchange

For the previous installment, see http://blog.sciencenet.cn/blog-752541-912555.html
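Since every entry below concerns the lasso's ability to zero out irrelevant coefficients, a minimal self-contained illustration may help readers new to the topic. This is only a sketch in plain NumPy; the simulated data, the penalty level, and the helper name `lasso_cd` are illustrative choices, not taken from any paper on this list.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * 7)  # only 3 active predictors
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual excluding j
            rho = X[:, j] @ r_j / n
            # soft-thresholding: small correlations are set exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

b_hat = lasso_cd(X, y, lam=0.3)
support = np.nonzero(np.abs(b_hat) > 1e-8)[0]
print(support)  # the noise predictors are discarded; the active ones survive
```

The soft-thresholding step is what makes the lasso a variable-selection method rather than just a shrinkage method: coefficients whose marginal correlation with the residual falls below the penalty level are set exactly to zero.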

 

Below 200 citations:
Witten, D. M., Friedman, J. H., & Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4), 892-900. Cited by: 98
Wang, H., & Leng, C. (2008). A note on adaptive group lasso. Computational Statistics & Data Analysis, 52(12), 5277-5286. Cited by: 95
Koenker, R., & Mizera, I. (2004). Penalized triograms: total variation regularization for bivariate smoothing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(1), 145-163. Cited by: 94
Park, M. Y., Hastie, T., & Tibshirani, R. (2007). Averaged gene expressions for regression. Biostatistics, 8(2), 212-227. Cited by: 94
Li, Q., & Lin, N. (2010). The Bayesian elastic net. Bayesian Analysis, 5(1), 151-170. Cited by: 97
Meinshausen, N., Meier, L., & Bühlmann, P. (2012). P-values for high-dimensional regression. Journal of the American Statistical Association. Cited by: 99
Huang, X., & Pan, W. (2003). Linear regression and two-class classification with gene expression data. Bioinformatics, 19(16), 2072-2078. Cited by: 95
Wang, H. (2009). Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association, 104(488), 1512-1524. Cited by: 95
Städler, N., Bühlmann, P., & Van De Geer, S. (2010). ℓ1-penalization for mixture regression models. Test, 19(2), 209-256. Cited by: 93
Zhao, P., & Yu, B. (2007). Stagewise lasso. The Journal of Machine Learning Research, 8, 2701-2726. Cited by: 92
Kozumi, H., & Kobayashi, G. (2011). Gibbs sampling methods for Bayesian quantile regression. Journal of Statistical Computation and Simulation, 81(11), 1565-1578. Cited by: 94
Guan, Y., & Stephens, M. (2011). Bayesian variable selection regression for genome-wide association studies and other large-scale problems. The Annals of Applied Statistics, 1780-1815. Cited by: 97
Marx, B. D. (1996). Iteratively reweighted partial least squares estimation for generalized linear regression. Technometrics, 38(4), 374-381. Cited by: 90
Wand, M. P. (2000). A comparison of regression spline smoothing procedures. Computational Statistics, 15(4), 443-462. Cited by: 88
Bach, F., Jenatton, R., Mairal, J., & Obozinski, G. (2012). Structured sparsity through convex optimization. Statistical Science, 27(4), 450-468. Cited by: 93
Danaher, P., Wang, P., & Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(2), 373-397. Cited by: 96
Khan, J. A., Van Aelst, S., & Zamar, R. H. (2007). Robust linear model selection based on least angle regression. Journal of the American Statistical Association, 102(480), 1289-1299. Cited by: 88
Polson, N. G., & Scott, J. G. (2010). Shrink globally, act locally: Sparse Bayesian regularization and prediction. Bayesian Statistics, 9, 501-538. Cited by: 89
Grömping, U. (2009). Variable importance assessment in regression: linear regression versus random forest. The American Statistician, 63(4). Cited by: 90
Zhang, Y., Li, R., & Tsai, C. L. (2010). Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association, 105(489), 312-323. Cited by: 91
Qiu, P., Zou, C., & Wang, Z. (2010). Nonparametric profile monitoring by mixed effects modeling. Technometrics, 52(3). Cited by: 88
Friedman, J. H. (2012). Fast sparse regression and classification. International Journal of Forecasting, 28(3), 722-738. Cited by: 86
Wang, S., & Zhu, J. (2008). Variable selection for model-based high-dimensional clustering and its application to microarray data. Biometrics, 64(2), 440-448. Cited by: 83
Tibshirani, R., Bien, J., Friedman, J., Hastie, T., Simon, N., Taylor, J., & Tibshirani, R. J. (2012). Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2), 245-266. Cited by: 89
Hastie, T., & Tibshirani, R. (2004). Efficient quadratic regularization for expression arrays. Biostatistics, 5(3), 329-340. Cited by: 82
Li, J., Das, K., Fu, G., Li, R., & Wu, R. (2011). The Bayesian lasso for genome-wide association studies. Bioinformatics, 27(4), 516-523. Cited by: 85
Scott, J. G., & Carvalho, C. M. (2008). Feature-inclusion stochastic search for Gaussian graphical models. Journal of Computational and Graphical Statistics, 17(4). Cited by: 81
Bai, J., & Ng, S. (2010). Instrumental variable estimation in a data rich environment. Econometric Theory, 26(6), 1577-1606. Cited by: 81
Zou, H., & Yuan, M. (2008). The F∞-norm support vector machine. Statistica Sinica, 18, 379-398.
Van de Geer, S., Bühlmann, P., Ritov, Y. A., & Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3), 1166-1202.
Xie, H., & Huang, J. (2009). SCAD-penalized regression in high-dimensional partially linear models. The Annals of Statistics, 673-696.
Castle, J. L., Doornik, J. A., & Hendry, D. F. (2012). Model selection when there are multiple breaks. Journal of Econometrics, 169(2), 239-246.
Liang, H., & Li, R. (2009). Variable selection for partially linear models with measurement errors. Journal of the American Statistical Association, 104(485), 234-248.
Li, F., & Zhang, N. R. (2010). Bayesian variable selection in structured high-dimensional covariate spaces with applications in genomics. Journal of the American Statistical Association, 105(491).
Hoefling, H. (2010). A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4), 984-1006.
Lee, A. B., Nadler, B., & Wasserman, L. (2008). Treelets: an adaptive multi-scale basis for sparse unordered data. The Annals of Applied Statistics, 435-471.
Pötscher, B. M., & Leeb, H. (2009). On the distribution of penalized maximum likelihood estimators: The LASSO, SCAD, and thresholding. Journal of Multivariate Analysis, 100(9), 2065-2082.
Chatterjee, A., & Lahiri, S. N. (2011). Bootstrapping lasso estimators. Journal of the American Statistical Association, 106(494), 608-625.
Xue, L., & Zou, H. (2012). Regularized rank-based estimation of high-dimensional nonparanormal graphical models. The Annals of Statistics, 40(5), 2541-2571.
Liang, H., Liu, X., Li, R., & Tsai, C. L. (2010). Estimation and testing for partially linear single-index models. Annals of statistics, 38(6), 3811.
Bach, F. (2010). Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4, 384-414.
Zhang, C. H., & Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 217-242.
Greenshtein, E. (2006). Best subset selection, persistence in high-dimensional statistical learning and optimization under l1 constraint. The Annals of Statistics, 34(5), 2367-2386.
Huang, J., Ma, S., & Xie, H. (2006). Regularized estimation in the accelerated failure time model with high-dimensional covariates. Biometrics, 62(3), 813-820.
Qian, M., & Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. Annals of statistics, 39(2), 1180.
Zhang, C. H., & Zhang, T. (2012). A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4), 576-593.
Bühlmann, P., & Yu, B. (2006). Sparse boosting. The Journal of Machine Learning Research, 7, 1001-1024.
Lafferty, J., & Wasserman, L. (2008). Rodeo: sparse, greedy nonparametric regression. The Annals of Statistics, 28-63.
Fan, J., Lin, H., & Zhou, Y. (2006). Local partial-likelihood estimation for lifetime data. The Annals of Statistics, 34(1), 290-325.
Johnson, B. A., Lin, D. Y., & Zeng, D. (2008). Penalized estimating functions and variable selection in semiparametric regression models. Journal of the American Statistical Association, 103(482), 672-680.
Harchaoui, Z., & Lévy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. Journal of the American Statistical Association, 105(492).
Breheny, P., & Huang, J. (2009). Penalized methods for bi-level variable selection. Statistics and its interface, 2(3), 369.
Bradic, J., Fan, J., & Wang, W. (2011). Penalized composite quasi‐likelihood for ultrahigh dimensional variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(3), 325-349.
Bien, J., Taylor, J., & Tibshirani, R. (2013). A lasso for hierarchical interactions. The Annals of Statistics, 41(3), 1111-1141.
Li, L., Dennis Cook, R., & Nachtsheim, C. J. (2005). Model‐free variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 285-299.
Cai, J., Fan, J., Li, R., & Zhou, H. (2005). Variable selection for multivariate failure time data. Biometrika, 92(2), 303-316.
Hall, P., & Miller, H. (2009). Using generalized correlation to effect variable selection in very high dimensional problems. Journal of Computational and Graphical Statistics, 18(3).
Caner, M. (2009). Lasso-type GMM estimator. Econometric Theory, 25(01), 270-290.
Tibshirani, R. J., & Taylor, J. (2012). Degrees of freedom in lasso problems. The Annals of Statistics, 40(2), 1198-1232.
Khalili, A., & Chen, J. (2007). Variable selection in finite mixture of regression models. Journal of the american Statistical association, 102(479).
Bien, J., & Tibshirani, R. J. (2011). Sparse estimation of a covariance matrix. Biometrika, 98(4), 807.
Li, R., Zhong, W., & Zhu, L. (2012). Feature screening via distance correlation learning. Journal of the American Statistical Association, 107(499), 1129-1139.
Hsu, N. J., Hung, H. L., & Chang, Y. M. (2008). Subset selection for vector autoregressive processes using lasso. Computational Statistics & Data Analysis, 52(7), 3645-3657.
Meinshausen, N., Rocha, G., & Yu, B. (2007). Discussion: A tale of three cousins: Lasso, L2Boosting and Dantzig. The Annals of Statistics, 2373-2384.
Rothman, A. J., Levina, E., & Zhu, J. (2010). Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4), 947-962.
Li, R., & Lin, D. K. (2002). Data analysis in supersaturated designs. Statistics & Probability Letters, 59(2), 135-144.
Goddard, M. E., Wray, N. R., Verbyla, K., & Visscher, P. M. (2009). Estimating effects and making predictions from genome-wide marker data. Statistical Science, 24(4), 517-529.
Boysen, L., Kempe, A., Liebscher, V., Munk, A., & Wittich, O. (2009). Consistencies and rates of convergence of jump-penalized least squares estimators. The Annals of Statistics, 157-183.
Liu, Y., & Wu, Y. (2012). Variable selection via a combination of the L0 and L1 penalties. Journal of Computational and Graphical Statistics.
McShane, B. B., & Wyner, A. J. (2011). A statistical analysis of multiple temperature proxies: Are reconstructions of surface temperatures over the last 1000 years reliable?. The Annals of Applied Statistics, 5-44.
Schelldorfer, J., Bühlmann, P., & van de Geer, S. (2011). Estimation for high-dimensional linear mixed-effects models using ℓ1-penalization. Scandinavian Journal of Statistics, 38(2), 197-214.
James, G. M., & Radchenko, P. (2009). A generalized Dantzig selector with shrinkage tuning. Biometrika, 96(2), 323-337.
Loubes, J. M., & Van De Geer, S. (2002). Adaptive estimation with soft thresholding penalties. Statistica Neerlandica, 56(4), 453-478.
Paul, D., Bair, E., Hastie, T., & Tibshirani, R. (2008). "Preconditioning" for feature selection and regression in high-dimensional problems. The Annals of Statistics, 1595-1618.
Liu, Y., Zhang, H. H., Park, C., & Ahn, J. (2007). Support vector machines with adaptive Lq penalty. Computational Statistics & Data Analysis, 51(12), 6380-6394.
Yuan, M., Joseph, V. R., & Lin, Y. (2007). An efficient variable selection approach for analyzing designed experiments. Technometrics, 49(4), 430-439.
Cai, T., Tian, L., Wong, P. H., & Wei, L. J. (2011). Analysis of randomized comparative clinical trial data for personalized treatment selections. Biostatistics, 12(2), 270-282.
Zou, C., & Qiu, P. (2012). Multivariate statistical process control using LASSO. Journal of the American Statistical Association.
Shen, X., Pan, W., & Zhu, Y. (2012). Likelihood-based selection and sharp parameter estimation. Journal of the American Statistical Association, 107(497), 223-232.
Radchenko, P., & James, G. M. (2008). Variable inclusion and shrinkage algorithms. Journal of the American Statistical Association, 103(483).
Marra, G., & Wood, S. N. (2011). Practical variable selection for generalized additive models. Computational Statistics & Data Analysis, 55(7), 2372-2387.
Xie, X., & Geng, Z. (2008). A recursive method for structural learning of directed acyclic graphs. The Journal of Machine Learning Research, 9, 459-483.
Wu, Y., Boos, D. D., & Stefanski, L. A. (2012). Controlling variable selection by the addition of pseudovariables. Journal of the American Statistical Association.
Wei, F., & Huang, J. (2010). Consistent group selection in high-dimensional linear regression. Bernoulli, 16(4), 1369. Cited by: 53
Mishchencko, Y., Vogelstein, J. T., & Paninski, L. (2011). A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data. The Annals of Applied Statistics, 1229-1261.
Amato, U., Antoniadis, A., & Pensky, M. (2006). Wavelet kernel penalized estimation for non-equispaced design regression. Statistics and Computing, 16(1), 37-55.
Bühlmann, P. (2013). Statistical significance in high-dimensional linear models. Bernoulli, 19(4), 1212-1242.
Cook, R. D., & Forzani, L. (2008). Principal fitted components for dimension reduction in regression. Statistical Science, 23(4), 485-501.
Li, Q., Xi, R., & Lin, N. (2010). Bayesian regularized quantile regression. Bayesian Analysis, 5(3), 533-556.
Ni, L., Cook, R. D., & Tsai, C. L. (2005). A note on shrinkage sliced inverse regression. Biometrika, 92(1), 242-247.
Clarke, B. (2003). Comparing Bayes model averaging and stacking when model approximation error cannot be ignored. The Journal of Machine Learning Research, 4, 683-712.
Tutz, G., & Ulbricht, J. (2009). Penalized regression with correlation-based penalty. Statistics and Computing, 19(3), 239-253.
Shen, X., & Huang, H. C. (2006). Optimal model assessment, selection, and combination. Journal of the American Statistical Association, 101(474), 554-568.
Vansteelandt, S., Bekaert, M., & Claeskens, G. (2012). On model selection and model misspecification in causal inference. Statistical methods in medical research, 21(1), 7-30.
She, Y. (2009). Thresholding-based iterative selection procedures for model selection and shrinkage. Electronic Journal of Statistics, 3, 384-415.
Hebiri, M., & Van De Geer, S. (2011). The Smooth-Lasso and other ℓ1+ ℓ2-penalized methods. Electronic Journal of Statistics, 5, 1184-1226.
Li, R., & Lin, D. K. (2003). Analysis methods for supersaturated design: some comparisons. Journal of Data Science, 1(3), 249-260.
Hautsch, N., Schaumburg, J., & Schienle, M. (2014). Financial network systemic risk contributions. Review of Finance, rfu010.
Carbonetto, P., & Stephens, M. (2012). Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies. Bayesian Analysis, 7(1), 73-108.
Balakrishnan, S., & Madigan, D. (2008). Algorithms for sparse linear classifiers in the massive data setting. The Journal of Machine Learning Research, 9, 313-337.
Xie, B., Pan, W., & Shen, X. (2008). Penalized model-based clustering with cluster-specific diagonal covariance matrices and grouped variables. Electronic journal of statistics, 2, 168.
Chesneau, C., & Hebiri, M. (2008). Some theoretical results on the grouped variables Lasso. Mathematical Methods of Statistics, 17(4), 317-326.
Fan, J., Lv, J., & Qi, L. (2011). Sparse high dimensional models in economics. Annual review of economics, 3, 291.
Li, L., & Nachtsheim, C. J. (2006). Sparse sliced inverse regression. Technometrics, 48(4).
Wang, S., Nan, B., Zhu, N., & Zhu, J. (2009). Hierarchically penalized Cox regression with grouped variables. Biometrika, 96(2), 307-322.
Li, P., Chen, J., & Marriott, P. (2009). Non-finite Fisher information and homogeneity: an EM approach. Biometrika, asp011.
Chen, Y. H., Chatterjee, N., & Carroll, R. J. (2009). Shrinkage estimators for robust and efficient inference in haplotype-based case-control studies. Journal of the American Statistical Association, 104(485), 220-233.
Meinshausen, N. (2004, May). Consistent neighbourhood selection for sparse high-dimensional graphs with the Lasso. Seminar für Statistik, Eidgenössische Technische Hochschule (ETH), Zürich.
Mai, Q., Zou, H., & Yuan, M. (2012). A direct approach to sparse discriminant analysis in ultra-high dimensions. Biometrika, asr066.
Yuan, M., Joseph, V. R., & Zou, H. (2009). Structured variable selection and estimation. The Annals of Applied Statistics, 1738-1757.
Belitz, C., & Lang, S. (2008). Simultaneous selection of variables and smoothing parameters in structured additive regression models. Computational Statistics & Data Analysis, 53(1), 61-81.
Sardy, S., Antoniadis, A., & Tseng, P. (2004). Automatic smoothing with wavelets for a wide class of distributions. Journal of Computational and Graphical Statistics, 13(2), 399-421. Cited by: 45

Liu, H., & Zhang, J. (2009). Estimation consistency of the group lasso and its applications. In International Conference on Artificial Intelligence and Statistics (pp. 376-383).
Lu, Y., Zhou, Y., Qu, W., Deng, M., & Zhang, C. (2011). A Lasso regression model for the construction of microRNA-target regulatory networks. Bioinformatics, 27(17), 2406-2413.
Zhou, N., & Zhu, J. (2010). Group variable selection via a hierarchical lasso and its oracle property. arXiv preprint arXiv:1006.2871.
Hans, C. (2010). Model uncertainty and variable selection in Bayesian lasso regression. Statistics and Computing, 20(2), 221-229.
Pötscher, B. M., & Schneider, U. (2009). On the distribution of the adaptive LASSO estimator. Journal of Statistical Planning and Inference, 139(8), 2775-2790.
Zou, H. (2008). A note on path-based variable selection in the penalized proportional hazards model. Biometrika, 95(1), 241-247.
van de Geer, S., Bühlmann, P., & Zhou, S. (2011). The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso). Electronic Journal of Statistics, 5, 688-749.
Shi, W., Wahba, G., Wright, S., Lee, K., Klein, R., & Klein, B. (2008). LASSO-Patternsearch algorithm with application to ophthalmology and genomic data. Statistics and its Interface, 1(1), 137.
Javanmard, A., & Montanari, A. (2014). Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. IEEE Transactions on Information Theory, 60(10), 6522-6554.
Zhou, S., van de Geer, S., & Bühlmann, P. (2009). Adaptive Lasso for high dimensional regression and Gaussian graphical modeling. arXiv preprint arXiv:0903.2515.
Genovese, C. R., Jin, J., Wasserman, L., & Yao, Z. (2012). A comparison of the lasso and marginal regression. The Journal of Machine Learning Research, 13(1), 2107-2143.
Bunea, F. (2008). Consistent selection via the Lasso for high dimensional approximating regression models. In Pushing the limits of contemporary statistics: contributions in honor of Jayanta K. Ghosh (pp. 122-137). Institute of Mathematical Statistics.
Zou, C., Ning, X., & Tsung, F. (2012). LASSO-based multivariate linear profile monitoring. Annals of operations research, 192(1), 3-19.
Hansen, N. R., Reynaud-Bouret, P., & Rivoirard, V. (2015). Lasso and probabilistic inequalities for multivariate point processes. Bernoulli, 21(1), 83-143. Cited by: 31
Alhamzawi, R., Yu, K., & Benoit, D. F. (2012). Bayesian adaptive Lasso quantile regression. Statistical Modelling, 12(3), 279-297.
Kim, S., & Xing, E. P. (2012). Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. The Annals of Applied Statistics, 6(3), 1095-1117.
Massart, P., & Meynet, C. (2011). The Lasso as an ℓ1-ball model selection procedure. Electronic Journal of Statistics, 5, 669-687.
Radchenko, P., & James, G. M. (2011). Improved variable selection with forward-lasso adaptive shrinkage. The Annals of Applied Statistics, 5(1), 427-448.
Nardi, Y., & Rinaldo, A. (2011). Autoregressive process modeling via the lasso procedure. Journal of Multivariate Analysis, 102(3), 528-549.
Simon, N., & Tibshirani, R. (2012). Standardization and the group lasso penalty. Statistica Sinica, 22(3), 983.
Fraley, C., & Hesterberg, T. (2009). Least angle regression and LASSO for large datasets. Statistical Analysis and Data Mining: The ASA Data Science Journal, 1(4), 251-259.
Hebiri, M., & Lederer, J. (2013). How correlations influence Lasso prediction. IEEE Transactions on Information Theory, 59(3), 1846-1854.

Cho, H., & Fryzlewicz, P. (2012). High dimensional variable selection via tilting. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3), 593-622. (the tilting method)

Lykou, A., & Whittaker, J. (2010). Sparse CCA using a Lasso with positivity constraints. Computational Statistics & Data Analysis, 54(12), 3144-3157.
Liu, J., & Ye, J. (2010). Fast overlapping group lasso. arXiv preprint arXiv:1009.0306.
Zhao, Y., Ogden, R. T., & Reiss, P. T. (2012). Wavelet-based LASSO in functional linear regression. Journal of Computational and Graphical Statistics, 21(3), 600-617.
Zou, C., Jiang, W., & Tsung, F. (2012). A LASSO-based diagnostic framework for multivariate statistical process control. Technometrics.
Lambert-Lacroix, S., & Zwald, L. (2011). Robust regression through the Huber’s criterion and adaptive lasso penalty. Electronic Journal of Statistics, 5, 1015-1053.
Kim, J., Kim, Y., & Kim, Y. (2012). A gradient-based optimization algorithm for lasso. Journal of Computational and Graphical Statistics.
Charbonnier, C., Chiquet, J., & Ambroise, C. (2010). Weighted-LASSO for structured network inference from time course data. Statistical applications in genetics and molecular biology, 9(1).
Lee, J. D., Sun, D. L., Sun, Y., & Taylor, J. E. (2013). Exact post-selection inference with the lasso. arXiv preprint arXiv:1311.6238.
Kamarianakis, Y., Shen, W., & Wynter, L. (2012). Real‐time road traffic forecasting using regime‐switching space‐time models and adaptive LASSO. Applied Stochastic Models in Business and Industry, 28(4), 297-315.
Huang, J., Ma, S., & Zhang, C. H. (2008). The iterated lasso for high–dimensional logistic regression. The University of Iowa Department of Statistical and Actuarial Science Technical Report, (392).
Huang, H. C., Hsu, N. J., Theobald, D. M., & Breidt, F. J. (2010). Spatial LASSO with applications to GIS model selection. Journal of Computational and Graphical Statistics, 19(4), 963-983.
Johnson, B. A. (2009). On lasso for censored data. Electronic Journal of statistics, 3, 485-506.
Chatterjee, S., Steinhaeuser, K., Banerjee, A., Chatterjee, S., & Ganguly, A. R. (2012, April). Sparse Group Lasso: Consistency and Climate Applications. In SDM (pp. 47-58).
Leng, C., & Ma, S. (2007). Path consistent model selection in additive risk model via Lasso. Statistics in medicine, 26(20), 3753-3770.
Kato, T., & Uemura, M. (2012). Period Analysis using the Least Absolute Shrinkage and Selection Operator (Lasso). Publications of the Astronomical Society of Japan, 64(6), 122.
Leng, C., Tran, M. N., & Nott, D. (2014). Bayesian adaptive lasso. Annals of the Institute of Statistical Mathematics, 66(2), 221-244.
Xu, J., & Ying, Z. (2010). Simultaneous estimation and variable selection in median regression using Lasso-type penalty. Annals of the Institute of Statistical Mathematics, 62(3), 487-514.
Wang, D., Eskridge, K. M., & Crossa, J. (2011). Identifying QTLs and epistasis in structured plant populations using adaptive mixed LASSO. Journal of Agricultural, Biological, and Environmental Statistics, 16(2), 170-184.
Chatterjee, A., & Lahiri, S. N. (2013). Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap. The Annals of Statistics, 41(3), 1232-1259. Cited by: 19
Kalouptsidi, M. (2014). Time to build and fluctuations in bulk shipping. The American Economic Review, 104(2), 564-608.
Meinshausen, N. (2008). A note on the Lasso for Gaussian graphical model selection. Statistics & Probability Letters, 78(7), 880-884.
Huang, J., Sun, T., Ying, Z., Yu, Y., & Zhang, C. H. (2013). Oracle inequalities for the lasso in the Cox model. Annals of statistics, 41(3), 1142.
Percival, D. (2012). Theoretical properties of the overlapping groups lasso. Electronic Journal of Statistics, 6, 269-288.
Reid, S., Tibshirani, R., & Friedman, J. (2013). A study of error variance estimation in lasso regression. arXiv preprint arXiv:1311.5274.
Chang, C., & Tsay, R. S. (2010). Estimation of covariance matrix via the sparse Cholesky factor with lasso. Journal of Statistical Planning and Inference, 140(12), 3858-3873.
Chatterjee, A., & Lahiri, S. (2010). Asymptotic properties of the residual bootstrap for lasso estimators. Proceedings of the American Mathematical Society, 138(12), 4497-4509.
Kato, K. (2011). Group Lasso for high dimensional sparse quantile regression models. arXiv preprint arXiv:1103.1458.
Chiquet, J., Grandvalet, Y., & Charbonnier, C. (2012). Sparsity with sign-coherent groups of variables via the cooperative-lasso. The Annals of Applied Statistics, 6(2), 795-830.
Zeng, P., He, T., & Zhu, Y. (2012). A Lasso-type approach for estimation and variable selection in single index models. Journal of Computational and Graphical Statistics, 21(1), 92-109.
Biswas, S., & Lin, S. (2012). Logistic Bayesian LASSO for Identifying Association with Rare Haplotypes and Application to Age‐Related Macular Degeneration. Biometrics, 68(2), 587-597.
Ren, Y., & Zhang, X. (2010). Subset selection for vector autoregressive processes via adaptive Lasso. Statistics & probability letters, 80(23), 1705-1712.
Silver, M., Montana, G., & Alzheimer's Disease Neuroimaging Initiative. (2012). Fast identification of biological pathways associated with a quantitative trait using group lasso with overlaps. Statistical applications in genetics and molecular biology, 11(1), 1-43.
Gefang, D. (2014). Bayesian doubly adaptive elastic-net Lasso for VAR shrinkage. International Journal of Forecasting, 30(1), 1-11.
Ahmed, S. E., Hossain, S., & Doksum, K. A. (2012). LASSO and shrinkage estimation in Weibull censored regression models. Journal of Statistical Planning and Inference, 142(6), 1273-1284.
Gao, X., & Huang, J. (2010). Asymptotic analysis of high-dimensional lad regression with lasso. Statistica Sinica, 20(4), 1485.
Alquier, P. (2008). Lasso, iterative feature selection and the correlation selector: Oracle inequalities and numerical performances. Electronic Journal of Statistics, 2, 1129-1152.
Sequential Lasso cum EBIC for feature selection with ultra-high dimensional feature space. Journal of the American Statistical Association, 109(507), 1229-1240.
Lykou, A., & Ntzoufras, I. (2013). On Bayesian lasso variable selection and the specification of the shrinkage parameter. Statistics and Computing, 23(3), 361-390. Cited by: 10
Wagener, J., & Dette, H. (2012). Bridge estimators and the adaptive Lasso under heteroscedasticity. Mathematical Methods of Statistics, 21(2), 109-126.
Tian, G. L., Tang, M. L., Fang, H. B., & Tan, M. (2008). Efficient methods for estimating constrained parameters with applications to regularized (lasso) logistic regression. Computational statistics & data analysis, 52(7), 3528-3542.
Tateishi, S., Matsui, H., & Konishi, S. (2010). Nonlinear regression modeling via the lasso-type regularization. Journal of Statistical Planning and Inference, 140(5), 1125-1134.
Osborne, M. R., & Turlach, B. A. (2011). A homotopy algorithm for the quantile regression lasso and related piecewise linear problems. Journal of Computational and Graphical Statistics, 20(4).
Liu, J., Huang, J., Ma, S., & Wang, K. (2013). Incorporating group correlations in genome-wide association studies using smoothed group Lasso. Biostatistics, 14(2), 205-219.
Arslan, O. (2012). Weighted LAD-LASSO method for robust parameter estimation and variable selection in regression. Computational Statistics & Data Analysis, 56(6), 1952-1965.
Vidaurre, D., Bielza, C., & Larrañaga, P. (2012). Lazy lasso for local regression. Computational Statistics, 27(3), 531-550.
Guo, R., Zhu, H., Chow, S. M., & Ibrahim, J. G. (2012). Bayesian lasso for semiparametric structural equation models. Biometrics, 68(2), 567-577.
Jia, J., Rohe, K., & Yu, B. (2010). The lasso under heteroscedasticity. arXiv preprint arXiv:1011.1026.
Foster, S. D., Verbyla, A. P., & Pitchford, W. S. (2008). A random model approach for the LASSO. Computational Statistics, 23(2), 217-233.
Wang, H., Zou, G., & Wan, A. T. (2013). Adaptive LASSO for varying-coefficient partially linear measurement error models. Journal of Statistical Planning and Inference, 143(1), 40-54.
van de Geer, S., Buhlmann, P., & Zhou, S. (2010). The adaptive and the thresholded Lasso for potentially misspecified models. arXiv preprint arXiv:1001.5176.
Huang, F. (2003). Prediction error property of the lasso estimator and its generalization. Australian & New Zealand journal of statistics, 45(2), 217-228.
Witten, D., & Friedman, J. (2011). A fast screening rule for the graphical lasso. Journal of Computational and Graphical Statistics, to appear.
Foster, S. D., Verbyla, A. P., & Pitchford, W. S. (2009). Estimation, prediction and inference for the LASSO random effects model. Australian & New Zealand Journal of Statistics, 51(1), 43-61.
Peterson, C., Vannucci, M., Karakas, C., Choi, W., Ma, L., & Maletić-Savatić, M. (2013). Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors. Statistics and its Interface, 6(4), 547.
Meynet, C. (2013). An ℓ 1-oracle inequality for the Lasso in finite mixture Gaussian regression models. ESAIM: Probability and Statistics, 17, 650-671.
De Castro, Y. (2013). A remark on the lasso and the dantzig selector. Statistics & Probability Letters, 83(1), 304-314.
Bergersen, L. C., Glad, I. K., & Lyng, H. (2011). Weighted lasso with data integration. Statistical applications in genetics and molecular biology, 10(1), 1-29.
Frank, L. E., & Heiser, W. J. (2008). Feature selection in Feature Network Models: Finding predictive subsets of features with the Positive Lasso. British Journal of Mathematical and Statistical Psychology, 61(1), 1-27.
Park, H., & Sakaori, F. (2013). Lag weighted lasso for time series model. Computational Statistics, 28(2), 493-504.
Masarotto, G., & Varin, C. (2012). The ranking lasso and its application to sport tournaments. The Annals of Applied Statistics, 6(4), 1949-1970.
Wagener, J., & Dette, H. (2013). The adaptive lasso in high-dimensional sparse heteroscedastic models. Mathematical Methods of Statistics, 22(2), 137-154.
Hossain, S., & Ahmed, E. (2012). Shrinkage and penalty estimators of a Poisson regression model. Australian & New Zealand Journal of Statistics, 54(3), 359-373. Cited by: 6
Fang, Z., & Meinshausen, N. (2012). LASSO isotone for high-dimensional additive isotonic regression. Journal of Computational and Graphical Statistics, 21(1), 72-91.
Sampson, J. N., Chatterjee, N., Carroll, R. J., & Müller, S. (2013). Controlling the local false discovery rate in the adaptive Lasso. Biostatistics, kxt008.
Hirose, K., & Konishi, S. (2012). Variable selection via the weighted group lasso for factor analysis models. Canadian Journal of Statistics, 40(2), 345-361.
Lu, W., Goldberg, Y., & Fine, J. P. (2012). On the robustness of the adaptive lasso to model misspecification. Biometrika, 99(3), 717-731.
Li, J., & Gu, M. (2012). Adaptive LASSO for general transformation models with right censored data. Computational Statistics & Data Analysis, 56(8), 2583-2597.
Alquier, P., & Hebiri, M. (2012). Transductive versions of the LASSO and the Dantzig Selector. Journal of Statistical Planning and Inference, 142(9), 2485-2500.
Chen, K., & Chan, K. S. (2011). Subset ARMA selection via the adaptive Lasso. Statistics and Its Interface, 4, 197-205.
Chatterjee, A., & Lahiri, S. N. (2011). Strong consistency of Lasso estimators. Sankhya A, 73(1), 55-78.
Massart, P., & Meynet, C. (2012, January). Some rates of convergence for the selected Lasso estimator. In Algorithmic learning theory (pp. 17-33). Springer Berlin Heidelberg.
Gupta, S. (2012). A note on the asymptotic distribution of LASSO estimator for correlated data. Sankhya A, 74(1), 10-28.
Ye, F., & Zhang, C. H. (2009). Rate minimaxity of the lasso and Dantzig estimators. Technical report, Department of Statistics and Biostatistics, Rutgers University.
Tran, M. N., Nott, D. J., & Leng, C. (2012). The predictive lasso. Statistics and Computing, 22(5), 1069-1084.
Sabbe, N., Thas, O., & Ottoy, J. P. (2013). EMLasso: logistic lasso with missing data. Statistics in Medicine, 32(18), 3143-3157.
Hussami, N., & Tibshirani, R. (2013). A Component Lasso. arXiv preprint arXiv:1311.4472.
Benoit, D. F., Alhamzawi, R., & Yu, K. (2013). Bayesian lasso binary quantile regression. Computational Statistics, 28(6), 2861-2873.
Wang, X., & Song, L. (2011). Adaptive Lasso Variable Selection for the Accelerated Failure Models. Communications in Statistics-Theory and Methods, 40(24), 4372-4386.
Wu, T. T. (2013). Lasso penalized semiparametric regression on high-dimensional recurrent event data via coordinate descent. Journal of Statistical Computation and Simulation, 83(6), 1145-1155.

2014
Fan, J., Xue, L., & Zou, H. (2014). Strong oracle optimality of folded concave penalized estimation. The Annals of Statistics, 42(3), 819.
Wang, Z., Liu, H., & Zhang, T. (2014). Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6), 2164.
Allen, G. I., Grosenick, L., & Taylor, J. (2014). A generalized least-square matrix decomposition. Journal of the American Statistical Association, 109(505), 145-159.
Pakman, A., & Paninski, L. (2014). Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. Journal of Computational and Graphical Statistics, 23(2), 518-542.
Viallon, V., Lambert-Lacroix, S., Hoefling, H., & Picard, F. (2014). On the robustness of the generalized fused lasso to prior specifications. Statistics and Computing, 1-17.
Bühlmann, P., Kalisch, M., & Meier, L. (2014). High-dimensional statistics with a view toward applications in biology. Annual Review of Statistics and Its Application, 1, 255-278.
Cuevas, A. (2014). A partial overview of the theory of statistics with functional data. Journal of Statistical Planning and Inference, 147, 1-23.
Lv, J., & Liu, J. S. (2014). Model selection principles in misspecified models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 141-167.
Chi, E. C., & Lange, K. (2014). Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, (just-accepted), 00-00.
Ročková, V., & George, E. I. (2014). EMVS: The EM approach to Bayesian variable selection. Journal of the American Statistical Association, 109(506), 828-846.
Schelldorfer, J., Meier, L., & Bühlmann, P. (2014). GLMMLasso: an algorithm for high-dimensional generalized linear mixed models using ℓ1-penalization. Journal of Computational and Graphical Statistics, 23(2), 460-477.
Groll, A., & Tutz, G. (2014). Variable selection for generalized linear mixed models by L1-penalized estimation. Statistics and Computing, 24(2), 137-154.
Yao, Y., & Lee, Y. (2014). Another look at linear programming for feature selection via methods of regularization. Statistics and Computing, 24(5), 885-905.
Kim, H. H., & Swanson, N. R. (2014). Forecasting financial and macroeconomic variables using data reduction methods: New empirical evidence. Journal of Econometrics, 178, 352-367.
Ciuperca, G. (2014). Model selection by LASSO methods in a change-point model. Statistical Papers, 55(2), 349-374.
Vincent, M., & Hansen, N. R. (2014). Sparse group lasso and high dimensional multinomial classification. Computational Statistics & Data Analysis, 71, 771-786.
Yang, Y., & Zou, H. (2014). A fast unified algorithm for solving group-lasso penalized learning problems. Statistics and Computing, 1-13.
Xu, H. K. (2014). Properties and iterative methods for the Lasso and its variants. Chinese Annals of Mathematics, Series B, 35(3), 501-518.
Arribas-Gil, A., Bertin, K., Meza, C., & Rivoirard, V. (2014). LASSO-type estimators for semiparametric nonlinear mixed-effects models estimation. Statistics and Computing, 24(3), 443-460.
Chretien, S., & Darses, S. (2014). Sparse recovery with unknown variance: a LASSO-type approach. Information Theory, IEEE Transactions on, 60(7), 3970-3988.
Leng, C., Tran, M. N., & Nott, D. (2014). Bayesian adaptive lasso. Annals of the Institute of Statistical Mathematics, 66(2), 221-244.
Bühlmann, P., & Mandozzi, J. (2014). High-dimensional variable screening and bias in subsequent inference, with an empirical comparison. Computational Statistics, 29(3-4), 407-430.
Efron, B. (2014). Estimation and accuracy after model selection. Journal of the American Statistical Association, 109(507), 991-1007.
Caner, M., & Zhang, H. H. (2014). Adaptive elastic net for generalized methods of moments. Journal of Business & Economic Statistics, 32(1), 30-47.
Ke, T., Jin, J., & Fan, J. (2014). Covariance assisted screening and estimation. The Annals of Statistics, 42(6), 2202.
Wainwright, M. J. (2014). Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1, 233-253.
Covas, F. B., Rump, B., & Zakrajšek, E. (2014). Stress-testing US bank holding companies: A dynamic panel quantile regression approach. International Journal of Forecasting, 30(3), 691-713.
Baraud, Y., Giraud, C., & Huet, S. (2014). Estimator selection in the Gaussian setting. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques (Vol. 50, No. 3, pp. 1092-1119). Institut Henri Poincaré.
Kong, S., & Nan, B. (2014). Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso. Statistica Sinica, 24(1), 25-42.
Fan, J., Fan, Y., & Barut, E. (2014). Adaptive robust variable selection. The Annals of Statistics, 42(1), 324.
Mohan, K., London, P., Fazel, M., Witten, D., & Lee, S. I. (2014). Node-based learning of multiple Gaussian graphical models. The Journal of Machine Learning Research, 15(1), 445-488.
Gefang, D. (2014). Bayesian doubly adaptive elastic-net Lasso for VAR shrinkage. International Journal of Forecasting, 30(1), 1-11.
Liu, J., Li, R., & Wu, R. (2014). Feature selection for varying coefficient models with ultrahigh-dimensional covariates. Journal of the American Statistical Association, 109(505), 266-274.
Bhattacharya, A., Pati, D., Pillai, N. S., & Dunson, D. B. (2014). Dirichlet-Laplace priors for optimal shrinkage. Journal of the American Statistical Association, (just-accepted), 00-00.
van der Pas, S. L., Kleijn, B. J. K., & van der Vaart, A. W. (2014). The horseshoe estimator: Posterior concentration around nearly black vectors. Electronic Journal of Statistics, 8(2), 2585-2618.
Fan, J., & Liao, Y. (2014). Endogeneity in high dimensions. The Annals of Statistics, 42(3), 872.
Fan, J., Ma, Y., & Dai, W. (2014). Nonparametric independence screening in sparse ultra-high-dimensional varying coefficient models. Journal of the American Statistical Association, 109(507), 1270-1284.
Bühlmann, P., Peters, J., & Ernest, J. (2014). CAM: Causal additive models, high-dimensional order search and penalized regression. The Annals of Statistics, 42(6), 2526-2556.
Belloni, A., Chernozhukov, V., & Hansen, C. (2014). High-dimensional methods and inference on structural and treatment effects. The Journal of Economic Perspectives, 29-50.
Lange, K., Papp, J. C., Sinsheimer, J. S., & Sobel, E. M. (2014). Next generation statistical genetics: Modeling, penalization, and optimization in high-dimensional data. Annual Review of Statistics and Its Application, 1(1), 279.
Qian, J., & Su, L. (2014). Shrinkage estimation of common breaks in panel data models via adaptive group fused Lasso. Available at SSRN 2417560.
Homrighausen, D., & McDonald, D. J. (2014). Leave-one-out cross-validation is risk consistent for lasso. Machine Learning, 97(1-2), 65-78.
Hong, Z., & Lian, H. (2011). Inference of genetic networks from time course expression data using functional regression with lasso penalty. Communications in Statistics-Theory and Methods, 40(10), 1768-1779.
Lin, J., & Li, S. (2014). Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. Applied and Computational Harmonic Analysis, 37(1), 126-139.
Zhang, T., & Zou, H. (2014). Sparse precision matrix estimation via lasso penalized D-trace loss. Biometrika, 101(1), 103-120.
Curtis, S. M., Banerjee, S., & Ghosal, S. (2014). Fast Bayesian model assessment for nonparametric additive regression. Computational Statistics & Data Analysis, 71, 347-358.
Narisetty, N. N., & He, X. (2014). Bayesian variable selection with shrinking and diffusing priors. The Annals of Statistics, 42(2), 789-817.
Luo, S., & Chen, Z. (2014). Sequential Lasso cum EBIC for feature selection with ultra-high dimensional feature space. Journal of the American Statistical Association, 109(507), 1229-1240.
Picchini, U. (2014). Inference for SDE models via approximate Bayesian computation. Journal of Computational and Graphical Statistics, 23(4), 1080-1100.
van de Geer, S. (2014). Weakly decomposable regularization penalties and structured sparsity. Scandinavian Journal of Statistics, 41(1), 72-86.
Chavez-Demoulin, V., Embrechts, P., & Sardy, S. (2014). Extreme-quantile tracking for financial time series. Journal of Econometrics, 181(1), 44-52.
Li, Y., Dicker, L., & Zhao, S. D. (2014). The Dantzig selector for censored linear regression models. Statistica Sinica, 24(1), 251.
Yen, Y. M., & Yen, T. J. (2014). Solving norm constrained portfolio optimization via coordinate-wise descent algorithms. Computational Statistics & Data Analysis, 76, 737-759.
Alquier, P., Friel, N., Everitt, R., & Boland, A. (2014). Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels. Statistics and Computing, 1-19.
Fastrich, B., Paterlini, S., & Winker, P. (2014). Cardinality versus q-norm constraints for index tracking. Quantitative Finance, 14(11), 2019-2032.
Zeng, P., Wei, Y., Zhao, Y., Liu, J., Liu, L., Zhang, R., ... & Chen, F. (2014). Variable selection approach for zero-inflated count data via adaptive lasso. Journal of Applied Statistics, 41(4), 879-894.
Zhou, H., & Wu, Y. (2014). A generic path algorithm for regularized statistical estimation. Journal of the American Statistical Association, 109(506), 686-699.

Zhao, Y., Chen, H., & Ogden, R. T. (2014). Wavelet-based weighted LASSO and screening approaches in functional linear regression. Journal of Computational and Graphical Statistics, (just-accepted), 00-00.
Kundu, S., & Dunson, D. B. (2014). Bayes variable selection in semiparametric linear models. Journal of the American Statistical Association, 109(505), 437-447.

Zhou, J., Bhattacharya, A., Herring, A. H., & Dunson, D. B. (2014). Bayesian factorizations of big sparse tensors. Journal of the American Statistical Association, (just-accepted), 00-00.
Zhao, W., Zhang, R., Liu, J., & Lv, Y. (2014). Robust and efficient variable selection for semiparametric partially linear varying coefficient model based on modal regression. Annals of the Institute of Statistical Mathematics, 66(1), 165-191.
Meinshausen, N. (2014). Group bound: confidence intervals for groups of variables in sparse high dimensional regression without assumptions on the design. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Zhang, J., Wang, X., Yu, Y., & Gai, Y. (2014). Estimation and variable selection in partial linear single index models with error-prone linear covariates. Statistics, 48(5), 1048-1070.
Oelker, M. R., Gertheiss, J., & Tutz, G. (2014). Regularization and model selection with categorical predictors and effect modifiers in generalized linear models. Statistical Modelling, 14(2), 157-177.
Chatterjee, S. (2014). A new perspective on least squares under convex constraint. The Annals of Statistics, 42(6), 2340-2381.
Hao, N., & Zhang, H. H. (2014). Interaction Screening for Ultrahigh-Dimensional Data. Journal of the American Statistical Association, 109(507), 1285-1301.
Wen, X. (2014). Bayesian model selection in complex linear systems, as illustrated in genetic association studies. Biometrics, 70(1), 73-83.
Lin, W., Shi, P., Feng, R., & Li, H. (2014). Variable selection in regression with compositional covariates. Biometrika, asu031.
McKeague, I. W., & Qian, M. (2014). Estimation of treatment policies based on functional predictors. Statistica Sinica, 24(3), 1461.
Zheng, Z., Fan, Y., & Lv, J. (2014). High dimensional thresholded regression and shrinkage effect. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(3), 627-649.
Cheng, M. Y., Honda, T., Li, J., & Peng, H. (2014). Nonparametric independence screening and structure identification for ultra-high dimensional longitudinal data. The Annals of Statistics, 42(5), 1819-1849.
Belloni, A., Chernozhukov, V., & Kato, K. (2014). Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems. Biometrika, asu056.
Milanzi, E., Alonso, A., Buyck, C., Molenberghs, G., & Bijnens, L. (2014). A permutational-splitting sample procedure to quantify expert opinion on clusters of chemical compounds using high-dimensional data. The Annals of Applied Statistics, 8(4), 2319-2335.
Rashid, N., Sun, W., & Ibrahim, J. G. (2014). Some Statistical Strategies for DAE-seq Data Analysis: Variable Selection and Modeling Dependencies Among Observations. Journal of the American Statistical Association, 109(505), 78-94.
Fan, Y., Foutz, N., James, G. M., & Jank, W. (2014). Functional response additive model estimation with online virtual stock markets. The Annals of Applied Statistics, 8(4), 2435-2460.
Song, R., Yi, F., & Zou, H. (2014). On varying-coefficient independence screening for high-dimensional varying-coefficient models. Statistica Sinica, 24(4), 1735.
Wu, H., Lu, T., Xue, H., & Liang, H. (2014). Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling. Journal of the American Statistical Association, 109(506), 700-716.
Zou, C., Yin, G., Feng, L., & Wang, Z. (2014). Nonparametric maximum likelihood approach to multiple change-point problems. The Annals of Statistics, 42(3), 970-1002.
Wang, X., Nan, B., Zhu, J., & Koeppe, R. (2014). Regularized 3D functional regression for brain image data via Haar wavelets. The Annals of Applied Statistics, 8(2), 1045-1064.
Lai, R. C., Hannig, J., & Lee, T. C. (2014). Generalized fiducial inference for ultrahigh dimensional regression. Journal of the American Statistical Association, (just-accepted), 00-00.
Kaufman, S., & Rosset, S. (2014). When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples. Biometrika, 101(4), 771-784.
Fan, Y., & Lv, J. (2014). Asymptotic properties for combined L1 and concave regularization. Biometrika, 101(1), 57-70.
Li, J., Zhong, W., Li, R., & Wu, R. (2014). A fast algorithm for detecting gene–gene interactions in genome-wide association studies. The Annals of Applied Statistics, 8(4), 2292-2318.
Marchetti, Y., & Zhou, Q. (2014). Solution path clustering with adaptive concave penalty. Electronic Journal of Statistics, 8(1), 1569-1603.
Stefanski, L. A., Wu, Y., & White, K. (2014). Variable selection in nonparametric classification via measurement error model selection likelihoods. Journal of the American Statistical Association, 109(506), 574-589.
Jiang, B., & Liu, J. S. (2014). Variable selection for general index models via sliced inverse regression. The Annals of Statistics, 42(5), 1751-1786.
Jansen, M. (2014). Information criteria for variable selection under sparsity. Biometrika, 101(1), 37-55.
Bleich, J., Kapelner, A., George, E. I., & Jensen, S. T. (2014). Variable selection for BART: An application to gene regulation. The Annals of Applied Statistics, 8(3), 1750-1781.
Storlie, C., Anderson, B., Vander Wiel, S., Quist, D., Hash, C., & Brown, N. (2014). Stochastic identification of malware with dynamic traces. The Annals of Applied Statistics, 8(1), 1-18.
Bertsimas, D., & Mazumder, R. (2014). Least quantile regression via modern optimization. The Annals of Statistics, 42(6), 2494-2525.
Aue, A., Cheung, R. C., Lee, T. C., & Zhong, M. (2014). Segmented model selection in quantile regression using the minimum description length principle. Journal of the American Statistical Association, 109(507), 1241-1256.
Chen, Z., Tang, M. L., Gao, W., & Shi, N. Z. (2014). New robust variable selection methods for linear regression models. Scandinavian Journal of Statistics, 41(3), 725-741.
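Several entries in this list (e.g. Wu 2013; Yen & Yen 2014) solve ℓ1-penalized least-squares problems by cyclic coordinate descent with soft-thresholding. A minimal NumPy sketch of that generic update, assuming the (1/(2n)) least-squares scaling; this is an illustration only, not the implementation from any paper listed:

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature x_j'x_j / n
    r = y - X @ beta                    # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]                    # remove j's contribution
            rho = X[:, j] @ r / n                     # partial correlation
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * beta[j]                    # restore with updated beta_j
    return beta

# Toy sparse regression: only the first two coefficients are active.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
true_beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ true_beta + 0.05 * rng.standard_normal(200)

beta_hat = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
print(selected)
```

On this toy design the lasso typically retains the two true signals and shrinks the noise coefficients exactly to zero, with the usual λ-sized shrinkage bias on the retained coefficients — the bias that the adaptive-lasso and folded-concave papers above aim to remove.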



