Forward Variable Selection for Ultra-high Dimensional Quantile Regression Models



By Toshio Honda. Published 2022.

Boosting Methods for Variable Selection in High Dimensional Sparse Models



Published 2004. Synopsis: First, we propose new variable selection techniques for regression in high-dimensional linear models based on forward selection versions of the LASSO, adaptive LASSO and elastic net, called the forward iterative regression and shrinkage technique (FIRST), adaptive FIRST and elastic FIRST, respectively. These methods appear to work better for extremely sparse high-dimensional linear regression models. We exploit the fact that the LASSO, adaptive LASSO and elastic net have closed-form solutions when the predictor is one-dimensional. The explicit formula is then used repeatedly in an iterative fashion until convergence. By carefully considering the relationship between estimators at successive stages, we develop fast algorithms to compute our estimators. The performance of the new estimators is compared with that of commonly used estimators in terms of predictive accuracy and errors in variable selection, and our approach shows better prediction performance for highly sparse high-dimensional linear regression models. Second, we propose a new variable selection technique for binary classification in high-dimensional models based on a forward selection version of the squared support vector machine or the one-norm support vector machine, called the forward iterative selection and classification algorithm (FISCAL). This method appears to work better for highly sparse high-dimensional binary classification models. We suggest squared support vector machines using the 1-norm and 2-norm simultaneously; they are convex and, when the predictor is one-dimensional, differentiable except at zero. An iterative forward selection approach is then applied with the squared support vector machines until a stopping rule is satisfied, and we develop a recursive algorithm for the FISCAL to reduce the computational burden. We also apply the procedure to the original one-norm support vector machine and compare the FISCAL with other widely used methods.
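The one-dimensional closed form the FIRST methods exploit is the soft-thresholding operator; the sketch below (illustrative code, not from the book, and all names are my own) applies it coordinate-by-coordinate to partial residuals on standardized predictors:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form 1-D LASSO solution: argmin_b 0.5*(b - z)**2 + lam*|b|."""
    return float(np.sign(z) * max(abs(z) - lam, 0.0))

def first_sketch(X, y, lam, n_sweeps=100):
    """Hypothetical FIRST-style iteration: repeatedly apply the 1-D closed
    form to each predictor's partial residual until the sweeps converge.
    Assumes the columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()                          # residual y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            zj = X[:, j] @ (r + X[:, j] * beta[j]) / n  # 1-D least-squares fit
            bj = soft_threshold(zj, lam)
            r += X[:, j] * (beta[j] - bj)               # keep residual current
            beta[j] = bj
    return beta
```

On an extremely sparse model this drives all but the truly active coefficients to exactly zero, which is the behavior the synopsis reports.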

Forward Variable Selection for Sparse Ultra-high Dimensional Generalized Varying Coefficient Models



By Toshio Honda. Published 2020.

Ultra High Dimension Variable Selection with Threshold Partial Correlations



By Yiheng Liu. Published 2022. Synopsis: For variable selection in linear regression, partial correlation for normal models (Bühlmann, Kalisch and Maathuis, 2010) is a powerful alternative to penalized least squares approaches (LASSO, SCAD, etc.). Li, Liu and Lou (2015) improved the method with the concept of threshold partial correlation (TPC) and extended it to elliptically contoured distributions. The TPC procedure has clear advantages over simple partial correlation in high- or ultrahigh-dimensional cases, where the dimension of the predictors grows at an exponential rate in the sample size. However, its convergence is not very satisfying, since the procedure usually takes a substantial amount of time to reach the final solution, especially in high- or ultrahigh-dimensional scenarios. Moreover, the model assumptions behind TPC are strong, which suggests the approach may not be convenient to use in practice. To address these two issues, this dissertation puts forward a new model selection algorithm. It starts with an alternative definition of elliptically contoured distributions, which restricts the impact of the marginal kurtosis and imposes a relatively weaker condition for the validity of the algorithm. In simulations, the new approach demonstrates not only outcomes competitive with established methods such as LASSO and SCAD, but also advantages in computing efficiency. The idea is extended to survival data and nonparametric inference by exploring various measures of correlation between the response variable and the predictors.
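The basic building block of such procedures, the sample partial correlation of the response and a candidate predictor given an already-selected set, can be computed by correlating regression residuals. A minimal sketch (names are illustrative, and the TPC thresholding logic itself is omitted):

```python
import numpy as np

def _residual(v, A):
    """Residual of v after least-squares projection onto the columns of A."""
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return v - A @ coef

def partial_corr(y, x, Z=None):
    """Sample partial correlation of y and x given the conditioning block Z:
    regress each on Z (plus an intercept) and correlate the residuals."""
    n = len(y)
    A = np.ones((n, 1)) if Z is None else np.column_stack([np.ones(n), Z])
    ry = _residual(np.asarray(y, float), A)
    rx = _residual(np.asarray(x, float), A)
    return float(ry @ rx / np.sqrt((ry @ ry) * (rx @ rx)))
```

A TPC-style rule would then admit x only when |partial_corr(y, x, Z)| exceeds a data-driven threshold.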

Prediction and Variable Selection in Sparse Ultrahigh Dimensional Additive Models



By Girly Manguba Ramirez. Published 2013. Synopsis: Advances in technology have enabled many fields to collect datasets in which the number of covariates (p) is much larger than the number of observations (n), so-called ultrahigh dimensionality. In this setting, classical regression methodologies are invalid, and there is a great need for methods that explain the variation of the response variable using only a parsimonious set of covariates. Recent years have seen significant development of variable selection procedures; however, the available procedures usually select too many false variables, and most are appropriate only when the response variable is linearly associated with the covariates. Motivated by these concerns, we propose a procedure for variable selection in the ultrahigh-dimensional setting that reduces the number of false positive variables. Moreover, this procedure can be applied when the response variable is continuous or binary, and when it is linearly or nonlinearly related to the covariates. Inspired by the Least Angle Regression approach, we develop two multi-step algorithms to select variables in sparse ultrahigh-dimensional additive models. The variables go through a series of nonlinear dependence evaluations following a Most Significant Regression (MSR) algorithm, which is also designed to predict the response variable. The first algorithm, MSR-continuous (MSRc), is appropriate for a continuous response. Simulation results demonstrate that this algorithm works well; comparisons with greedy-INIS by Fan et al. (2011) and the generalized correlation procedure by Hall and Miller (2009) show that MSRc has a false positive rate significantly lower than both methods, with accuracy and a true positive rate comparable to greedy-INIS. The second algorithm, MSR-binary (MSRb), is appropriate for a binary response. Simulations demonstrate that MSRb is competitive in prediction accuracy and true positive rate, and better than GLMNET in false positive rate. An application of MSRb to real datasets is also presented. In general, the MSR algorithm selects fewer variables while preserving prediction accuracy.

Semiparametric Quantile Averaging in the Presence of High-Dimensional Predictors



By Jan G. De Gooijer. Published 2017. 37 pages. Synopsis: The paper proposes a method for forecasting conditional quantiles. In practice, one often does not know the "true" structure of the underlying conditional quantile function, and there may be a potentially large number of predictors. Mainly intended for such cases, we introduce a flexible and practical framework based on penalized high-dimensional quantile averaging. In addition to prediction, we show that the proposed method can also serve as a valid predictor selector. We conduct extensive simulation experiments to assess its prediction and variable selection performance for nonlinear and linear model designs. In terms of predictor selection, the approach tends to select the true set of predictors with minimal false positives. With respect to prediction accuracy, the method competes well even with benchmark/oracle methods that know one or more aspects of the underlying quantile regression model. To further illustrate the merit of the proposed method, we provide an application to out-of-sample forecasting of U.S. core inflation using a large set of monthly macroeconomic variables from the recently developed FRED-MD database. The application yields several empirical findings.
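Quantile forecasts like these are conventionally scored with the check (pinball) loss, which the true conditional quantile minimizes in expectation. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Average check (pinball) loss of quantile forecasts q_pred at level tau:
    under-predictions cost tau*(y - q), over-predictions cost (1 - tau)*(q - y)."""
    u = np.asarray(y, float) - np.asarray(q_pred, float)
    return float(np.mean(np.where(u >= 0, tau * u, (tau - 1.0) * u)))
```

Averaging this loss over an out-of-sample period is the usual way to compare competing conditional-quantile forecasts, e.g. in the core-inflation application described above.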

A Non-iterative Method for Fitting the Single Index Quantile Regression Model with Uncensored and Censored Data



By Eliana Christou. Published 2016. Synopsis: Quantile regression (QR) is becoming increasingly popular due to its relevance in many scientific investigations. Linear and nonlinear QR models have been studied extensively, while recent research focuses on the single index quantile regression (SIQR) model. Compared to the single index mean regression (SIMR) problem, the fitting and asymptotic theory of the SIQR model are more complicated because closed-form expressions for estimators of conditional quantiles are unavailable; consequently, existing methods are necessarily iterative. We propose a non-iterative estimation algorithm and derive the asymptotic distribution of the proposed estimator under heteroscedasticity. For identifiability, we use a parametrization that sets the first coefficient to 1 instead of the typical condition restricting the norm of the parametric component. This distinction is more than cosmetic, as it critically affects the correspondence between the estimator derived and the asymptotic theory. The ubiquity of high-dimensional data has led to a number of variable selection methods for linear and nonlinear QR models and, recently, for the SIQR model. We propose a new non-iterative algorithm for simultaneous variable selection and parameter estimation, also applicable to heteroscedastic data, consisting of two steps: Step 1 performs initial variable selection, and Step 2 uses the results of Step 1 to obtain better estimates of the conditional quantiles and, from them, to perform simultaneous variable selection and estimation of the parametric component of the SIQR model. It is shown that the initial variable selection of Step 1 consistently identifies the relevant variables, and that the estimated parametric component derived in Step 2 satisfies the oracle property. Furthermore, QR is particularly relevant for the analysis of censored survival data as an alternative to proportional hazards and accelerated failure time models. Such data occur frequently in biostatistics, environmental sciences, social sciences and econometrics. There is a large body of work on linear and nonlinear QR models for censored data, but the SIQR model has only recently received attention, and the only existing fitting method uses an iterative algorithm with no asymptotic theory for the resulting estimator of the Euclidean parameter. We propose a new non-iterative estimation algorithm and derive the asymptotic distribution of the proposed estimator under heteroscedasticity.

Computation in Quantile and Composite Quantile Regression Models with Or Without Regularization



By Jueyu Gao. Published 2015. 55 pages. Synopsis: Quantile and composite quantile regression, with or without regularization, have been widely studied and applied in high-dimensional model estimation and variable selection. Although the theoretical aspects are well established, the lack of efficient computational methods and publicly available programs or packages hinders research in this area. Koenker established and implemented the interior point (IP) method in quantreg for quantile regression with or without regularization, but it cannot handle composite quantile regression; the same limitation applies to the coordinate descent (CD) algorithm implemented in CDLasso. The lack of readily available programs for composite quantile regression with or without regularization motivates our work. We implement three algorithms, Majorize-Minimize (MM), Coordinate Descent (CD) and the Alternating Direction Method of Multipliers (ADMM), for quantile and composite quantile regression with or without regularization, and conduct a simulation comparing the performance of the four algorithms in time efficiency and estimation accuracy. The simulation study shows our program is time efficient on high-dimensional problems. Based on this performance, we publish the R package cqrReg, which gives the user more flexibility and capability in various data analyses. To optimize time efficiency, cqrReg is coded in C++ and linked to R through a user-friendly interface.
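To illustrate the MM idea for the check loss (in the spirit of Hunter and Lange's quantile-regression algorithm; cqrReg's actual implementation is in C++ and differs in detail), one can majorize |u| by u²/(2d) + d/2 at the current residuals, which turns each step into a weighted least-squares solve:

```python
import numpy as np

def qr_mm(X, y, tau, n_iter=200, eps=0.1):
    """MM sketch for quantile regression at level tau.  Writing the check loss
    as 0.5*|u| + (tau - 0.5)*u and majorizing |u| quadratically at the current
    residuals gives a weighted least-squares update with weights 1/(|r| + eps);
    eps is a smoothing constant trading a little accuracy for stability."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS warm start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / (np.abs(r) + eps)                 # majorization weights
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X,
                               XtW @ y + (2.0 * tau - 1.0) * X.sum(axis=0))
    return beta
```

With an intercept-only design this reduces to a Weiszfeld-type iteration whose fixed point approximates the sample tau-quantile.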

Statistical Foundations of Data Science

ISBN 13 : 0429527616


By Jianqing Fan. Published by CRC Press, 2020. 942 pages. Synopsis: Statistical Foundations of Data Science gives a thorough introduction to commonly used statistical models, contemporary statistical machine learning techniques and algorithms, and their mathematical insights and statistical theories. It aims to serve as a graduate-level textbook and a research monograph on high-dimensional statistics, sparsity and covariance learning, machine learning, and statistical inference, and includes ample exercises involving both theoretical studies and empirical applications. The book begins with an introduction to the stylized features of big data and their impact on statistical analysis. It then introduces multiple linear regression and expands the techniques of model building via nonparametric regression and kernel tricks. It provides a comprehensive account of sparsity exploration and model selection for multiple regression, generalized linear models, quantile regression, robust regression, and hazards regression, among others. High-dimensional inference and feature screening are also thoroughly addressed. The book further gives a comprehensive account of high-dimensional covariance estimation, learning latent factors and hidden structures, and their applications to statistical estimation, inference, prediction and machine learning, and it thoroughly introduces statistical machine learning theory and methods for classification, clustering, and prediction, including CART, random forests, boosting, support vector machines, clustering algorithms, sparse PCA, and deep learning.

Two Tales of Variable Selection for High Dimensional Data



By Cong Liu. Published 2012. 95 pages. Synopsis (excerpt): We also conduct similar studies comparing the two corresponding screening and selection procedures, LASSO and correlation screening, in the classification setting, i.e., $L_{1}$ penalized logistic regression and the two-sample t-test. Initial results of an exploratory analysis are presented to provide insight into the scenarios in which each method is preferred. Possible extensions, future work and the differences between the regression and classification settings are discussed.

Topics on Variable Selection in High-Dimensional Data



By Jia Wang. Published 2021. Synopsis: Variable selection has been extensively studied in the last few decades, as it provides a principled solution to the high dimensionality arising in a broad spectrum of real applications, such as bioinformatics, health studies, social science and econometrics. This dissertation is concerned with variable selection for ultrahigh-dimensional data, where the dimension is allowed to grow with the sample size or the network size at an exponential rate. We propose new Bayesian approaches to selecting variables under several model frameworks: (1) partially linear models, (2) static social network models with degree heterogeneity and (3) time-varying network models. First, for partially linear models, we develop a procedure that employs the difference-based method to reduce the impact of estimating the nonparametric component, and incorporates Bayesian subset modeling with a diffusing prior (BSM-DP) to shrink the corresponding estimator in the linear component. Second, we consider a class of network models in which the connection probability depends on ultrahigh-dimensional nodal covariates (homophily) and node-specific popularity (degree heterogeneity), and propose a Bayesian method to select nodal features in both dense and sparse networks under a relaxed assumption on the popularity parameters. To alleviate the computational burden for large sparse networks, we also develop a working model in which parameters are updated based on a dense sub-graph at each step. Last, we extend the static model to time-varying cases, where the connection probability at time t is modeled using observed nodal attributes at time t and node-specific continuous-time baseline functions evaluated at t. These Bayesian proposals are shown to be analogous to a mixture of L0- and L2-penalized methods and work well in the setting of highly correlated predictors. Model selection consistency is established for all the aforementioned models, in the sense that the probability of the true model being selected converges to one asymptotically. The finite-sample performance of the proposed models is further examined by simulation studies and analyses of social-media and financial datasets.

Variable Selection by Regularization Methods for Generalized Mixed Models

ISBN 13 : 3736939639


By Andreas Groll. Published by Cuvillier Verlag, 2011. 175 pages. Synopsis: A regression analysis describes the dependency of random variables in the form of a functional relationship, distinguishing between the dependent response variable and one or more independent influence variables. A variety of model classes and inference methods is available, ranging from the conventional linear regression model to recent non- and semiparametric regression models. The so-called generalized regression models form a methodically consistent framework incorporating many regression approaches with response variables that are not necessarily normally distributed, including the conventional linear regression model based on the normality assumption as a special case. When repeated measurements are modeled, random effects or coefficients can be included in addition to fixed effects; such models are known as random effects models or mixed models. As a consequence, regression procedures are extremely versatile and can address very different problems. In this dissertation, regularization techniques for generalized mixed models are developed that can perform variable selection. These techniques are especially appropriate when many potential influence variables are present and existing approaches tend to fail. First, a componentwise boosting technique for generalized linear mixed models is presented, which is based on the likelihood function and works by iteratively fitting the residuals using weak learners; the complexity of the resulting estimator is determined by information criteria. For the estimation of variance components, two approaches are considered: an estimator obtained by maximizing the profile likelihood, and an estimator calculated with an approximate EM algorithm. The boosting concept is then extended to mixed models with ordinal response variables. Two types of ordered models are considered, the threshold (cumulative) model and the sequential model, both based on the assumption that the observed response results from a categorized version of a latent metric variable. Later in the thesis, the boosting approach is extended to additive predictors; the unknown functions to be estimated are expanded in B-spline basis functions whose smoothness is controlled by penalty terms. Finally, a suitable L1-regularization technique for generalized linear models is presented, based on a combination of Fisher scoring and gradient optimization. Extensive simulation studies and numerous applications illustrate the competitiveness of the methods developed in this thesis compared with conventional approaches. Bootstrap methods are used to calculate standard errors.

Variable Selection in High Dimensional Data Analysis with Applications



Published 2015. 108 pages.

Exploring Modern Regression Methods Using SAS

ISBN 13 : 9781642954876


Published 2019. 142 pages. Synopsis: This special collection of SAS Global Forum papers demonstrates new and enhanced capabilities and applications of lesser-known SAS/STAT and SAS Viya procedures for regression models. The goal is to raise awareness of valuable current SAS/STAT content the user may not be aware of. Also available free as a PDF from sas.com/books.

Variable Selection for High-dimensional Data with Error Control



By Han Fu (Ph.D. in biostatistics). Published 2022. Synopsis: Many high-throughput genomic applications involve a large set of covariates, and it is crucial to discover which variables are truly associated with the response. It is often desirable to select variables that are indeed true and reproducible in follow-up studies. Effectively controlling the false discovery rate (FDR) increases the reproducibility of the discoveries and has been a major challenge in variable selection research, especially for high-dimensional data. Existing error control approaches include augmentation approaches, which use artificial variables as benchmarks for decision making, such as model-X knockoffs. We introduce another augmentation-based selection framework extended from a Bayesian screening approach called reference distribution variable selection. Ordinal responses, not previously considered in this area, were used to compare different variable selection approaches. We constructed various importance measures that fit into these selection frameworks, using either L1-penalized regression or machine learning techniques, and compared the measures in terms of FDR and power using simulated data. Moreover, we applied these selection methods to high-throughput methylation data for identifying features associated with the progression from normal liver tissue to hepatocellular carcinoma, to further compare and contrast their performance. Having established the effectiveness of FDR control for model-X knockoffs, we turned our attention to another important data type: survival data with long-term survivors. Medical breakthroughs in recent years have led to cures for many diseases, resulting in increased observations of long-term survivors. The mixture cure model (MCM) is a type of survival model often used when a cured fraction exists. Unfortunately, few variable selection methods currently exist for MCMs when there are more predictors than samples. To fill the gap, we developed penalized MCMs for high-dimensional datasets that allow identification of prognostic factors associated with cure status and/or survival. Both parametric models and semiparametric proportional hazards models were considered for the survival component. For penalized parametric MCMs, we demonstrated how estimation proceeds using two iterative algorithms, the generalized monotone incremental forward stagewise (GMIFS) method and Expectation-Maximization (EM). For semiparametric MCMs, where multiple types of penalty functions were considered, the coordinate descent algorithm was combined with EM for optimization. The model-X knockoffs method was combined with these algorithms to allow FDR control in variable selection. Through extensive simulation studies, our penalized MCMs have been shown to outperform alternative methods on multiple metrics and to achieve high statistical power while controlling the FDR. In two acute myeloid leukemia (AML) applications with gene expression data, the proposed approaches identified important genes associated with potential cure or time-to-relapse, which may help inform treatment decisions for AML patients.
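The FDR-controlling step of model-X knockoffs referenced here reduces to a data-dependent threshold on sign-symmetric feature statistics W_j; a sketch of the standard knockoff+ selection rule (construction of the knockoff variables and of W is omitted):

```python
import numpy as np

def knockoff_select(W, q):
    """Knockoff+ selection: given statistics W_j that are large and positive
    for real features and sign-symmetric for nulls, find the smallest
    threshold t whose estimated false discovery proportion
    (1 + #{j: W_j <= -t}) / max(#{j: W_j >= t}, 1) is at most q, and
    select the features with W_j >= t."""
    W = np.asarray(W, float)
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.where(W >= t)[0]
    return np.array([], dtype=int)      # no threshold achieves level q
```

W_j can be built, for instance, from the difference in penalized-regression importance between a feature and its knockoff copy, as in the applications described above.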

Variable Selection for High Dimensional Transformation Model



By Wai Hong Lee. Published 2010. 124 pages.

Variable Selection and Estimation in High-Dimensional Models



By Joel L. Horowitz. Published 2016. Synopsis (translated from the French abstract): Models in which the covariates are high-dimensional arise frequently in economics and other fields. Often only a few covariates have a significant effect on the dependent variable; when this happens, the model is said to be sparse. In applications, however, one does not know which covariates are important and which are not. This paper reviews methods for discriminating between important and unimportant variables, with particular attention to methods that discriminate correctly with probability approaching 1 as the sample size increases. Methods are available for a wide variety of models: linear, nonlinear, semiparametric and nonparametric. The finite-sample performance of some of these methods is illustrated using Monte Carlo simulations and an empirical example.