LASSO regression (least absolute shrinkage and selection operator), also called lasso regression, adds a penalty term that shrinks the coefficients of a regression model. This helps prevent overfitting and alleviates serious collinearity. LASSO regression was first proposed by the statistician Robert Tibshirani and is now widely used in prediction modelling. In the literature, leading researchers have recommended the LASSO penalty as the first choice when fitting models with many candidate variables but relatively few samples. Today we'll walk through how to build a prediction model with LASSO regression in R.
First we need to install R's glmnet package, developed by a team led by Stanford statistician Trevor Hastie together with Tibshirani and colleagues.
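If the packages are not installed yet, a minimal sketch of the one-time installation step (package names as on CRAN; the foreign package normally ships with R and is shown only for completeness):

# install the packages used below from CRAN (one-time step)
install.packages("glmnet")
install.packages("foreign")   # usually already available with a standard R installation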
Load the required packages and import the data (the same breast cancer dataset we used previously in SPSS), then delete the missing values.
library(glmnet)
library(foreign)
bc <- read.spss("E:/r/Breast cancer survival agec.sav", use.value.labels=F, to.data.frame=T)
bc <- na.omit(bc)

At present, the glmnet package only accepts data in matrix form; passing a data frame will raise an error. So we first need to convert the data into matrices, which is an important step.
y <- as.matrix(bc[,8])
x <- as.matrix(bc[,c(2:7,9:11)])

After the conversion we have two data matrices: y is the outcome and x holds the predictor variables.
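A quick, optional sanity check that the conversion worked and that the outcome really is binary (assuming column 8 is the status variable, as in this dataset):

dim(x)     # rows = observations, columns = candidate predictors
table(y)   # should show exactly two levels for a binomial model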
Start building the model
f1 = glmnet(x, y, family="binomial", nlambda=100, alpha=1)
# alpha=1 gives LASSO regression; alpha=0 gives ridge regression.
# The family argument defines the type of regression model:
# family="gaussian"    for a univariate continuous outcome
# family="mgaussian"   for a multivariate continuous outcome
# family="poisson"     for a non-negative count outcome
# family="binomial"    for a binary outcome
# family="multinomial" for a categorical outcome with more than two levels
# The outcome here is a binary variable, so we use binomial.
print(f1)  # print the fitted results of f1

You can see from the output that as lambda increases, the degrees of freedom (number of nonzero coefficients) and the fraction of deviance explained both shrink; the smallest lambda in the sequence is 0.000233.
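If you want to inspect the coefficients at one particular point along the path, coef() can be applied to the fitted object; the value 0.05 below is only an illustrative choice, not a value taken from the output above:

# coefficients of the LASSO path at a single, arbitrarily chosen lambda
coef(f1, s = 0.05)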
Output graphics
plot(f1, xvar="lambda", label=TRUE)

The x-axis is the logarithm of lambda and the y-axis shows the variable coefficients. You can see that as lambda increases the coefficients shrink, and some coefficients become exactly 0 (meaning those variables are dropped from the model).
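As a side note, glmnet can also plot the same coefficient paths against other quantities; the two variants below use options documented for plot.glmnet:

plot(f1, xvar="dev",  label=TRUE)   # coefficients vs. fraction of deviance explained
plot(f1, xvar="norm", label=TRUE)   # coefficients vs. L1 norm of the coefficients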

Next, cross validation
We can use part of the dataset for a quick check of the predictions (this step is optional).
predict(f1, newx=x[2:5,], type = "response")
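Because f1 was fit over a whole sequence of lambdas, predict() returns one column per lambda. To get predictions at a single penalty you can pass s (again, 0.05 is just an illustrative value), or ask for the predicted class instead of the probability:

predict(f1, newx = x[2:5,], type = "response", s = 0.05)  # predicted probabilities at one lambda
predict(f1, newx = x[2:5,], type = "class",    s = 0.05)  # predicted class labels at one lambda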

Then we run cross-validation with the cv.glmnet function and plot the result.
cvfit = cv.glmnet(x, y)
plot(cvfit)
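A side note before reading the plot: cv.glmnet(x, y) as written uses the default Gaussian family. A variant that keeps the binomial family used for f1 might look like the sketch below, though the λ values it produces will differ from the ones quoted later in this post:

# cross-validation consistent with the binomial model above (assumed variant, not the run shown here)
cvfit.bin <- cv.glmnet(x, y, family = "binomial", type.measure = "deviance", nfolds = 10)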

There are two dashed lines in this plot: one marks the λ that gives the minimum mean cross-validated error, and the other marks the largest λ whose error is within one standard error of that minimum. The names are a bit of a mouthful; all we really need are the two values.
cvfit$lambda.min  # the λ that gives the minimum cross-validated error
cvfit$lambda.1se  # the largest λ within one standard error of the minimum
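As an aside, the two λ values can also be passed to coef() by name rather than retyping the printed numbers:

coef(cvfit, s = "lambda.min")   # coefficients at the λ with minimum CV error
coef(cvfit, s = "lambda.1se")   # coefficients at the more parsimonious λ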

OK, we take these two values and plug them back into the model to have a look.
l.coef2 <- coef(cvfit$glmnet.fit, s=0.004174369, exact = F)
l.coef1 <- coef(cvfit$glmnet.fit, s=0.04272596, exact = F)
l.coef1
l.coef2
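The coefficient objects returned above are sparse matrices; a small sketch for listing only the variables whose coefficients are nonzero (i.e., the ones LASSO kept):

# convert the sparse coefficient matrices to regular matrices and keep the nonzero rows
m1 <- as.matrix(l.coef1)
rownames(m1)[m1 != 0]   # predictors retained at the larger penalty
m2 <- as.matrix(l.coef2)
rownames(m2)[m2 != 0]   # predictors retained at the smaller penalty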

We can see that in the first model the variables have essentially all dropped out, while the second model keeps 5 variables, so we can only choose the second λ value.
We take these retained coefficients and build a generalized linear model. I'll leave out the time variable here (this is just a demonstration; you could include it).
mod <- glm(status ~ age + pathsize + lnpos + pr, family="binomial", data = bc)
summary(mod)

In the end 3 indicators are selected, and we can also work out the OR and 95% CI for them.
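One way to get the OR and 95% CI from the fitted logistic model is to exponentiate the coefficients and their confidence limits; the sketch below uses Wald-type intervals via confint.default (profile-likelihood intervals via confint() are an alternative):

# odds ratios with Wald 95% confidence intervals
exp(cbind(OR = coef(mod), confint.default(mod)))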

OK, we'll stop here. The whole model has been built. Have you learned it?
