I will present a sparse kernelization of logistic regression in which the prototypes are not necessarily taken from the training data.

Traditional sparse kernel logistic regression

Consider an $M$-class logistic regression model given by

$P(y|x) \propto \exp\left(\beta_{y0} + \sum_{j=1}^{d} \beta_{yj} x_j\right)$ for $y = 0, 1, \ldots, M$,

where $j$ indexes the $d$ features. Fitting the model to a data set $D = \{x_i, y_i\}_{i=1,\ldots,N}$ involves estimating the betas to maximize the likelihood of $D$.

This logistic regression model is quite simple (the classifier is a linear function of the features of the example), and in some circumstances we might want a classifier that can produce a more complex decision boundary. One way to achieve this is by kernelization. We write

$P(y|x) \propto \exp\left(\beta_{y0} + \sum_{i=1}^{N} \beta_{yi} k(x, x_i)\right)$ for $y = 0, 1, \ldots, M$,

where $k(\cdot, \cdot)$ is a kernel function. In order to be able to use this c…
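As a concrete sketch of the two models above, the following fits both the plain linear model and its kernelized counterpart on toy data. The RBF kernel, the two-moons data set, and scikit-learn's LogisticRegression as the maximum-likelihood fitter are assumptions of the sketch, not something specified in the derivation above.

```python
# Minimal sketch of plain vs. kernelized logistic regression.
# Assumptions (mine, not the post's): an RBF kernel, scikit-learn's
# LogisticRegression as the maximum-likelihood fitter, and toy
# two-moons data standing in for D = {x_i, y_i}.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Plain model: P(y|x) ∝ exp(β_{y0} + Σ_j β_{yj} x_j).
# A linear decision boundary cannot separate the two moons well.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("linear accuracy:", linear.score(X, y))

# Kernelized model: represent each example by its kernel evaluations
# against all N training points, K[i, j] = k(x_i, x_j), then fit the
# same logistic regression on these N-dimensional kernel features:
# P(y|x) ∝ exp(β_{y0} + Σ_i β_{yi} k(x, x_i)).
K = rbf_kernel(X, X, gamma=1.0)
kernelized = LogisticRegression(max_iter=1000).fit(K, y)
print("kernelized accuracy:", kernelized.score(K, y))

# Predicting for a new point requires kernel evaluations against all
# N training points -- the storage and compute cost that sparse
# variants (keeping only a few prototypes) aim to reduce.
x_new = np.array([[0.5, 0.0]])
print(kernelized.predict(rbf_kernel(x_new, X, gamma=1.0)))
```

The last lines highlight the cost that motivates sparsity: the non-sparse kernelized model must retain all $N$ training points to classify anything new.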