This repository has been archived by the owner on Nov 19, 2020. It is now read-only.
Thanks for opening the issue! As you correctly noticed, the current Multinomial Logistic Regression implementation in the framework does not use l-bfgs or newton-cg. However, the framework does contain l-bfgs and conjugate gradient optimizers that could be used to build such a multinomial logistic regression model: the BroydenFletcherGoldfarbShanno and ConjugateGradient classes, respectively.
Basically, what would have to be done is to implement the cross-entropy loss and its gradient, and pass them to one of those solvers (i.e., by setting the solver's Function and Gradient delegates).
While this shouldn't be too hard to do (the formulas for the objective function and its gradient are well known), it unfortunately has not been implemented yet. Maybe someone would like to give it a try and submit a pull request? A rough sketch of the idea is shown below.
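To make that a bit more concrete, here is a minimal sketch of what it could look like. This is not a tested implementation: the toy data, the flat parameter layout (one block of d weights plus a bias per class), and the helper method names are assumptions made purely for illustration; only the BroydenFletcherGoldfarbShanno class and its Function/Gradient delegates come from the framework itself.

```csharp
// Sketch only: softmax cross-entropy objective and gradient wired into
// Accord.NET's L-BFGS solver. Parameter layout, data and helper names
// are illustrative assumptions, not part of the framework.
using System;
using Accord.Math.Optimization;

class MultinomialLogisticSketch
{
    static void Main()
    {
        // Toy data: 2 features, 3 classes.
        double[][] inputs =
        {
            new[] { 0.0, 0.1 }, new[] { 0.2, 0.0 },   // class 0
            new[] { 1.0, 1.1 }, new[] { 0.9, 1.0 },   // class 1
            new[] { 2.0, 0.1 }, new[] { 2.1, 0.0 },   // class 2
        };
        int[] outputs = { 0, 0, 1, 1, 2, 2 };
        int d = 2, k = 3;

        // One weight block of (d + 1) parameters per class (bias last).
        var lbfgs = new BroydenFletcherGoldfarbShanno(k * (d + 1))
        {
            Function = w => CrossEntropy(w, inputs, outputs, d, k),
            Gradient = w => CrossEntropyGradient(w, inputs, outputs, d, k)
        };

        lbfgs.Minimize();
        double[] weights = lbfgs.Solution;   // fitted weights
        Console.WriteLine("Loss at solution: " + lbfgs.Value);
    }

    // Negative log-likelihood: L(w) = -sum_i log p_{y_i}(x_i).
    static double CrossEntropy(double[] w, double[][] x, int[] y, int d, int k)
    {
        double loss = 0;
        for (int i = 0; i < x.Length; i++)
            loss -= Math.Log(Softmax(w, x[i], d, k)[y[i]] + 1e-12);
        return loss;
    }

    // Gradient: dL/dw_{c,j} = sum_i (p_c(x_i) - [y_i == c]) * x_{i,j}.
    static double[] CrossEntropyGradient(double[] w, double[][] x, int[] y, int d, int k)
    {
        double[] g = new double[w.Length];
        for (int i = 0; i < x.Length; i++)
        {
            double[] p = Softmax(w, x[i], d, k);
            for (int c = 0; c < k; c++)
            {
                double diff = p[c] - (y[i] == c ? 1.0 : 0.0);
                for (int j = 0; j < d; j++)
                    g[c * (d + 1) + j] += diff * x[i][j];
                g[c * (d + 1) + d] += diff;              // bias component
            }
        }
        return g;
    }

    // Class probabilities p_c = exp(s_c) / sum_m exp(s_m), shifted by
    // the maximum score for numerical stability.
    static double[] Softmax(double[] w, double[] xi, int d, int k)
    {
        double[] s = new double[k];
        for (int c = 0; c < k; c++)
        {
            s[c] = w[c * (d + 1) + d];                   // bias
            for (int j = 0; j < d; j++)
                s[c] += w[c * (d + 1) + j] * xi[j];
        }
        double max = s[0];
        for (int c = 1; c < k; c++) if (s[c] > max) max = s[c];
        double sum = 0;
        for (int c = 0; c < k; c++) { s[c] = Math.Exp(s[c] - max); sum += s[c]; }
        for (int c = 0; c < k; c++) s[c] /= sum;
        return s;
    }
}
```

With the same two delegates, the ConjugateGradient class could presumably be swapped in for the L-BFGS solver. A real contribution would also need a regularization term and a way to map the optimized weights back into a MultinomialLogisticRegression model, which this sketch leaves out.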
…oss)" in accord like sklearn
Adding a generic method for learning multinomial logistic regression that can be used with any optimization algorithm, such as conjugate gradient, gradient descent and L-BFGS.
Hi,
I used this algorithm in a Python program:
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
with the solver set to ‘newton-cg’ or ‘lbfgs’, for example:
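For reference, this is roughly how the scikit-learn side is used (the data here is made up just to show the call):

```python
# Illustrative only: multinomial logistic regression (softmax /
# cross-entropy loss) fitted with scikit-learn's L-BFGS solver.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 6 samples, 2 features, 3 classes (invented for this example).
X = np.array([[0.0, 0.1], [0.2, 0.0],
              [1.0, 1.1], [0.9, 1.0],
              [2.0, 0.1], [2.1, 0.0]])
y = np.array([0, 0, 1, 1, 2, 2])

clf = LogisticRegression(multi_class='multinomial', solver='lbfgs')
clf.fit(X, y)
print(clf.predict_proba(X))   # per-class probabilities from the softmax model
```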
I am looking for a "multinomial Logistic Regression (cross-entropy loss)" algorithm.
Is there a similar algorithm in Accord.NET?
http://accord-framework.net/docs/html/T_Accord_Statistics_Models_Regression_MultinomialLogisticRegression.htm
I found this one, but it uses a different solver, right?