Ask for some code detail #5

Open
ddmm2020 opened this issue Apr 28, 2022 · 0 comments
Does the line `var_update_op = state_ops.assign(var, var_update)` in your code change the w_{1,i} of the COCOB-Backprop algorithm?

In your paper "Training Deep Networks without Learning Rates Through Coin Betting", you say:

> Note that the update in line 10 is carefully defined: the algorithm does not use the previous w_{t,i} in the update. Indeed, this algorithm belongs to the family of Dual Averaging algorithms, where the iterate is a function of the average of the past gradients [Nesterov, 2009].

You do this in line 10 of Algorithm 2 (COCOB-Backprop), but the code above assigns a new value to `var` in place, so I would like to confirm that w_{1,i} stays fixed.
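To make my question concrete, here is a minimal scalar sketch of how I read Algorithm 2 (this is my own reconstruction, not your code; variable names are mine, and I use the convention that the "coin outcome" g_t is the negative gradient). The point I want to confirm is in the last line of the loop: the new iterate is rebuilt from the initial weight w1 and the running sums, so w1 itself is only read, never overwritten, even though the current iterate is reassigned every step.

```python
def cocob_backprop(grad_fn, w1, n_steps, alpha=100.0):
    """Scalar sketch of COCOB-Backprop as I understand Algorithm 2.

    grad_fn(w) returns dl/dw at the current iterate.
    Returns the list of iterates [w_1, w_2, ...].
    """
    L = 1e-8       # running max of |g| (tiny init to avoid division by zero)
    G = 0.0        # running sum of |g|
    reward = 0.0   # accumulated betting reward
    theta = 0.0    # running sum of the (negative) gradients
    w = w1
    ws = [w]
    for _ in range(n_steps):
        g = -grad_fn(w)                        # coin outcome: negative gradient
        L = max(L, abs(g))
        G += abs(g)
        reward = max(reward + (w - w1) * g, 0.0)
        theta += g
        # Line 10: the new iterate is recomputed from w1 and the running
        # sums, never from the previous w (dual-averaging style).
        w = w1 + theta / (L * max(G + L, alpha * L)) * (L + reward)
        ws.append(w)
    return ws
```

For example, on l(w) = (w - 1)^2 / 2 starting from w1 = 0, the first update gives w_2 = 1 / (1 * max(2, 100)) * 1 = 0.01: the iterate changes, but w1 is untouched.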
