Cyclic Markov Policy Graph #736
Comments
There is no single "optimal" approach. It depends on your model.
Change the transition probability in:
At the moment, it looks like you are using a uniform distribution.
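For reference, here is a minimal sketch of a cyclic policy graph with non-uniform transition probabilities, built with `SDDP.Graph`, `SDDP.add_node`, and `SDDP.add_edge`. The node names, probabilities, and discount factor are illustrative assumptions, not values from this thread:

```julia
using SDDP, HiGHS  # assumes the HiGHS solver is installed

# Build a two-node cyclic graph by hand. The names (:wet, :dry) and all
# probabilities below are made-up illustrative values.
graph = SDDP.Graph(:root)
SDDP.add_node(graph, :wet)
SDDP.add_node(graph, :dry)
# Transitions out of the root node.
SDDP.add_edge(graph, :root => :wet, 0.6)
SDDP.add_edge(graph, :root => :dry, 0.4)
# Weighted transitions between the nodes. The outgoing probabilities of a
# node may sum to less than 1; the shortfall acts as a discount factor
# (here 0.95), which keeps the cyclic graph well-posed.
SDDP.add_edge(graph, :wet => :wet, 0.7 * 0.95)
SDDP.add_edge(graph, :wet => :dry, 0.3 * 0.95)
SDDP.add_edge(graph, :dry => :wet, 0.4 * 0.95)
SDDP.add_edge(graph, :dry => :dry, 0.6 * 0.95)

# A trivial subproblem, just to show how the graph is passed in.
model = SDDP.PolicyGraph(
    graph;
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variable(sp, x >= 0, SDDP.State, initial_value = 0)
    @stageobjective(sp, x.out)
end
```

To change the weights, you only need to change the probabilities passed to `SDDP.add_edge`; the rest of the model is unaffected.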
Thanks for your response. I have an idea to use the stochastic process distribution parameters to estimate the transition probabilities.
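One way to do this (a sketch of the general technique, not something SDDP.jl provides): simulate the stochastic process, classify each stage into a Markov state, count the observed transitions, and normalize each row of the count matrix. The toy two-state sequence below is an invented example:

```julia
# Estimate a Markov transition matrix from a simulated state sequence by
# counting transitions and normalizing each row. Pure Julia, no packages.
function estimate_transitions(states::Vector{Int}, n::Int)
    counts = zeros(Float64, n, n)
    for t in 1:length(states)-1
        counts[states[t], states[t+1]] += 1
    end
    for i in 1:n
        row_sum = sum(counts[i, :])
        if row_sum > 0
            counts[i, :] ./= row_sum  # each row becomes a probability vector
        end
    end
    return counts
end

# Example: a toy sequence over two states (illustrative data only).
seq = [1, 1, 2, 1, 2, 2, 2, 1]
P = estimate_transitions(seq, 2)
```

The resulting rows of `P` can then be used as the edge probabilities in the policy graph, scaled by a discount factor if the graph is cyclic.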
Sure. SDDP.jl doesn't provide tools to help with this. It is up to you to design the graph that is most appropriate for your problem.
Closing because I don't think there is anything left to do here. Please comment if you have further questions and I will re-open.
Hello Oscar,
I'm trying to model an infinite-horizon/cyclic Markov policy graph, just like the pastoral farming example in this paper, and I have done that by adding edges from the last node to the first node.
It works with the nodal transition probability shared evenly amongst the nodes. Is this the optimal approach? How can I add weights to the nodal transitions based on my simulated stochastic process?