Use historical data to train a model? #583
You can use:

```julia
scenarios = [
    [(t, w) for (t, w) in enumerate(pvdata.year1)],
    [(t, w) for (t, w) in enumerate(pvdata.year2)],
]
SDDP.train(model; sampling_scheme = SDDP.Historical(scenarios))
```

This might not be doing what you think it's doing, though. The model still assumes that the radiation is independent between weeks, and you'll just keep resampling the same five trajectories, which means that the policy will perform poorly if you simulate it out-of-sample on a future year's realization. If you have only five years of weekly data and you haven't fitted a statistical model, then multistage stochastic optimization is probably the wrong framework.

Thinking about it, though, solar radiation probably is independent from week to week? Have you attempted to quantify the level of inter-week correlation in the data?

I'd train with the defaults, and then simulate the 5 historical scenarios to see how they perform.
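To quantify the inter-week correlation suggested above, here is a minimal sketch in plain Julia (assuming `weekly` is a vector of weekly radiation totals; the name is illustrative, not from this thread):

```julia
using Statistics

# Lag-1 autocorrelation of a weekly radiation series: values near 0 support
# the stagewise-independence assumption; values near 1 suggest a Markovian
# or auto-regressive model is needed instead.
lag1_autocorrelation(weekly) = cor(weekly[1:end-1], weekly[2:end])
```

Computing this for a few lags (shifting by 2, 3, ... weeks) gives a rough picture of how quickly the dependence decays.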
Thank you for your answer, Oscar. I have now managed to get 20 years of historical solar data. My solar data is at an hourly resolution, and I would like to run SDDP at an hourly resolution, i.e. not weekly. Solar irradiation is (kind of) independent between weather spells, but the problem is that these weather spells can last anywhere from a few hours to a few weeks, so it is hard to separate them. Just to clarify your answer: if I use ...

After reading your answer and reading more deeply into SDDP, I am starting to understand the principles better (please correct me if I am wrong). In basic SDDP, only stagewise-independent uncertainty is possible. Stagewise-dependent uncertainty can be included when the system is modeled as a Markov chain, and this can be optimized with SDDP. (This is good, because solar irradiation can be modeled as a Markov process!) Furthermore, there are a number of "tricks" to include stagewise-dependent uncertainty, such as auto-regression and objective states. Unfortunately I do not have the time to build a Markov solar model, but I will try out several heuristic methods that may not be optimal, but are still better than what exists:
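The Markov-chain route described above is supported directly in SDDP.jl via `SDDP.MarkovianPolicyGraph`. A sketch follows; the two weather regimes and all transition probabilities are made up for illustration, not fitted to any data:

```julia
using SDDP, HiGHS

# Two hypothetical Markov states per stage: 1 = "sunny", 2 = "cloudy".
model = SDDP.MarkovianPolicyGraph(
    transition_matrices = [
        [1.0;;],               # root -> stage 1 (start in state 1)
        [0.8 0.2],             # stage 1 -> stage 2
        [0.8 0.2; 0.3 0.7],    # stage 2 -> stage 3 (and so on)
    ],
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, node
    t, weather = node  # stage index and current Markov state
    @variable(subproblem, 0 <= storage <= 10, SDDP.State, initial_value = 0)
    # Hypothetical PV availability that depends on the weather state:
    pv = weather == 1 ? 5.0 : 1.0
    @constraint(subproblem, storage.out <= storage.in + pv)
    @stageobjective(subproblem, storage.out)
end
```

The transition matrices here would normally be estimated from the historical data (e.g. by classifying each period into a regime and counting transitions).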
So there are two parts to the training:
If you use ...
Correct. See https://ieeexplore.ieee.org/abstract/document/9546644 for something that is close to what you're after.
I would just train with the defaults, and then simulate your 20 years of historical data to see how the policy performs.
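A sketch of that workflow (assuming `scenarios` is a vector of per-year `(t, ω)` trajectories built as in the earlier snippet; the `iteration_limit` value is an arbitrary placeholder):

```julia
# Train with the default stagewise-independent sampling scheme.
SDDP.train(model; iteration_limit = 100)

# Then evaluate the trained policy on each historical year exactly once,
# by pinning the sampling scheme to a single historical trajectory.
simulations = [
    SDDP.simulate(model, 1; sampling_scheme = SDDP.Historical(s))[1]
    for s in scenarios
]

# Total cost of the policy in each historical year:
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
```

Comparing the spread of `objectives` across the 20 years gives an out-of-sample estimate of how robust the policy is to inter-week dependence it was not trained on.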
Hi Oscar,
I am still working on optimizing an energy system with photovoltaics, grid power, electrolyser, hydrogen storage and hydrogen fueling station for cars using SDDP. I have implemented stochasticity for the hydrogen demand.
Now I want to add stochasticity for solar irradiation / PV generation. Solar irradiation can be modeled as a Markov chain, but it is very complex to identify the right model. Therefore, I would prefer to simply sample from the previous 5 years of solar data. How can I train the model using historical data?
This is my current model:
So I want to replace optdata.P_pv[t] with a stochastic variable.
I thought of doing the following:
However, this is not possible since solar irradiation data is stagewise-dependent.
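For contrast, the standard stagewise-independent noise construct in SDDP.jl uses `SDDP.parameterize` inside each subproblem. A sketch with hypothetical numbers (this is what works when realizations are independent between stages, and exactly what the stagewise dependence of solar data rules out):

```julia
using SDDP, JuMP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 52,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, t
    @variable(subproblem, 0 <= storage <= 100, SDDP.State, initial_value = 0)
    @variable(subproblem, pv >= 0)
    # Hypothetical PV realizations and probabilities, resampled independently
    # at every stage -- the independence assumption that fails for solar data.
    Ω = [10.0, 20.0, 30.0]
    P = [0.3, 0.4, 0.3]
    SDDP.parameterize(subproblem, Ω, P) do ω
        JuMP.fix(pv, ω; force = true)
    end
    @constraint(subproblem, storage.out <= storage.in + pv)
    @stageobjective(subproblem, storage.out)
end
```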
Potential alternative options:
I could not find anything helpful in the documentation.
Thanks,
Josien