Error when using nonlinear function of state.in #735
Comments
Just adding more context: From what I understand, we want SDDP to neglect the influence of its states (
Gurobi does not support nonlinear constraints. Can you provide a reproducible example of your real problem? Also note that SDDP requires that the value function is convex with respect to the incoming state variables. Have you verified that this is true for your stochastic process? Instead of using a variable, you can update the constraint data directly with set_normalized_rhs and set_normalized_coefficient:
using SDDP, Gurobi, Test
function fast_hydro_thermal()
    model = SDDP.LinearPolicyGraph(;
        stages = 2,
        upper_bound = 0.0,
        sense = :Max,
        optimizer = Gurobi.Optimizer,
    ) do sp, t
        @variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)
        @variables(sp, begin
            y >= 0
            p >= 0
        end)
        @constraints(sp, begin
            p + y >= 6
            # Right-hand side of 0.0 should be ξ
            c_rhsξ, x.out - x.in + y <= 0.0
            # 1.0 should be 1 / exp(ξ)
            c_expξ, 1.0 * x.in >= 0.0
        end)
        @stageobjective(sp, -5 * p)
        RAINFALL = (t == 1 ? [6] : [2, 10])
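        # SDDP.jl calls the closure below with the sampled ω before the
        # subproblem is solved, overwriting the placeholder RHS and coefficient
        # with the realized values for that node.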
        SDDP.parameterize(sp, RAINFALL) do ω
            set_normalized_rhs(c_rhsξ, ω)
            set_normalized_coefficient(c_expξ, x.in, 1 / exp(ω))
            return
        end
    end
    SDDP.train(model)
    return
end

fast_hydro_thermal()
I'm not sure I understand this part. Are you working together?
I was just helping him out. He wants to evaluate another type of time inconsistency in the hydrothermal economic dispatch.
Yeah, this is not the case, because of the non-linear dependency of the realized inflow on the previous inflow (the inflow comes from a non-linear autoregressive model). Nevertheless, we want to neglect this dependency when calculating the cuts. I will leave it to @arthur-brigatto to provide an MWC for the true problem.
Yes, the stochastic process is non-convex in the state variable and time-dependent. This is not compatible with the SDDP requirements, but a similar approach is actually used in the official Brazilian models. I wanted to test the approach in SDDP.jl. I haven't been able to come up with a working case, so I will try to describe what I am trying to achieve. Usually, we would model the stochastic process inside each subproblem as follows:
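(The snippet that followed is not shown; a minimal sketch of that usual pattern, assuming a linear AR(1) inflow with coefficient ϕ, a noise variable ξ, a stagewise-independent sample set Ω, and illustrative bounds and initial value, would be:)
@variable(sp, inflow >= 0, SDDP.State, initial_value = 0.0)
@variable(sp, ξ)
@constraint(sp, inflow.out == ϕ * inflow.in + ξ)
SDDP.parameterize(sp, Ω) do ω
    # ω is an ordinary sample from the noise distribution, independent of the state.
    return JuMP.fix(ξ, ω)
end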
However, in my case ξ is not simply a regular sample from a given distribution. It is actually a function of inflow.in and of a random sample from a normal distribution. Hence, I have to construct the noise after sampling from a normal distribution and observing noise.in. So one way would be to evaluate ξ in each subproblem using decision variables and constraints, similarly to what I described in my previous comment. But this leads to non-convexities, and I am now convinced that it will not work very well. Or I could try something like this:
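(The code block that followed is missing; judging from the description below, the intent is roughly the following sketch, where non_convex_function, last_inflow, and c_inflow are placeholders for the poster's own function, for however the previously realized state would be obtained, and for the inflow balance constraint:)
SDDP.parameterize(sp, Ω) do ω
    # Build ξ outside the optimization as a plain Float64, from the last
    # realized inflow and the sampled normal noise ω, and push it into the
    # subproblem as data rather than as a decision variable.
    ξ = non_convex_function(last_inflow, ω)
    return set_normalized_rhs(c_inflow, ξ)
end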
Where non_convex_function() generates ξ from the last evaluated value of the inflow state, preferably as a Float64. In this approach, the non-convexities and dependencies would be 'hidden' from SDDP, which is acceptable for my application. However, I believe that SDDP.jl does not rebuild each subproblem during the convergence procedure, and therefore this approach would also not work. Is this correct? And if so, is there a way around it?
There is no way around this. You will need to do something like Andrew's paper, where you train a policy with a convex approximation of the inflow dynamics, and then simulate with the non-convex dynamics. You could start with:
SDDP.parameterize(sp, Ω) do ω
    return JuMP.fix(ξ, ω)
end
@constraint(sp, inflow.out == ϕ * inflow.in + ξ)
and assume that the inflows are stagewise independent. But you could simulate the non-convex dynamics by constructing a set of simulations a priori and using:
SDDP.train(model; sampling_scheme = my_sampling_scheme)
Alternatively, if the inflow is univariate, then you could approximate the inflows by a Markov chain: https://sddp.dev/stable/guides/create_a_general_policy_graph/#Creating-a-Markovian-graph-automatically
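(A sketch of the a-priori simulation idea, assuming SDDP.Historical is the mechanism used to replay pre-computed trajectories; the trajectory numbers here are made up:)
# Simulate the non-convex inflow dynamics outside SDDP.jl, then replay those
# trajectories as the forward-pass scenarios during training.
scenarios = [
    [(1, 6.0), (2, 2.0)],    # one trajectory, as (stage, noise) pairs
    [(1, 6.0), (2, 10.0)],
]
my_sampling_scheme = SDDP.Historical(scenarios)
SDDP.train(model; sampling_scheme = my_sampling_scheme)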
Is there a way to fetch the value of
No. Assuming that your inflows are just a function of
I have been talking to the folks at PSR about this. We should modify:
Great idea! I will have a chat with @arthur-brigatto and submit a PR as soon as possible.
@arthur-brigatto @andrewrosemberg not sure if you've already started working on the implementation, but I've been thinking about a similar problem and would like to chat with you to understand how you're planning to implement this (and possibly contribute).
@andrewrosemberg @grochacunha just added a pull request for this. Have a look. I implemented it as suggested by @odow, and now it would be possible to create a custom sampling scheme that is conditional on the state in the backward step. For my specific application, I used:
This worked fine in my case.
I am trying to solve a problem that depends on a nonlinear function of both the incoming state (which should be a parameter) and the uncertainty.
However, since both the uncertainty and the incoming state are treated as variables in JuMP, I am getting the following error:
Here is a minimum working case based on one of the SDDP.jl documentation examples, where we demonstrate the error on a trivial constraint. Is there a way around this?
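(The MWE code itself is not reproduced here; the failing pattern, per the discussion above, is a constraint in which a nonlinear function of the noise multiplies the incoming state variable, along the lines of this hypothetical sketch:)
@variable(sp, ξ)
# With ξ a decision variable, 1 / exp(ξ) makes the constraint nonlinear, and
# the Gurobi interface rejects it as an unsupported constraint type.
@constraint(sp, (1 / exp(ξ)) * x.in >= 0.0)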
For us, this problem appears when we are implementing a specific stochastic process for modeling the inflow inside the SDDP model. Here are the equations for our case.
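(The equations themselves are not reproduced here; from the surrounding comments, their general shape is a non-linear autoregressive inflow model, something like a_t = f(a_{t-1}) + g(a_{t-1}) * ϵ_t with ϵ_t ~ N(0, 1) and f, g non-linear, so the realized noise term depends on the previous state.)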
Another way around would be to treat ϵ[i] as a parameter that is evaluated before solving the subproblem.