
Markov graph + Risk aversion #722

Closed

Thiago-NovaesB opened this issue Dec 14, 2023 · 5 comments

Comments


Thiago-NovaesB commented Dec 14, 2023

I'm using Markovian policy graphs, and the results don't make sense to me when I use the worst-case risk measure.

I would like to better understand how SDDP.jl handles these two features together. What counts as the worst case? Is it the worst-case scenario over all nodes in layer t+1? Or does each node in the next layer contribute a cut for its own worst case, relative to the current forward/backward pass?

@odow
Copy link
Owner

odow commented Dec 15, 2023

Is it the worst-case scenario over all nodes in layer t+1?

It's this, which is also the worst-case scenario of the entire tree. I don't find it a very useful risk measure, because it is too cautious.
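
To make that concrete, here is a minimal sketch of the idea in plain Julia (a hypothetical helper, not SDDP.jl's internals): a worst-case risk measure moves all of the probability weight onto the single worst outcome.

# Hypothetical sketch, not SDDP.jl internals: re-weight the scenario
# probabilities so that all of the weight sits on the worst outcome.
function worst_case_weights(objectives::Vector{Float64}; sense::Symbol = :Min)
    # Worst outcome: largest cost when minimizing, smallest value when maximizing.
    i = sense == :Min ? argmax(objectives) : argmin(objectives)
    p = zeros(length(objectives))
    p[i] = 1.0
    return p
end

worst_case_weights([1.0, 5.0, 3.0])                # -> [0.0, 1.0, 0.0]
worst_case_weights([1.0, 5.0, 3.0]; sense = :Max)  # -> [1.0, 0.0, 0.0]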

Thiago-NovaesB commented

This is exactly what I need! Maybe I'm misunderstanding something, so I prepared this small example to illustrate:

using SDDP, HiGHS

model = SDDP.MarkovianPolicyGraph(
    transition_matrices = Array{Float64,2}[
        [1.0]',
        [0.5 0.5],
        [0.5 0.5; 0.5 0.5],
    ],
    sense = :Max,
    upper_bound = 10000.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, node
    t, markov_state = node
    @variable(subproblem, 0 <= volume <= 1.0, SDDP.State, initial_value = 1.0)
    @variable(subproblem, hydro_generation >= 0)
    # Water balance: whatever is generated is released from the reservoir.
    @constraint(subproblem, volume.out == volume.in - hydro_generation)
    if markov_state == 1
        # This Markov state gets slightly worse over time.
        @stageobjective(subproblem, (1.5 - 0.1 * t) * hydro_generation)
    else
        # This Markov state gets much better over time.
        @stageobjective(subproblem, (2.0 + 2.5^t) * hydro_generation)
    end
end

SDDP.train(model, risk_measure = SDDP.EAVaR(lambda = 0.0, beta = 1.0))

simul = SDDP.simulate(model, 100, [:volume])

for i in 1:100
    println(simul[i][2][:volume])
end

In this problem, I just want to empty the reservoir optimally. There are two Markov states: one gets a little worse over time, and the other gets much better over time.
I expected the reservoir to be emptied in the first stage, since the worst case of the second stage is worse than the first stage, but the policy saves the resource for the last stage.
Could you help me understand this result?

odow commented Dec 15, 2023

Your risk measure is equivalent to the expectation: lambda = 0 puts all of the weight on the AV@R component, and AV@R with beta = 1 is just the average over every outcome:

help?> SDDP.EAVaR(lambda=0.0, beta=1.0)
  EAVaR(;lambda=1.0, beta=1.0)

  A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called
  Conditional Value @ Risk).

      λ * E[x] + (1 - λ) * AV@R(β)[x]

  Keyword Arguments
  –––––––––––––––––––

    •  lambda: Convex weight on the expectation ((1 - lambda) weight is put on the AV@R
       component). Increasing values of lambda are less risk averse (more weight on expectation).

    •  beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta
       are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

Use this instead:

help?> SDDP.WorstCase()
  WorstCase()

  The worst-case risk measure. Places all of the probability weight on the worst outcome.
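
For example, with your model (picking iteration_limit = 50 as an arbitrary stopping rule):

SDDP.train(model; risk_measure = SDDP.WorstCase(), iteration_limit = 50)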


Thiago-NovaesB commented Dec 16, 2023

I chose the beta value wrongly: I wanted beta = 0 instead of beta = 1. With that fix, I'm getting the results I expected. Thank you!
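
For reference, the corrected training call (with lambda = 0, the EAVaR measure with beta = 0 collapses to the worst case):

SDDP.train(model, risk_measure = SDDP.EAVaR(lambda = 0.0, beta = 0.0))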

odow commented Dec 16, 2023

No problem
