This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Example of a simple (very routine) problem in economics #131
Comments
Hi there, thanks for reaching out. The principal limitation of InfiniteOpt (in its current form) is support for non-quadratic nonlinear functions of decision variables (e.g., log(c(t))). This is due to our dependence on JuMP, which allows general nonlinear functions/expressions when modeling directly in JuMP, but that nonlinear interface is not extensible for packages like ours. This limitation will be lifted once jump-dev/MathOptInterface.jl#846 is addressed. There are plans to make a temporary patch to address this problem (#16). I hope to get to it sometime this summer, but we'll see what my schedule allows. In the meantime, it might be possible to use a change of variables to adapt your formulation. Otherwise, you can transcribe the problem manually into a traditional NLP form that can be used by JuMP.
something like:
using JuMP, Ipopt
Δt = 0.01
T = 10
time = 0.0:Δt:T
model = Model(Ipopt.Optimizer)
@variable(model, c[time] >= 0.0001) # To avoid log(0).
@variable(model, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(model, Max, sum(exp(-0.01 * t) * log(c[t]) for t in time))
@constraint(
model,
[i = 2:length(T)],
B[t] == B[t - 1] + Δt * (0.05 * B[t - 1] - c[t]),
)
optimize!(model)
@pulsipher
using InfiniteOpt, JuMP, Ipopt, Plots
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 10_001)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99) #m, c(t) >= 0
@objective(m, Max, integral( (-1)*(c - 100)^2, t) )
@BDconstraint(m, (t == 0), B == 100) # B[t=0] = 100
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, @deriv(B, t) == 0.01*B - c)
m
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
#c, B, value.(c), value.(B)
c_opt, B_opt = value.(c), value.(B)
u_ts = supports.(c)
u_ts =[u_ts[j][1] for j in eachindex(u_ts)]
plot(u_ts, B_opt, lab = "B: wealth balance")
plot!(u_ts, c_opt, lab = "c: consumption")
Here are the control/state vars from your package:
@odow thanks!
using JuMP, Ipopt, Plots
Δt = 0.01; T = 10.0; time = 0.0:Δt:T;
m = Model(Ipopt.Optimizer)
@variable(m, 0.0001 <= c[time] <= 99.0)
@variable(m, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(m, Max, sum((-1.0)*(c[t] - 100.0)^2.0 for t in time) )
@constraint(m,
[i = 2:length(T)],
B[t] == B[t - 1] + Δt * (0.01 * B[t - 1] - c[t])
)
optimize!(m)
objective_value(m)
c_opt, B_opt = value.(c), value.(B)
plot(time, B_opt.data, lab = "B: wealth balance")
plot!(time, c_opt.data, lab = "c: consumption")
Do you have any idea what it might be?
The perils of writing code without checking that it runs:
Δt = 0.01; T = 10.0; time = 0.0:Δt:T;
m = Model(Ipopt.Optimizer)
@variable(m, 0.0001 <= c[time] <= 99.0)
@variable(m, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(m, Max, sum((-1.0)*(c[t] - 100.0)^2.0 for t in time) )
for i in 2:length(time)
t_1, t = time[i - 1], time[i]
@constraint(m, B[t] == B[t_1] + Δt * (0.01 * B[t_1] - c[t]))
end
optimize!(m)
@odow works beautifully. Thank you!
@azev77 That's a cool example, nice job setting it up. Yes, you can add in the discount factor since it is only a function of the infinite parameter. Option 1: use a parameter function and embed it in the expression:
using InfiniteOpt, JuMP, Ipopt, Plots
discount(t) = exp(-0.01 * t) # this will work as a function for our parameter function
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
pfunc = parameter_function(discount, t)
@objective(m, Min, integral(pfunc * (c - 100)^2, t))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01*B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
EDIT: See correction in the comments below.
Option 2: use the
using InfiniteOpt, JuMP, Ipopt, Plots
discount(t) = exp(-0.01 * t) # this will work as a weight function in the integral
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
@objective(m, Min, integral((c - 100)^2, t, weight_func = discount))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01*B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
Oh, my bad, that temporarily introduces an expression with three variable references multiplied together. This can be alleviated by introducing a placeholder variable:
using InfiniteOpt, JuMP, Ipopt, Plots
discount(t) = exp(-0.01 * t) # this will work as a function for our parameter function
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
@infinite_variable(m, placeholder(t))
pfunc = parameter_function(discount, t)
@constraint(m, placeholder == (c - 100)^2)
@objective(m, Min, integral(pfunc * placeholder, t))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01*B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
Alternatively, option 2 provides a more compact avenue.
@odow is there a way to do this in JuMP.jl with:
No. @pulsipher has done an excellent job adding these extensions to InfiniteOpt instead...
@pulsipher can I help you add this econ example to the docs? |
@azev77 Ya, that would be great. I think this would make a nice addition to the Examples page in the docs. Please refer to the contribution guide (https://github.com/pulsipher/InfiniteOpt.jl/blob/master/CONTRIBUTING.md) for info on how to implement this and make a pull request. You can follow up here with any questions you have.
Great.
Are these possible yet?
Unbounded infinite domains can be defined directly. In this case, the only integral evaluation method we support is Gauss-Laguerre quadrature (see the developmental docs https://pulsipher.github.io/InfiniteOpt.jl/dev/guide/measure/#Evaluation-Methods). However, this does not work properly on the current release; this will be fixed with the next release coming in about 2 weeks. I am not quite sure how to enforce the limit constraint; perhaps we could try it via a bounded constraint with a sufficiently large t. My experience with infinite time horizon problems is limited, but I can look further into how to properly model it when I have some free time later this week.
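For context, Gauss-Laguerre quadrature approximates integrals of the form ∫₀^∞ exp(-t) f(t) dt by a weighted sum of f at a few fixed nodes. Here is a minimal self-contained Julia sketch using the hand-coded 2-point rule (an illustration only, not InfiniteOpt API):

```julia
# 2-point Gauss-Laguerre rule: nodes 2 ∓ √2 with weights (2 ± √2)/4.
# Approximates ∫₀^∞ exp(-t) f(t) dt ≈ Σᵢ wᵢ f(tᵢ); exact for deg(f) ≤ 3.
nodes   = [2 - sqrt(2), 2 + sqrt(2)]
weights = [(2 + sqrt(2)) / 4, (2 - sqrt(2)) / 4]

gauss_laguerre(f) = sum(w * f(t) for (w, t) in zip(weights, nodes))

# ∫₀^∞ exp(-t) * t dt = Γ(2) = 1; the 2-point rule reproduces it exactly.
@show gauss_laguerre(t -> t)          # 1.0 (up to roundoff)
# A discounted quadratic like the objective: ∫₀^∞ exp(-t) * (t - 1)^2 dt = 1.
@show gauss_laguerre(t -> (t - 1)^2)  # 1.0 (up to roundoff)
```

An n-point rule is exact for polynomial integrands of degree up to 2n - 1, which is why a modest number of nodes can already capture smooth discounted objectives over [0, ∞).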
@azev77 Sorry that it's been a little while, but I finally finished releasing the new version and am now taking a look back at this. First, the new version has some breaking syntax changes for better long-term stability, so here is the updated finite horizon case (be sure to update to the latest version first):
using InfiniteOpt, Ipopt, Plots
discount(t) = exp(-0.01 * t) # this will work as an additional weight function in the integral
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@variable(m, B, Infinite(t))
@variable(m, 0 <= c <= 99, Infinite(t))
@objective(m, Min, integral((c - 100)^2, t, weight_func = discount))
@constraint(m, B == 100, DomainRestrictions(t => 0))
@constraint(m, B == 0, DomainRestrictions(t => 10))
@constraint(m, c1, deriv(B, t) == 0.01*B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
Now we can try to implement the infinite horizon case:
using InfiniteOpt, Ipopt, Plots
int_discount(t) = exp(0.99 * t) # additional weight function for the integral (will be multiplied by e^-t by Gauss Laguerre)
discount(t) = exp(-0.01 * t) # weight function for parameter function in the limit constraint
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, Inf])
@variable(m, B, Infinite(t))
@variable(m, 0 <= c <= 99, Infinite(t))
@parameter_function(m, dis == discount(t))
@objective(m, Min, integral((c - 100)^2, t, eval_method = GaussLaguerre(), num_nodes = 100, weight_func = int_discount))
@constraint(m, B == 100, DomainRestrictions(t => 0))
@constraint(m, dis * B == 0, DomainRestrictions(t => supports(t)[end])) # not sure what u'(c_t) is...
@constraint(m, c1, deriv(B, t) == 0.01*B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m) #
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
Let me know if you have any further questions. More information about the various integral evaluation methods is provided here. As for adding an example to the docs, the documentation now details how to do this here.
Hi:
Here is the first figure (created in my example): Two issues:
To Do:
Hi @azev77, thank you for your contribution; please refer to #141 to finish that up.
This is because the
I am not exactly sure where
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 1])
@variable(m, y >= 0, Infinite(t))
@expression(m, integrand, (1 - y)^2) # make the integrand function
@objective(m, Min, integral(integrand, t)) # make the objective function
optimize!(m)
opt_obj = objective_value(m) # the optimized scalar value of the integral
opt_integrand = value(integrand) # the values of the integrand function
Note that in this case you won't want to use the
This would be interesting, I don't have a lot of experience with the solution methods for these types of problems and I am sure we can make additions to enhance InfiniteOpt's capabilities as we better learn what else would be helpful.
Yep, I look forward to that day!
That should be straightforward to do.
Yes, this should be possible depending on exactly how you want to model the stochastic elements. Stochastic optimization is my primary expertise.
I see, that makes sense. Thank you for the clarification. That should be fine as it is then.
@pulsipher the table below summarizes the 4 main approaches to modeling in economics:
We now have a simple example in the docs for solving
I can't find a
using InfiniteOpt, Ipopt, Plots
ρ = 0.020 # discount rate
k = 90.0 # utility bliss point
T = 10.0 # life horizon
r = 0.05 # interest rate
B0 = 100.0 # endowment
u(c; k=k) = -(c - k)^2 # utility function
#discount(t; ρ=ρ) = exp(-ρ*t) # discount function
opt = Ipopt.Optimizer # desired solver
m = InfiniteModel(opt)
@variable(m, B[0.0:1.0:11.0]) ## state variables B[0.0] B[1.0] ⋯ B[11.0]
@variable(m, c[0.0:1.0:10.0]) ## control variables c[0.0] c[1.0] ⋯ c[10.0]
@objective(m, Max, sum(u(c[i]) for i ∈ 0.0:1.0:10.0) )
@constraint(m, [i in 0.0:1.0:T], B[i+1] == B[i] + r*B[i] - c[i] )
@constraint(m, B[0.0] == B0)
@constraint(m, B[11.0] == 0)
julia> optimize!(m)
ERROR: ArgumentError: reducing over an empty collection is not allowed
Stacktrace:
[1] _empty_reduce_error()
@ Base .\reduce.jl:299
[2] mapreduce_empty(f::Function, op::Base.BottomRF{typeof(Base.mul_prod)}, T::Type)
@ Base .\reduce.jl:342
[3] reduce_empty(op::Base.MappingRF{typeof(length), Base.BottomRF{typeof(Base.mul_prod)}}, #unused#::Type{Union{}})
@ Base .\reduce.jl:329
[4] reduce_empty_iter
@ .\reduce.jl:355 [inlined]
[5] reduce_empty_iter
@ .\reduce.jl:354 [inlined]
[6] foldl_impl(op::Base.MappingRF{typeof(length), Base.BottomRF{typeof(Base.mul_prod)}}, nt::Base._InitialValue, itr::Tuple{})
@ Base .\reduce.jl:49
[7] mapfoldl_impl(f::typeof(length), op::typeof(Base.mul_prod), nt::Base._InitialValue, itr::Tuple{})
@ Base .\reduce.jl:44
[8] mapfoldl(f::Function, op::Function, itr::Tuple{}; init::Base._InitialValue)
@ Base .\reduce.jl:160
[9] mapfoldl
@ .\reduce.jl:160 [inlined]
[10] #mapreduce#218
@ .\reduce.jl:287 [inlined]
[11] mapreduce
@ .\reduce.jl:287 [inlined]
[12] #prod#224
@ .\reduce.jl:555 [inlined]
[13] prod(f::Function, a::Tuple{})
@ Base .\reduce.jl:555
[14] build_transcription_model!(trans_model::Model, inf_model::InfiniteModel; check_support_dims::Bool)
@ InfiniteOpt.TranscriptionOpt C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\TranscriptionOpt\transcribe.jl:875
[15] #build_optimizer_model!#95
@ C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\TranscriptionOpt\optimize.jl:34 [inlined]
[16] build_optimizer_model!
@ C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\TranscriptionOpt\optimize.jl:32 [inlined]
[17] build_optimizer_model!(model::InfiniteModel; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ InfiniteOpt C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\optimize.jl:525
[18] build_optimizer_model!
@ C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\optimize.jl:524 [inlined]
[19] #optimize!#386
@ C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\optimize.jl:921 [inlined]
[20] optimize!(model::InfiniteModel)
@ InfiniteOpt C:\Users\azevelev\.julia\packages\InfiniteOpt\8U5IK\src\optimize.jl:920
[21] top-level scope
@ REPL[107]:1
Do you know how to solve this?
PS: I can solve the discrete time/deterministic version directly in JuMP.jl by hacking @odow's code:
using JuMP, Ipopt, Plots # InfiniteOpt,
ρ = 0.020 # discount rate
k = 100.0 # utility bliss point
T = 10.0 # life horizon
r = 0.05 # interest rate
B0 = 100.0 # endowment
time_c = 0.0:1.0:T
time_B = 0.0:1.0:T+1
u(c; k=k) = -(c - k)^2.0 # utility function
d(t; ρ=ρ) = exp(-ρ*t) # discount function
m = Model(Ipopt.Optimizer)
register(m, :u, 1, u; autodiff = true)
register(m, :d, 1, d; autodiff = true)
@variable(m, c[time_c]) #@variable(m, 1e-4 <= c[0.0:1.0:10.0] <= 99.0)
@variable(m, B[time_B])
fix(B[0.0], B0)
fix(B[T+1], 0.0)
@NLobjective(m, Max, sum(d(t)*u(c[t]) for t in time_c) )
#@NLobjective(m, Max, sum(exp(-ρ*t)*(-1.0)*(c[t]-k)^2.0 for t in 0.0:1.0:10.0) )
for i in 2:length(time_B)
t_1, t = time_B[i - 1], time_B[i]
@constraint(m, B[t] == B[t_1] + 1.0 * (r * B[t_1] - c[t_1]))
end
optimize!(m)
objective_value(m)
c_opt, B_opt = value.(c), value.(B)
scatter(time_B, B_opt.data, lab = "B: wealth balance");
scatter!(time_c, c_opt.data, lab = "c: consumption")
@azev77 Nice write-up. InfiniteOpt.jl is intended to model infinite-dimensional optimization problems such that we decouple the infinite (e.g., continuous) model from the discrete model. Hence, InfiniteOpt is not intended for modeling finite models (e.g., discrete time models directly); these should instead be modeled in JuMP as you did above, which will use the same syntax since InfiniteOpt is a JuMP extension (basically if you don't use
Note that in the JuMP above, registering
Note that the discrete forms above implicitly approximate the derivatives via forward difference and approximate the integrals as weighted sums. InfiniteOpt allows one to lift such models in their continuous form (thus decoupling them from these inherent approximations) and promotes the use of various approximation strategies as desired (implementing them automatically).
With regard to the stochastic model, it will be interesting to see how best to model it. I'll have to digest it a little more to think of how best to tackle it. Based on the discrete case, it seems to me that an interesting extension would be to use a Gaussian random field to model the Gaussian uncertainty in continuous time.
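To make the forward-difference point concrete, here is a small stand-alone Julia sketch (independent of JuMP/InfiniteOpt) showing that the explicit Euler update behind the discrete budget constraint converges to the continuous solution of dB/dt = r*B; with consumption switched off, the exact answer is B0*exp(r*T):

```julia
# Explicit (forward) Euler for dB/dt = r*B with B(0) = B0 -- the same
# update B[t] = B[t-1] + Δt*(r*B[t-1]) used in the discretized budget
# constraint, here with consumption set to zero so the answer is known.
function euler_wealth(r, B0, T, Δt)
    B = B0
    for _ in 1:round(Int, T / Δt)
        B += Δt * r * B
    end
    return B
end

r, B0, T = 0.05, 100.0, 10.0
exact  = B0 * exp(r * T)               # ≈ 164.872
coarse = euler_wealth(r, B0, T, 1.0)   # 10 steps
fine   = euler_wealth(r, B0, T, 1e-3)  # 10_000 steps
@show exact coarse fine                # the error shrinks roughly linearly in Δt
```

This is exactly the sense in which the discrete models "bake in" an approximation that InfiniteOpt lets you choose separately from the model statement.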
Directly modeling discrete time-space models with infinite horizons is probably beyond the scope of InfiniteOpt. Our modeling abstraction focuses on the use of infinite parameters, which are defined over infinite (e.g., continuous) domains, that parameterize the decision variables. In other words, the focus is on optimization problems that use infinite variables (i.e., decision functions whose input domain is parameterized by infinite parameters). However, we do implicitly model discrete time problems when solving the continuous time problems using
If you are aware of techniques people use to solve discrete/continuous infinite horizon problems, please let me know. We currently only provide Gauss-Hermite and Gauss-Laguerre quadrature as solution techniques, but I am sure there are more elegant things we could do.
That's a long conversation. I'll add some stuff here later.
Perhaps the simplest way to solve the 4 problems is to formulate an LQ/LQG version:
@azev77 Thanks for the info. I'll take a closer look at this once my schedule frees up some time.
@azev77 First off, thank you for all your interest, kind words, and contributions thus far. I had some time to take a deeper dive and this is what I think I have found:
I am currently working on incorporating random field theory into this abstraction, so I will likely resolve #95 later this year. For more information on the abstraction and background behind InfiniteOpt.jl, please see our paper: https://arxiv.org/abs/2106.12689. As for writing an extension and/or addition for solving infinite-horizon problems, I don't think I will have time in the near future to lead such an effort. However, should anyone else want to lead that effort, I would be happy to assist/support them.
That is a good question and an understandable one, since InfiniteOpt doesn't readily abstract stochastic ODEs in its current state. The short answer is that we cannot directly incorporate traditional SODEs, but we are actively working on adding this capability. However, there is a workaround for this in the meantime. I will detail the context of this functionality in InfiniteOpt and the potential workaround below (please excuse the length).
Plans for Stochastic Modeling over Continuous Spaces: Naturally, we plan to further develop InfiniteOpt to accommodate more general stochastic models such as yours. The difficulty is that stochastic considerations are defined, abstracted, and modeled very differently across various disciplines. Hence, coming up with an intuitive modeling abstraction for general infinite-dimensional optimization problems is nontrivial. For this reason, I am not aware of any constrained optimization modeling interface that supports stochastic modeling over other domains (e.g., space and time). However, we are investigating the overlap of random field (e.g., Gaussian process) and stochastic ODE/PDE approaches to hopefully come up with such an interface in the relatively near future. The current working idea is to define modeling constructs called infinite parameter functions (i.e., an infinite parameter that is a function of other infinite parameters) that can then be embedded into variables and constraints in like manner to infinite parameters. Thus, for a stochastic temporal parameter (e.g., y(t) in your problem) the syntax would be something like:
@infinite_parameter(model, t in [0, T], num_supports = 100) # normal time parameter
@infinite_parameter(model, y(t) ~ MyBrownianDist, num_supports = 100) # infinite parameter function
where
# Stochastic time decision functions whose values will depend on t and y(t)
# We explicitly put y(t) to distinguish these from deterministic time functions
@variable(model, B, Infinite(t, y))
@variable(model, c, Infinite(t, y))
# Define objective
@objective(model, Max, 𝔼(∫(exp(-ρ * t) * u, t), y)) # this features the new NLP syntax that is coming soon
# Make the constraints
@constraint(model, ∂(B, t) == r * B + y - c)
@constraint(model, B(0) == B0)
@constraint(model, B(T) == 0)
We are very much open to syntax/modeling suggestions. Note my current focus is on finishing general nonlinear support.
Current Modeling Workaround: First, we need to determine
With this, we can now model our problem. We can write:
# Define the stochastic income info
n = 100 # define number of samples wanted
y0 = 10 # TODO replace with actual value
μ = 0; σ= 0.1 # TODO replace with actual values
y_analytical(t, W) = y0 * exp((μ - σ^2/2) * t + σ * W) # TODO update with desired W behavior
y_sampled = [(t) -> y_analytical(t, rand()) for 1:n]
# Define parameters
ρ = 0.020 # discount rate
k = 90.0 # utility bliss point
T = 10.0 # life horizon
r = 0.05 # interest rate
B0 = 100.0 # endowment
u(c) = -(c - k)^2 # utility function
discount(t) = exp(-ρ*t) # discount function
# Define Model
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, T], num_supports = 100)
@parameter_function(model, y[i = 1:n] == y_sampled[i](t))
@variable(model, B[1:n], Infinite(t)) # have a variable for each sample of y
@variable(model, 0 <= c[1:n] <= 99, Infinite(t))
@objective(model, Min, 1/n * sum(∫(u(c[i]), t, weight_func = discount) for i in 1:n)) # use sum for expectation over y
@constraint(model, [i = 1:n], B[i](0) == 100)
@constraint(model, [i = 1:n], B[i](T) == 0)
@constraint(model, [i = 1:n], ∂(B[i], t) == r * B[i] + y[i] - c[i])
That should do it.
Reminder to self:
y_sampled = [(t) -> y_analytical(t, randn()) for i = 1:n]
In the objective, replace Min with Max.
Also, for simplicity in the example I just let
In the example in the docs, we use Max, & u=-(c-k)^2 https://pulsipher.github.io/InfiniteOpt.jl/stable/examples/Optimal%20Control/consumption_savings/ |
Yep, I was mistaken. Thanks for the clarification. Also, an easier way to handle the geometric Brownian motion and generate the samples would be to use DifferentialEquations.jl with their noise processes: https://diffeq.sciml.ai/stable/features/noise_process/#noise_process
For this example, we would have the following (Edit):
using InfiniteOpt, Ipopt, DifferentialEquations
# Define parameters
ρ = 0.020 # discount rate
k = 90.0 # utility bliss point
T = 10.0 # life horizon
r = 0.05 # interest rate
dt = 0.1 # time step size TODO replace as wanted
ts = collect(0:dt:T) # time support points
B0 = 100.0 # endowment
u(c) = -(c - k)^2 # utility function
discount(t) = exp(-ρ*t) # discount function
# Define the stochastic income info
n = 100 # define number of samples wanted TODO replace with actual number wanted
y0 = 10.0 # TODO replace with actual value
μ = 0; σ= 0.1 # TODO replace with actual values
y_model = GeometricBrownianMotionProcess(μ, σ, 0.0, y0, 1.0) # TODO replace Brownian params as wanted
y_sim = NoiseProblem(y_model, (0.0, T))
y_samples = Vector{Function}(undef, n)
for i in eachindex(y_samples)
y_scenario = DifferentialEquations.solve(y_sim; dt = dt)
y_samples[i] = t -> y_scenario(t)[1]
end
# Define Model
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, T], supports = ts)
@parameter_function(model, y[i = 1:n] == y_samples[i](t))
@variable(model, B[1:n], Infinite(t)) # have a variable for each sample of y
@variable(model, 0 <= c[1:n] <= 99, Infinite(t))
@objective(model, Max, 1/n * sum(∫(u(c[i]), t, weight_func = discount) for i in 1:n)) # use sum for expectation over y
@constraint(model, [i = 1:n], B[i](0) == 100)
@constraint(model, [i = 1:n], B[i](T) == 0)
@constraint(model, [i = 1:n], ∂(B[i], t) == r * B[i] + y[i] - c[i])
# Solve the model and get the results
optimize!(model)
opt_obj = objective_value(model)
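If one wants to avoid the DifferentialEquations.jl dependency, geometric Brownian motion paths can also be drawn directly from the exact log-normal transition density. A sketch follows; the parameter values mirror the illustrative placeholders above and are not calibrated:

```julia
using Random

# Exact-in-distribution GBM sampling on a time grid:
#   y_{k+1} = y_k * exp((μ - σ²/2)*Δt + σ*√Δt*Z),  Z ~ N(0, 1).
function sample_gbm(rng, y0, μ, σ, ts)
    y = similar(ts)
    y[1] = y0
    for k in 2:length(ts)
        Δt = ts[k] - ts[k-1]
        y[k] = y[k-1] * exp((μ - σ^2 / 2) * Δt + σ * sqrt(Δt) * randn(rng))
    end
    return y
end

rng = MersenneTwister(42)
ts = collect(0.0:0.1:10.0)
paths = [sample_gbm(rng, 10.0, 0.0, 0.1, ts) for _ in 1:1000]

# With μ = 0 the process is a martingale, so E[y(T)] = y0 = 10.
mean_at_T = sum(p[end] for p in paths) / length(paths)
@show mean_at_T  # close to 10, up to Monte Carlo noise
```

Each sampled path, wrapped in a `t -> ...` interpolating closure, could then be fed to @parameter_function in the same way as the DifferentialEquations-based samples above.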
WOW! Amazing! Check out the pics (note how smooth consumption is relative to income/wealth, as it should be!). I'll compare to closed forms later... This is great because economists usually have to solve non-linear PDEs, like Ben Moll & @matthieugomez's EconPDEs.jl. A few comments.
I am glad it worked out nicely!
It would make sense that increasing the noise would make the ODEs harder to solve. You can try adjusting the sample/support amounts to see if that helps. For increased speed, I also noticed that this problem is fully decoupled, which means you could instead solve a separate model for each y sample and then aggregate them together (these subproblems could even be solved in parallel). This means that the model portion of the above code can be rewritten:
# Set storage arrays
Bs = zeros(n, length(ts))
cs = zeros(n, length(ts))
objs = zeros(n)
# Solve each subproblem
for i in eachindex(y_samples)
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, T], supports = ts)
@parameter_function(model, y == y_samples[i](t))
@variable(model, B, Infinite(t))
@variable(model, 0 <= c <= 99, Infinite(t))
@objective(model, Max, ∫(u(c), t, weight_func = discount))
@constraint(model, B(0) == 100)
@constraint(model, B(T) == 0)
@constraint(model, ∂(B, t) == r * B + y - c)
optimize!(model)
Bs[i, :] = value(B)
cs[i, :] = value(c)
objs[i] = objective_value(model)
end
# Compute the overall objective
expect_obj = 1 / n * sum(objs)
The framework above is quite general where
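Since the per-sample subproblems share no variables, the loop above is embarrassingly parallel. Here is a hedged sketch of the pattern only, where `solve_scenario` is a hypothetical placeholder standing in for building one InfiniteModel, calling optimize!, and returning its objective value:

```julia
# Hypothetical stand-in for solving the i-th scenario's InfiniteModel;
# it returns a made-up objective value purely to illustrate the pattern.
solve_scenario(i) = -float(i)^2

n = 100
objs = zeros(n)
Threads.@threads for i in 1:n  # scenarios are independent, so thread freely
    objs[i] = solve_scenario(i)
end

# Sample-average approximation of the expected objective.
expect_obj = sum(objs) / n
@show expect_obj
```

With a real `solve_scenario`, one would also collect `value(B)` and `value(c)` per scenario into the `Bs`/`cs` arrays, just as the serial loop does.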
Can you explain why it should be? If you don't know the future realization of the process, how do you know the intercept and slope of the consumption to end the time period with 0 wealth? This is only possible if you have perfect foresight of the process. Here's what I get with SDDP.jl: Your consumption is dependent on your wealth and current income, so very much not smooth.
@odow Optimal consumption should itself be a stochastic process, a function of stochastic income/wealth.
@odow I think the difference is in how these problems are modeled. In the InfiniteOpt formulation, we treat the uncertainty y as a fixed geometric Brownian process and then solve the problem with respect to sampled trajectories. In other words, we essentially have a generalization of a 2-stage program where we have a stochastic process (giving random time trajectory realizations) instead of a traditional distribution. This means that we treat the uncertainty propagation jointly, which implicitly assumes that we know the stochastic process uncertainty a priori and that the decisions we make will not change it. Hence, in this case y is the known stochastic process uncertainty and B & c are the decision functions we are shaping (and denote stochastic processes in and of themselves). In the SDDP formulation, you are using a multi-stage stochastic program which naturally models the uncertainty conditionally (i.e., following a discrete scenario tree). This of course operates under different assumptions, leading to the different behavior of c. For reference, here is the comparable output of InfiniteOpt: These differences highlight how domain-correlated uncertainty is modeled quite differently across various fields. From my research, it seems that a variety of fields (e.g., space-time statisticians, economics, etc.) that use stochastic ODEs and/or random field theory tend to treat the uncertainty jointly in like manner to the above formulation, whereas the stochastic programming community tends toward multi-stage formulations that are based on scenario trees. The question here becomes: how do we "best" model the uncertainty in these systems? Handling it jointly doesn't account for conditional decision making, which may be desired; on the other hand, to my knowledge multi-stage models don't generalize to continuous time and it is not obvious how they could be applied to other domains (e.g., spatial uncertainty).
These are the types of questions that are coming up in developing the unifying abstraction for infinite-dimensional optimization problems that is behind InfiniteOpt.
Before I forget, it would be nice to try an example with a bang-bang solution at some point: http://www.sfu.ca/~wainwrig/Econ331/Chapter20-bangbang-example.pdf
Reminder to self: https://academic.oup.com/ectj/article-abstract/23/3/S1/5802896
@azev77 If it is alright with you, I would like to close this question here and recommend that any further discussion be done in our new discussion forum instead: https://github.com/pulsipher/InfiniteOpt.jl/discussions.
Continued in #194. |
Just a comment in passing: While I haven't looked at the details of the modelling strategy, it seems to me that consumption is too smooth here. In the typical setup, consumption satisfies a form of Milton Friedman's "permanent income theory". Now the devil is in the details and how the stochastic process is set up matters a lot obviously, and how expectations are formed matters too; still, one would expect consumption to fluctuate according to the "permanent" shocks, while absorbing the "transitory" shocks. In the case of a finite horizon, the marginal propensity would typically be expected to rise non-linearly near the end of the horizon (the finite-life consumer gulps all the remaining wealth before blissfully expiring). So perhaps the assumptions made here are a little different or the parameter values not typical... Here's a clear overview of the model assumptions with some Julia code, for comparison: https://julia.quantecon.org/dynamic_programming/perm_income.html |
@ptoche The smoothness of the above answer can likely be attributed to the anticipative nature of the simple discretization approach used to solve it. Admittedly, this simplifying assumption is not satisfactory for a number of problems such as this one. A nonanticipative approach should probably be used instead. This would entail correctly applying stochastic calculus (e.g., Itô calculus) to integrate the SDEs (the above simplifying assumption negated this consideration). Currently, InfiniteOpt.jl does not implement this, but we have plans to add modeling capabilities for general random fields (e.g., stochastic processes) along with solution capabilities that can employ nonanticipative stochastic calculus approaches.
Hi & thank you for this package!
I'd like to use it to solve a very simple (routine) problem in economics.
This is a consumption-saving problem.
The household lives during time: t ∈ [0, 10]
The household's wealth @ time t is: B(t)
The household's choice is consumption: c(t)
The household is endowed w/ $100: B(0) = 100
The household cannot die in debt: B(10) = 0
I have the closed form solution for comparison.
Is it possible to solve this problem w/ your package?