[Help Wanted] NL Constraints #94
Hello, I'm just checking to see whether using @NLconstraints/@NLoptimize is possible in InfiniteOpt? If so, is there an example in any of the documentation? I didn't find any.

The following thread, #16, suggests that it was in the cards at some point, but I'm unsure whether it was ever fully developed.

The code I'm trying to write includes the following constraints:

which causes an error, presumably because of the use of min/max. The error is:

```
MethodError: no method matching isless(::GenericQuadExpr{Float64,GeneralVariableRef}, ::Float64)
Closest candidates are:
  isless(!Matched::Float64, ::Float64) at float.jl:465
  isless(!Matched::Missing, ::Any) at missing.jl:87
  isless(!Matched::AbstractFloat, ::AbstractFloat) at operators.jl:156
  ...
```

The second constraint is:

This gives the error: MethodError: no method matching register(::InfiniteModel, ::Symbol, ::Int64, ::typeof(prob)). So I assume there's no ability for users to define functions in constraints.

The other way I tried to define these constraints:

This gives the error: MethodError: no method matching _init_NLP(::InfiniteModel)

Thank you,
Hi there! InfiniteOpt.jl does not currently support general nonlinear expressions (e.g., NLconstraints) like JuMP. The details of what expressions are accepted are given on https://pulsipher.github.io/InfiniteOpt.jl/dev/guide/expression/. This limitation is due to JuMP's nonlinear interface being nonextendable for packages like this one (see jump-dev/JuMP.jl#2402). JuMP NLP won't be extendable until it is fundamentally redone via jump-dev/MathOptInterface.jl#846 (maybe in 2-3 years). However, in accordance with #16, I do plan on implementing our own NLP interface from the ground up (aiming at sometime this year) since JuMP is a ways out, but this will require a significant amount of work. Regarding your above code:
If we want the constraint Prob(f(ξ) <= 0) <= δ (a chance constraint where f denotes the RHS function), we can equivalently represent it as Expectation[indicator_(f(ξ) <= 0)(ξ)] <= δ. This can in turn be represented by big-M constraints (introducing an infinite variable y(ξ) in {0,1} for the indicator function and a sufficiently large scalar constant M). For example, if f(ξ) := w(ξ) - c * x(ξ), then in InfiniteOpt.jl we can do the following:

```julia
using JuMP, InfiniteOpt, Distributions

c = 42    # whatever it is
δ = 0.95  # whatever it is
M = 1000  # a sufficient upper bound to relax the constraints

m = InfiniteModel()
@infinite_parameter(m, ξ in Normal(0, 1), num_supports = 1000)
@infinite_variable(m, w(ξ))
@infinite_variable(m, x(ξ))
@infinite_variable(m, y(ξ), Bin) # for the indicator function
@constraint(m, big_m_constr, w - c * x <= y * M)
@constraint(m, probabilistic_constr, expect(1 - y, ξ) <= δ)
```
How would you do the first constraint with the min/max? I tried defining boundary conditions, but that kept erroring (I think it's because I can only define boundary conditions with respect to the infinite parameter, not with respect to other variables).

For the probability aspect, when I incorporate what you suggested, I get the following error: MathOptInterface.UnsupportedConstraint{MathOptInterface.SingleVariable,MathOptInterface.ZeroOne}. Any thoughts? Thank you very much!
With respect to the use of bounded constraints (i.e., @BDconstraint), these can only restrict the subdomain of the infinite parameters; they cannot be conditioned on other variables. For example:

```julia
using JuMP, InfiniteOpt

m = InfiniteModel()
@infinite_parameter(m, t in [0, 10])     # overall time domain (infinite domain) is the interval [0, 10]
@infinite_variable(m, x(t))              # define some time variable
@constraint(m, x <= 42)                  # enforced over the entirety of the infinite domain (i.e., t in [0, 10])
@BDconstraint(m, t in [0, 10], x <= 42)  # equivalent to the above @constraint since [0, 10] is the full domain
@BDconstraint(m, t == 0, x == 5)         # only enforced over the subdomain t in [0, 0] (a subset of [0, 10])
@BDconstraint(m, t in [2, 5], x^2 <= 9)  # only enforced over the subdomain t in [2, 5]
```

With respect to your min-max constraint, my intuition is that it can probably be reformulated into a more convenient form when the entire optimization problem is considered. I am not exactly sure what overall formulation you are tackling, but I am guessing there might be a way to drop the min operator and then only have to deal with the max operator, which is straightforward to reformulate if the objective is minimizing. I don't quite follow the reformulation you are attempting above, but the error is due to the nature of what bounded constraints are, in accordance with my explanation above. If you can get it in a form with only the max operator, then we can reformulate it as follows:

```julia
using JuMP, InfiniteOpt, Distributions

m = InfiniteModel()
@infinite_parameter(m, ξ in Normal(0, 1), num_supports = 1000)
@infinite_variable(m, x(ξ))
# Enforce the constraint max(42, x) <= 80
@infinite_variable(m, z(ξ)) # auxiliary variable to represent max(42, x)
@constraint(m, z >= 42) # helps characterize the auxiliary variable
@constraint(m, z >= x)  # helps characterize the auxiliary variable
@constraint(m, z <= 80) # defines the constraint we wanted with the max operator
```

Finally, the MOI error is due to your choice of optimizer (solver). This is because the big-M constraint reformulation uses binary variables, so you'll need to use an appropriate mixed-integer solver. Depending on the rest of the model, you could probably use Gurobi, CPLEX, or Cbc.
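For instance (a minimal sketch of attaching one such solver, assuming the Cbc package is installed):

```julia
using InfiniteOpt, Cbc

# Attach a mixed-integer capable optimizer so binary variables (e.g., y(ξ)) are supported
m = InfiniteModel(Cbc.Optimizer)
```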
I see what you are saying. I guess my problem is a little more involved than the example and for that reason I'm unsure how to connect the two. Mainly because my constraint is w == max(42, x), where w is a variable in my model rather than a constant. I don't see how you can define that using the auxiliary variable method.
That is not a problem at all. To formulate the constraint w(ξ) = max(42, x(ξ)) we do the following:

```julia
using JuMP, InfiniteOpt, Distributions

m = InfiniteModel()
@infinite_parameter(m, ξ in Normal(0, 1), num_supports = 1000)
@infinite_variable(m, x(ξ))
@infinite_variable(m, w(ξ))
# Enforce the equivalent set of constraints
@infinite_variable(m, z(ξ)) # auxiliary variable to represent max(42, x)
@constraint(m, z >= 42) # helps characterize the auxiliary variable
@constraint(m, z >= x)  # helps characterize the auxiliary variable
@constraint(m, w == z)  # defines the constraint we wanted with the max operator
```

Since w = z, this can be further simplified to:

```julia
using JuMP, InfiniteOpt, Distributions

m = InfiniteModel()
@infinite_parameter(m, ξ in Normal(0, 1), num_supports = 1000)
@infinite_variable(m, x(ξ))
@infinite_variable(m, w(ξ))
# Enforce the equivalent set of constraints
@constraint(m, w >= 42)
@constraint(m, w >= x)
```

Note that I am making some assumptions about the objective function (namely that it is minimizing). These types of reformulations stem from considering min-max problems in linear programming. For more information refer to https://math.stackexchange.com/questions/2446606/linear-programming-set-a-variable-the-max-between-two-another-variables and http://apmonitor.com/me575/index.php/Main/MiniMax.
There are many values of z that satisfy the constraints z >= 42 and z >= x. One of them is z = max(42, x). However, if x is 50, the value z = 60 would satisfy those constraints. To me this doesn't ensure that z = max(42, x), just that z is greater than both 42 and x. Am I missing something?
This reformulation depends on the objective function. If the objective tends to push z to be as small as possible, then this reformulation is exact. Otherwise, it can be made exact by introducing a binary variable. Again, this is all better explained in detail here: https://math.stackexchange.com/questions/2446606/linear-programming-set-a-variable-the-max-between-two-another-variables (you can also refer to online linear programming course materials like https://laurentlessard.com/teaching/524-intro-to-optimization/#:~:text=This%20course%20is%20an%20introduction,in%20industry%20and%20research%20applications.). Without knowing your full formulation, I cannot determine which reformulation will suit your situation.
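For reference, here is a hedged sketch of that binary-variable version (not taken from this thread; M is assumed to be a valid bound on |x - 42|), which makes w(ξ) = max(42, x(ξ)) exact regardless of the objective direction:

```julia
using JuMP, InfiniteOpt, Distributions

M = 1000 # assumed to be a sufficiently large bound on |x - 42|
m = InfiniteModel()
@infinite_parameter(m, ξ in Normal(0, 1), num_supports = 1000)
@infinite_variable(m, x(ξ))
@infinite_variable(m, w(ξ))
@infinite_variable(m, y(ξ), Bin) # y = 1 selects x as the max, y = 0 selects 42
@constraint(m, w >= 42)
@constraint(m, w >= x)
@constraint(m, w <= 42 + M * y)       # if y = 0, w is pinned to 42
@constraint(m, w <= x + M * (1 - y))  # if y = 1, w is pinned to x
```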
Ah, this makes so much more sense. Thank you.

@smitch151 If there are no further questions, would you mind me closing this issue?

Seeing that the immediate issue is resolved, I will close this issue. Further discussion about the development of the NLP interface can be made in #16.
I'm trying to revisit this in a simple case. Let's say I want to have the constraint p == min(A*x, p_bar). From my understanding of the big-M approach you described, I need four constraints:

where ind = 1 when p_bar < A*x. When I try to code this up I get the following error: "MethodError: no method matching isless(::GenericAffExpr{Float64,GeneralVariableRef}, ::Float64)". How can I generate this indicator to do this comparison?

Once again, thank you.
The error is occurring since the comparison syntax you are using (e.g., evaluating whether p_bar < A*x) cannot be applied to symbolic expressions like GenericAffExpr; such conditions have to be encoded with constraints rather than Julia comparisons. Also, your reformulation doesn't appear to be correct. Would you please algebraically define the optimization problem you are trying to solve before any reformulation is applied to it, and also label each symbol as a constant, infinite parameter, or infinite variable (perhaps this could be provided via a short attachment)? Once I better understand the full problem you are trying to solve, I should be better able to help you. This sounds like an interesting application class.
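For illustration only (a scalar sketch with placeholder constants a, p_bar, and M; this is an assumption-laden example, not the reformulation for your full problem), a condition like "y = 1 whenever a*x exceeds p_bar" is expressed with big-M constraints on a binary variable rather than a Julia comparison:

```julia
using JuMP, InfiniteOpt, Distributions

a = 3.0; p_bar = 5.0; M = 1000  # placeholder constants
m = InfiniteModel()
@infinite_parameter(m, x in Beta(1.0, 50.0), num_supports = 1000)
@infinite_variable(m, y(x), Bin)
@constraint(m, a * x - p_bar <= M * y)        # forces y = 1 whenever a*x > p_bar
@constraint(m, p_bar - a * x <= M * (1 - y))  # forces y = 0 whenever a*x < p_bar
```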
Okay, I'll try to explain more fully. We want to maximize the expected value of p.x subject to the constraint that p == min(Ax, p_bar), where x is drawn from a beta distribution. The parameters of the beta distribution are fixed in this case. The parameters A and p_bar are allowed to vary. The complication comes from the fact that we can't just use a minimum function in InfiniteOpt, so we need to use big M to get around it. The big-M specification of C == min(A, B) is:

I updated the code per your @constraint comment. I think I have the formulation of the big-M aspect right in the code. However, I still get the error "MethodError: no method matching isless(::GenericAffExpr{Float64,GeneralVariableRef}, ::Float64)" when I do the comparison in the @constraint.
If I create an arbitrary variable w with the same dimensions as x, the code works just fine. Also, I did need to switch the indicator direction; you were correct about that. Good catch. Is there any way to do this with x as the infinite parameter rather than just some vector?

Also, this issue occurs if x is an infinite variable, not just when x is an infinite parameter. For the problem I'm trying to solve x should be an infinite parameter, but I'm just wondering if that helps you understand it.
First, regarding the syntax: the same isless error arises whether x is an infinite parameter or an infinite variable, since either way the comparison is applied to a symbolic expression.

Let's return to the problem you want to solve. If I understand correctly, your desired formulation is to choose p(x) to maximize the expectation E_x[p(x)' x] subject to p(x) = min(Ax, p_bar), where x follows a Beta distribution. Notice that p(x) is then completely determined by the infinite parameters, so the problem really has no degrees of freedom.

Let's first ignore that last observation and model this as an optimization problem. This actually doesn't require the introduction of big-M constraints to handle the min because none of its inputs are decision variables; it is just a function of the infinite parameters x. This means that once we generate samples for x, the min function will evaluate to a constant (not containing any decision variables). Thus, we can employ an infinite parameter function via parameter_function:

```julia
using InfiniteOpt, JuMP, Clp, Distributions, Random, LinearAlgebra
using JuMP.Containers: @container # convenient for infinite parameter functions

Random.seed!(42) # seed to reproduce the same samples for the sake of example
N = 2
p_bar = [5.0, 5.0]
A = [3.0 10.0; 10.0 3.0]

# Initialize the model
m = InfiniteModel(Clp.Optimizer)

# Define the random parameters
@infinite_parameter(m, x[i=1:N] in Beta(1.0, 50.0), num_supports = 1000)

# Define the decision variables
@infinite_variable(m, p[i=1:N](x), start = 0)

# Define the infinite parameter function for each element of min(Ax, p_bar)
@container(my_func[i = 1:N], parameter_function(x_sample -> min((A * x_sample)[i], p_bar[i]), x))

# Define the objective
@objective(m, Max, expect(p' * x, x))

# Define the constraints
@constraint(m, p .== my_func)

# Optimize the model (i.e., discretize and solve the model)
optimize!(m)

# Get results
println("Objective value: ", objective_value(m))
println("Values of p(x): ", value.(p))
```

Notice in the above that p(x) is simply fixed to equal min(Ax, p_bar), so we can substitute the parameter function directly into the objective and eliminate p(x) altogether:

```julia
using InfiniteOpt, JuMP, Clp, Distributions, Random, LinearAlgebra
using JuMP.Containers: @container # convenient for infinite parameter functions

Random.seed!(42) # seed to reproduce the same samples for the sake of example
N = 2
p_bar = [5.0, 5.0]
A = [3.0 10.0; 10.0 3.0]

# Initialize the model
m = InfiniteModel(Clp.Optimizer)

# Define the random parameters
@infinite_parameter(m, x[i=1:N] in Beta(1.0, 50.0), num_supports = 1000)

# Define the infinite parameter function for each element of min(Ax, p_bar)
@container(my_func[i = 1:N], parameter_function(x_sample -> min((A * x_sample)[i], p_bar[i]), x))

# Define the objective
@objective(m, Max, expect(my_func' * x, x))

# Optimize the model (i.e., discretize and solve the model)
optimize!(m)

# Get results
println("Objective value: ", objective_value(m))
```

Now we readily see that our model contains no decision variables, which backs up our original observation that this need not be an optimization problem since it has no degrees of freedom. Thus, we can forgo the optimization and just compute the expectation directly using sample average approximation to obtain the same answer:

```julia
using Distributions, LinearAlgebra, Random

Random.seed!(42)
N = 2
S = 1000 # number of samples
p_bar = [5.0, 5.0]
A = [3.0 10.0; 10.0 3.0]

# Define the distribution and get the samples
dist = Beta(1.0, 50.0)
x = hcat([rand(dist, S) for i = 1:N]...)

# Compute the expectation via sample average approximation
my_func(x_sample) = [min((A * x_sample)[i], p_bar[i]) for i = 1:N]
objective = 1 / S * sum(my_func(x[s, :])' * x[s, :] for s = 1:S)

# Get results
println("Objective value: ", objective)
```

I hope that helps.
This is fantastic. Thank you so much. One, hopefully final, question: what's the syntax if I want to make A a variable in my model, rather than some external variable, but still want the same container function?
Glad to be of help. First, as a clarification, the @container macro is simply a convenient way to build an array of objects. For example:

```julia
using JuMP.Containers: @container

# Make a vector `a` where each element is `i + 2`
@container(a[i = 1:10], i + 2)

# The above is equivalent to
a = Vector{Int}(undef, 10) # preallocate the array
for i = 1:10
    a[i] = i + 2
end
```

Now let's consider your problem where the A matrix is considered to be a nonnegative matrix of finite decision variables (i.e., first-stage variables in this context).
Here, we can no longer use a parameter function because min(Ax, p_bar) now contains decision variables. Instead, we introduce auxiliary infinite variables z(x) to take the place of min(Ax, p_bar) and enforce z(x) <= Ax and z(x) <= p_bar. This reformulation is exact since the objective wants to make z(x) as large as possible, but this is limited by Ax and p_bar (whichever is smaller):

```julia
using InfiniteOpt, JuMP, Clp, Distributions, LinearAlgebra

# Define the constants
N = 2
p_bar = [5.0, 5.0]

# Initialize the model
m = InfiniteModel(Clp.Optimizer)

# Define the random parameters
@infinite_parameter(m, x[1:N] in Beta(1.0, 50.0), num_supports = 1000)

# Define A as a matrix of variables
@hold_variable(m, A[1:N, 1:N] >= 0)

# Define the reformulation infinite variables z(x)
@infinite_variable(m, z[1:N](x))

# Define the objective
@objective(m, Max, expect(z' * x, x))

# Define the constraints
@constraint(m, z .<= A * x)
@constraint(m, z .<= p_bar)

# Solve the model and get the results
optimize!(m)
opt_objective = objective_value(m)
opt_A = value.(A)
```

Hope that helps!
Thank you. This is all very helpful. I'm beginning to understand it. What if we are trying to minimize, so that "this reformulation is exact since the objective wants to make z(x) as large as possible, but this is limited by Ax and p_bar (whichever is smaller)" is no longer satisfied?

The optimizer runs, but it says the problem is unbounded, as expected. Essentially, is there a way to generate an exact minimum function using constraints involving decision variables?
The previous reformulation indeed is no longer exact for a minimization problem. For such a problem, we need to introduce binary infinite variables y(x) along with big-M constraints so that z(x) = min(Ax, p_bar) is enforced exactly. So in InfiniteOpt we have:

```julia
using InfiniteOpt, JuMP, Cbc, Distributions, LinearAlgebra

# Define the constants
N = 2
p_bar = [5.0, 5.0]
M = 1000 # replace with an appropriate value

# Initialize the model
m = InfiniteModel(Cbc.Optimizer) # Gurobi would be much faster

# Define the random parameters
@infinite_parameter(m, x[1:N] in Beta(1.0, 50.0), num_supports = 1000)

# Define A as a matrix of variables
@hold_variable(m, A[1:N, 1:N] >= 0)

# Define the reformulation infinite variables z(x) and y(x)
@infinite_variable(m, z[1:N](x))
@infinite_variable(m, y[1:N](x), Bin)

# Define the objective
@objective(m, Max, expect(z' * x, x))

# Define the constraints
@constraint(m, z .<= A * x)
@constraint(m, z .<= p_bar)
@constraint(m, z .>= A * x - M * y)
@constraint(m, z .>= p_bar - M * (1 .- y))

# Solve the model and get the results
optimize!(m)
opt_objective = objective_value(m)
opt_A = value.(A)
```

Note that we now have a MILP and that it will scale exponentially in complexity with the number of binary variables.

All of these reformulations pertain to optimization theory fundamentals, and it might be worth reviewing some optimization modeling fundamentals such as those covered in the slides from this optimization modeling course: https://laurentlessard.com/teaching/524-intro-to-optimization/#:~:text=This%20course%20is%20an%20introduction,in%20industry%20and%20research%20applications Thus, for future reference I would recommend first referring to those resources so that questions asked here can focus on concerns specific to InfiniteOpt rather than general optimization modeling. I hope that helps, and thank you for your interest so far in using InfiniteOpt! We are genuinely very grateful for you and all of our users that are using a tool we have put a lot of time into developing to make a difference.
@smitch151 Did the above reformulations work for the problem you're working on?

Yes that was perfect. Thank you!

Perfect, I will go ahead and close this issue then. Please reach out again in the future with any other questions you have. Happy modeling!
As a follow-up, full nonlinear support beyond what JuMP offers has since been added to InfiniteOpt.
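For instance, here is a rough sketch assuming a recent InfiniteOpt release (consult the current documentation for the exact syntax); general nonlinear expressions can now be written directly in the normal macros:

```julia
using InfiniteOpt

m = InfiniteModel()
@infinite_parameter(m, t in [0, 1])
@variable(m, y, Infinite(t))          # newer replacement for @infinite_variable
@constraint(m, sin(y) + y^3 <= 2)     # general nonlinear expressions are accepted
@objective(m, Min, integral(y^2, t))
```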