non-discrete knapsack

Discussion in 'Scientific Statistics Math' started by analyst41, Oct 23, 2010.

  1. analyst41

    analyst41 Guest

    The knapsack without integer constraints can be solved very easily:

    max sum over j v(j).x(j)

    such that

    sum over j w(j).x(j) <= B

    0 <= x(j) <= b(j)

    (Sort in order of value/weight and allocate to the upper bound or
    until the weight-capacity is reached).

    If we now add a second constraint

    sum over j x(j) = C

    then it is still an LP and can be solved.

    But is there a simpler way to get the solution without running the
    problem through an LP solver?

    analyst41, Oct 23, 2010
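(Editor's note: the greedy rule from the post above, as a minimal runnable sketch. The function name and the example numbers are mine, not the OP's.)

```python
def greedy_knapsack(v, w, b, B):
    """Continuous knapsack with upper bounds: take items in decreasing
    value/weight order, each up to its bound or the remaining capacity."""
    x = [0.0] * len(v)
    order = sorted(range(len(v)), key=lambda j: v[j] / w[j], reverse=True)
    remaining = B
    for j in order:
        x[j] = float(min(b[j], remaining / w[j]))  # bound b(j) or leftover weight
        remaining -= x[j] * w[j]
        if remaining <= 0:
            break
    return x

# Example (my numbers): two items, weight budget 5.
print(greedy_knapsack([3, 2], [1, 1], [2, 10], 5))  # [2.0, 3.0]
```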

  2. Paul

    Paul Guest

    Can you solve it without an LP solver? Yes. Is it simpler? That
    depends on
    what your criterion for "simpler" is.

    I think you can solve the dual to the two-constraint version by
    nesting a
    greedy heuristic inside a bisection search. Once you have the dual
    solution, you can directly construct the primal solution.

    Paul, Oct 24, 2010

  3. Ray Vickson

    Ray Vickson Guest

    As an alternative to Paul's suggestion, you could put the other
    constraint up into the objective via a Lagrange multiplier u, then
    have an ordinary knapsack problem with parameter u in its solution.
    You can do a one-dimensional search over u to find the appropriate u.

    F(u) = max_{x} sum_{j} v(j) * x(j) + u*[ C - sum_{j} x(j) ]
    sum_{j} w(j) * x(j) <= B,
    0 <= x(j) <= b(j) for j = 1,...,n.

    The overall solution is obtained from min_{u} F(u). Alternatively,
    search over u until sum x(j;u) = C.

    R.G. Vickson
    Ray Vickson, Oct 24, 2010
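(Editor's sketch of this relaxation on the 2-variable example that comes up later in the thread: v = (2, 3), w = (4, 5), B = 23, C = 5. The helper names and the grid scan are mine; a bisection search would do the same job faster.)

```python
def knapsack_greedy(v, w, b, B):
    """Continuous knapsack: fill items in decreasing value/weight order,
    skipping items whose (adjusted) value is not positive."""
    x = [0.0] * len(v)
    order = sorted((j for j in range(len(v)) if v[j] > 0),
                   key=lambda j: v[j] / w[j], reverse=True)
    remaining = B
    for j in order:
        x[j] = min(b[j], remaining / w[j])
        remaining -= x[j] * w[j]
    return x

def F(u, v=(2, 3), w=(4, 5), b=(5, 5), B=23, C=5):
    """Ray's dual function: relax sum_j x(j) = C with multiplier u."""
    x = knapsack_greedy([vj - u for vj in v], w, b, B)
    return u * C + sum((vj - u) * xj for vj, xj in zip(v, x))

# One-dimensional search over u (a coarse grid here, not a bisection).
us = [-4 + k / 100 for k in range(801)]
best_u = min(us, key=F)
print(best_u, F(best_u))  # minimum at u = -2.0 with F(u) = 13.0
```

The minimum value 13 does match the true optimum, and u = -2 matches the dual price of the equality constraint reported later in the thread; the catch, as the discussion below shows, is that the greedy maximizer never actually satisfies sum x(j) = 5.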
  4. Paul

    Paul Guest

    In fact, I think what I have in mind is equivalent to this (although
    I was going to solve the dual rather than the primal). The bisection
    search I had in mind would be over the domain of the Lagrange
    multiplier for the primal constraint. The dual solution step (within
    each iteration of the bisection search) is essentially the mirror of
    the greedy heuristic for the primal. Since the OP probably already
    has the greedy heuristic for the primal coded, Ray's approach is more
    direct than mine.

    Paul, Oct 24, 2010
  5. analyst41

    analyst41 Guest

    Thanks both.

    I see a problem with this approach.

    If you look at

    Max 2x + 3y

    4x + 5y <= 23
    x + y = 5

    The unique optimum occurs at x = 2, y = 3.

    The Lagrangean formulation cannot "see" this point:

    Max 2x + 3y + lambda(5 - x - y)

    4x + 5y <= 23
    x, y >= 0

    For any value of lambda, this is an LP whose feasible region has only
    three extreme points:

    (0, 0), (23/4, 0), (0, 23/5)

    So all the optima will occur at one of these points, and no value of
    lambda can guide you to (2, 3).

    Am I missing something here?
    analyst41, Oct 26, 2010
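(Editor's check: brute-forcing the example confirms the OP's premise that (2, 3) is the optimum of the equality-constrained problem. Substituting y = 5 - x reduces it to a one-dimensional scan.)

```python
# max 2x + 3y  s.t.  4x + 5y <= 23,  x + y = 5,  x, y >= 0.
best_x, best_val = None, float("-inf")
for k in range(5001):
    x = k / 1000.0                    # scan x over [0, 5]
    y = 5 - x
    if 4 * x + 5 * y <= 23 + 1e-12:   # y >= 0 holds since x <= 5
        val = 2 * x + 3 * y
        if val > best_val:
            best_x, best_val = x, val
print(best_x, 5 - best_x, best_val)   # 2.0 3.0 13.0
```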
  6. Paul

    Paul Guest

    (2, 3) is not optimal in the original LP; (0, 23/5) is. (2, 3) is
    optimal if you restrict x and y to integer values; but you indicated
    you were solving a continuous knapsack (with an extra constraint).

    Paul, Oct 26, 2010
  7. analyst41

    analyst41 Guest

    I am only solving the continuous case.

    (2, 3) optimizes

    Max 2x + 3y
    4x + 5y <= 23
    x + y = 5

    (2, 3) will never be found by the Lagrangean relaxation for any value
    of lambda:

    Max 2x + 3y + lambda(5 - x - y)

    4x + 5y <= 23
    x, y >= 0

    since, for each value of lambda, the objective is linear and will be
    optimized at one of

    (0, 0), (23/4, 0), (0, 23/5)
    analyst41, Oct 27, 2010
  8. edadk

    edadk Guest

    Writing a primal simplex solver for a problem with only 2 constraints
    is very simple - maybe one page of code in C. That is what I would do
    if I had to solve many such problems. I bet that it is going to be
    very efficient.


    edadk, Oct 27, 2010
  9. Paul

    Paul Guest

    (2, 3) is feasible and has an objective value of 13. (0, 23/5) is
    feasible and has an objective value of 13.8. You are maximizing. How
    is an objective value of 13 optimal?

    Paul, Oct 27, 2010
  10. Ray Vickson

    Ray Vickson Guest

    The OP is correct: here is a LINGO11 file for his problem (of course,
    manual solution via simplex would be straightforward):

    [1] MAX= 2 * X + 3 * Y ;
    [INEQ] 4 * X + 5 * Y <= 23 ;
    [EQ] X + Y = 5 ;

    Global optimal solution found.
    Objective value: 13.00000
    Infeasibilities: 0.000000
    Total solver iterations: 2

    Variable Value Reduced Cost
    X 2.000000 0.000000
    Y 3.000000 0.000000

    Row Slack or Surplus Dual Price
    1 13.00000 1.000000
    INEQ 0.000000 1.000000
    EQ 0.000000 -2.000000

    The point (x,y) = (2,3) is, indeed, the unique optimal solution. (The
    point (0,23/5) does not satisfy the equality constraint.) So the OP's
    question is a valid one, but the example also seems to violate
    _standard methods_ in mathematical programming! Right now I can't see
    where the problem lies.

    R.G. Vickson
    Ray Vickson, Oct 27, 2010
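(Editor's cross-check of the LINGO dual prices: a dual price is the rate of change of the optimal value as the constraint's right-hand side increases, so a finite-difference estimate should reproduce 1.0 for INEQ and -2.0 for EQ. The scan below is mine.)

```python
def optimum(B, C):
    """max 2x + 3y s.t. 4x + 5y <= B, x + y = C, x, y >= 0 (1-D scan)."""
    best = float("-inf")
    for k in range(int(round(C * 1000)) + 1):
        x = k / 1000.0
        y = C - x
        if 4 * x + 5 * y <= B + 1e-12:
            best = max(best, 2 * x + 3 * y)
    return best

d = 0.1
base = optimum(23, 5)
print(base)                             # 13.0
print((optimum(23 + d, 5) - base) / d)  # ~ 1.0  (INEQ dual price)
print((optimum(23, 5 + d) - base) / d)  # ~ -2.0 (EQ dual price)
```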
  11. analyst41

    analyst41 Guest

    Yes; I'll probably do that. If you add a slack variable to the
    inequality, then there are n(n+1)/2 bases consisting of two variables
    each. For each case, all the non-basics will be at either their upper
    bound or their lower bound, which can be determined by inspection.
    The feasible basis with the best objective can then be chosen -
    except for degeneracy!

    For a given basis, non-basics with zero reduced cost can take any
    value - which means that once the non-basics with non-zero reduced
    cost are set to their upper or lower bounds, you are left with a
    feasibility problem, which it turns out is easy.

    In fact the whole problem is easy if the "count constraint" is <=
    instead of =.

    Max sum v(j).x(j)


    sum w(j)x(j) <= B
    sum x(j) <= C
    0 <= x(j) <= b(j)

    variables are sorted in decreasing v(j)/w(j) and allocated up to the
    min of b(j), the remaining B, and the remaining C.
    analyst41, Oct 28, 2010
  12. Paul

    Paul Guest

    My bad. Apologies to the OP. I read the second constraint as an
    inequality (<=). Remind me to use a bigger font in my browser. :-(

    You're right, though, that this seems to contradict well known
    results. I'll have to take a closer look too.

    Paul, Oct 28, 2010
  13. analyst41

    analyst41 Guest

    Could it be because the inequality is two-dimensional, while the
    equality constraint has reduced the dimension of the problem?
    analyst41, Oct 28, 2010
  14. Paul

    Paul Guest

    At least part of the problem is that the relaxation of the second
    constraint is incorrect. Given that it is an equality, you need to
    relax it by penalizing the absolute value of its deviation in the
    objective (or the squared difference). You probably don't want a
    quadratic objective if you are looking for a simple algorithm, and the
    absolute value function cannot be used directly. The usual LP kludge
    to get rid of the absolute value leads you to something like this:

    Max 2x + 3y - lambda(s + t)

    4x + 5y <= 23
    x + y + s - t = 5
    x,y,s,t >= 0

    But now you're back with two constraints, so we haven't really
    accomplished anything. (For the record, though, if lambda > 2 the
    optimal solution to this problem is indeed x = 2, y = 3; so Ray and I
    can sleep securely tonight.)

    I'll have to rethink the approach I originally had in mind to see if
    it works any better than the penalty function.

    Paul, Oct 28, 2010
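(Editor's sketch: at any optimum of this reformulation at least one of s, t is zero, so eliminating them gives the penalized objective 2x + 3y - lambda*|5 - x - y|. A grid scan, with example numbers and names that are mine, confirms the claim that for lambda > 2 the maximizer is (2, 3).)

```python
lam = 3.0   # any lambda > 2 works here
best_pt, best_val = None, float("-inf")
for k in range(576):          # x over [0, 5.75]
    for m in range(461):      # y over [0, 4.60]
        x, y = k / 100.0, m / 100.0
        if 4 * x + 5 * y <= 23 + 1e-9:
            # penalize the deviation from the relaxed equality x + y = 5
            val = 2 * x + 3 * y - lam * abs(5 - x - y)
            if val > best_val:
                best_pt, best_val = (x, y), val
print(best_pt, best_val)  # (2.0, 3.0) 13.0
```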
  15. analyst41

    analyst41 Guest

    What if we made the equality the main constraint and put the
    inequality in the Lagrangean?
    analyst41, Oct 28, 2010
  16. Paul

    Paul Guest

    No joy. Your only constraint then (absent upper bounds on x and y) is
    x + y = 5 (plus the sign restrictions), so you'll get either (5, 0) or
    (0, 5) as the solution, depending on the multiplier you use in the
    relaxation. It turns out the ideal penalty multiplier is 1 (the dual
    value of the capacity constraint in the original LP), in which case
    (5, 0) and (0, 5) are both optimal in the relaxed problem -- as is (2,
    3), thus satisfying the Ghost of Lagrange, but it's not an extreme
    point in the relaxed problem, so you have no obvious way to find it.

    Paul, Oct 28, 2010
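(Editor's check of the "ideal multiplier = 1" observation: with u = 1 on the capacity constraint, the relaxed objective collapses to 23 - 2(x + y), which is constant on the line x + y = 5, so every feasible point ties at 13.)

```python
u = 1.0  # the ideal multiplier identified above
for x in (5.0, 0.0, 2.0, 1.25):   # endpoints, the "lost" optimum, an interior point
    y = 5.0 - x
    # relaxed objective: original objective plus u times the capacity slack
    print((x, y), 2 * x + 3 * y + u * (23 - 4 * x - 5 * y))  # always 13.0
```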
  17. Paul

    Paul Guest

    Is this still of interest? Assuming there are always two constraints,
    one knapsack limit and one count limit (sum of x's = constant), I
    think I've got a method that involves one initial sort (so time is
    O(n^2) for a bubble sort, O(n log n) for a tree sort) followed by a
    number of O(n) steps. Worst case, the number of steps should be
    O(n^2) (I think), but probably fewer in most instances. Each step
    would involve fairly trivial computations (such as solving a 2x2
    linear system). I don't have a formal proof of correctness done,
    though.

    Paul, Nov 3, 2010
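(Editor's illustration of the "2x2 linear system" step: a hypothetical Cramer's-rule helper, applied here to the two tight constraints of the thread's running example.)

```python
def solve_2x2(a11, a12, a21, a22, r1, r2):
    """Solve [[a11, a12], [a21, a22]] (x, y)^T = (r1, r2) by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular system (degenerate basis)")
    return ((r1 * a22 - r2 * a12) / det, (a11 * r2 - a21 * r1) / det)

# Tight constraints of the example: 4x + 5y = 23 and x + y = 5.
print(solve_2x2(4, 5, 1, 1, 23, 5))  # (2.0, 3.0)
```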
  18. analyst41

    analyst41 Guest

    Yes, please post your solution.

    Right now I am thinking of following the suggestion of implementing
    primal simplex with bounded variables with only two rows.
    analyst41, Nov 5, 2010
  19. Paul

    Paul Guest

    Sorry for the delay; the INFORMS conference has kept me hopping. My
    algorithm is a bit long to explain here, so I posted it on my blog. I
    have proof-of-concept code (in Java), although I didn't bother to
    code the bisection search for the first phase (I just brute-force
    it). If you want to see the code, you can reply directly to me (or to
    the list, or drop a comment on my blog, your choice) and send me your
    e-mail address.

    Paul, Nov 11, 2010
  20. analyst41

    analyst41 Guest

    Thanks. I'll take a look at it.
    analyst41, Nov 13, 2010
