In the knapsack problem, we are given a knapsack of capacity B ∈ Z+ and a set S = {a_1, . . . , a_n} of objects with corresponding sizes s(a_i) ∈ Z+ and profits p(a_i) ∈ Z+. The goal is to find a subset of objects whose total size is at most B and whose total profit is as large as possible. This problem is also sometimes called the 0/1 knapsack problem, because each object must either be placed in the knapsack completely or not at all. There are other variations as well, notably the multiple knapsack problem, in which there is more than one knapsack to fill.

The obvious greedy algorithm sorts the objects in decreasing order of their ratio of profit to size, or profit density, and then picks objects in that order until no further object fits into the knapsack. The problem is that this algorithm can be made to perform arbitrarily badly.

2 Approximation Schemes

Let Π be an NP-hard optimization problem with objective function f_Π. An algorithm A is an approximation scheme for Π if, on input (I, ε), where I is an instance of Π and ε > 0 is an error parameter, it outputs a solution s such that:

• f_Π(I, s) ≤ (1 + ε) · OPT if Π is a minimization problem.
• f_Π(I, s) ≥ (1 − ε) · OPT if Π is a maximization problem.

The approximation scheme A is said to be a polynomial time approximation scheme, or PTAS, if for each fixed ε > 0 its running time is bounded by a polynomial in the size of the instance I. Note, however, that the running time may still be exponential in 1/ε, in which case getting close to the optimal solution becomes prohibitively expensive.
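To make the failure of the greedy density heuristic concrete, the sketch below (function and variable names are illustrative, not from the source) packs objects in decreasing density order and runs it on the classic bad instance: one tiny object of density 2 followed by one large object of density 1. Greedy takes the tiny object first, which blocks the large one, so it earns profit 2 while the optimum is B.

```python
def greedy_knapsack(B, items):
    """Greedy density heuristic for 0/1 knapsack.

    items: list of (size, profit) pairs with positive integer entries.
    Returns the total profit of the objects the heuristic packs.
    """
    # Sort by profit density p(a_i) / s(a_i), highest first.
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    used, profit = 0, 0
    for size, p in order:
        if used + size <= B:  # take the object only if it still fits
            used += size
            profit += p
    return profit

# Bad instance: a tiny object of density 2 blocks a large object of density 1.
B = 1000
items = [(1, 2), (B, B)]
print(greedy_knapsack(B, items))  # prints 2; OPT takes the large object for profit B
```

As B grows, the ratio between the greedy profit (2) and OPT (B) becomes arbitrarily large, which is exactly the sense in which the heuristic performs arbitrarily badly.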
Lai, K., & Goemans, M. X. (2006). The knapsack problem and fully polynomial time approximation schemes (FPTAS). Retrieved November 3, 2012.