# Aspects of OR - Linear Programming (Part 3)

Author: Till Heller
In Part 1 we formulated linear programs and in Part 2 we solved them with the simplex method. In this final part of the LP mini-series, we turn to one of the most elegant concepts in optimization: duality. Every linear program has a companion problem — its dual — and the relationship between the two reveals deep structural insights. Duality is not just a theoretical curiosity; it has direct practical applications in sensitivity analysis, pricing, and algorithm design.
## The dual problem

Consider a primal LP in standard form:

$$\min_{x} \; c^\top x \quad \text{s.t.} \quad Ax \ge b, \quad x \ge 0.$$

The dual of this LP is:

$$\max_{y} \; b^\top y \quad \text{s.t.} \quad A^\top y \le c, \quad y \ge 0.$$

Here $y \in \mathbb{R}^m$ is the vector of dual variables — one for each of the $m$ primal constraints. The construction is systematic: every primal constraint gives rise to a dual variable, and every primal variable gives rise to a dual constraint. The primal objective coefficients become the dual right-hand side, and vice versa.
The following table summarizes the correspondence:
| Primal (min) | Dual (max) |
|---|---|
| Variable | Constraint |
| Constraint | Variable |
| Objective | Objective |
An important property: the dual of the dual is the primal. So duality is a symmetric relationship.
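To make the data correspondence concrete, here is a minimal sketch in plain numpy (no solver needed) that stores an LP as a triple $(A, b, c)$ and forms the dual by the recipe above. The `dualize` helper is an illustrative construction, not a standard API; applying it twice recovers the original data, mirroring the fact that the dual of the dual is the primal.

```python
import numpy as np

def dualize(A, b, c):
    """Data swap behind duality: the dual of
    min c^T x s.t. Ax >= b, x >= 0   is
    max b^T y s.t. A^T y <= c, y >= 0.
    The constraint matrix transposes and b, c swap roles;
    the min/max and >=/<= flips are implied by which side we are on."""
    return A.T, c, b

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

A_d, b_d, c_d = dualize(A, b, c)            # data of the dual
A_dd, b_dd, c_dd = dualize(A_d, b_d, c_d)   # data of the dual of the dual

assert np.array_equal(A_dd, A) and np.array_equal(b_dd, b) and np.array_equal(c_dd, c)
print('dual of the dual == primal')
```

We will reuse this small $(A, b, c)$ example in the next sections.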
## Weak duality

The first key result is weak duality: for any primal feasible $x$ and any dual feasible $y$,

$$b^\top y \le c^\top x.$$

This says that every dual feasible solution provides a lower bound on the primal optimal value, and every primal feasible solution provides an upper bound on the dual optimal value. The proof is a short chain of inequalities:

$$b^\top y \le (Ax)^\top y = x^\top (A^\top y) \le x^\top c = c^\top x,$$

where the first inequality uses primal feasibility ($Ax \ge b$, combined with $y \ge 0$) and the second uses dual feasibility ($A^\top y \le c$, combined with $x \ge 0$).

Weak duality has an immediate corollary: if we find a primal feasible $x^*$ and a dual feasible $y^*$ with $c^\top x^* = b^\top y^*$, then both must be optimal for their respective problems — no other feasible solution can do better.
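As a quick numerical sanity check of the bound, the sketch below picks one hand-chosen (and deliberately non-optimal) feasible point for each side of the small example introduced above and confirms the inequality.

```python
import numpy as np

# Same data as above: min c^T x s.t. Ax >= b, x >= 0
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([2.0, 2.0])   # primal feasible: Ax = [6, 8] >= b
y = np.array([1.0, 0.3])   # dual feasible: A^T y = [2.3, 1.9] <= c

assert np.all(A @ x >= b) and np.all(x >= 0)     # primal feasibility
assert np.all(A.T @ y <= c) and np.all(y >= 0)   # dual feasibility
print(f'b^T y = {b @ y:.2f} <= c^T x = {c @ x:.2f}')  # 5.80 <= 10.00
```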
## Strong duality

Strong duality goes further: if the primal has an optimal solution $x^*$, then the dual also has an optimal solution $y^*$, and their objective values are equal:

$$c^\top x^* = b^\top y^*.$$
This is one of the central theorems of linear programming. The proof typically follows from the simplex method: the final simplex tableau provides both a primal optimal solution and a dual optimal solution (the dual values can be read from the objective row entries corresponding to the slack variables).
Strong duality can fail only when the primal or dual is infeasible or unbounded:
| Primal | Dual |
|---|---|
| Optimal solution exists | Optimal solution exists (same value) |
| Unbounded | Infeasible |
| Infeasible | Unbounded or infeasible |
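For the first row of this table we can check strong duality numerically. The sketch below solves both sides of the small example with scipy's `linprog` (used here instead of Gurobi only so that it runs without a license); since `linprog` minimizes subject to $A_{ub}\, x \le b_{ub}$, the $\ge$ constraints are negated and the max objective is sign-flipped.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

# Primal: min c^T x s.t. Ax >= b, x >= 0  (negate to get A_ub x <= b_ub)
primal = linprog(c, A_ub=-A, b_ub=-b)  # bounds default to x >= 0

# Dual: max b^T y s.t. A^T y <= c, y >= 0  (minimize -b^T y instead)
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(f'primal optimum: {primal.fun:.4f}')   # c^T x* = 6.8
print(f'dual optimum:   {-dual.fun:.4f}')    # b^T y* = 6.8 (undo the sign flip)
assert np.isclose(primal.fun, -dual.fun)
```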
## Complementary slackness

Complementary slackness provides a precise characterization of when a pair $(x^*, y^*)$ is simultaneously optimal for the primal and dual. The conditions are:

$$y_i^* \left( a_i^\top x^* - b_i \right) = 0 \;\; \text{for all } i, \qquad x_j^* \left( c_j - (A^\top y^*)_j \right) = 0 \;\; \text{for all } j,$$

where $a_i^\top$ denotes the $i$-th row of $A$. In words:

- If a dual variable $y_i^* > 0$, then the corresponding primal constraint must be binding ($a_i^\top x^* = b_i$).
- If a primal variable $x_j^* > 0$, then the corresponding dual constraint must be binding ($(A^\top y^*)_j = c_j$).
Complementary slackness is extremely useful for verifying optimality and for understanding the structure of optimal solutions. It also connects directly to the economic interpretation of dual variables.
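For the small example above, the optimal pair works out to $x^* = (1.2, 1.6)$ and $y^* = (1.4, 0.2)$ (both achieve the value 6.8 found by the solver sketch in the previous section). The check below verifies the complementary slackness products; here both constraints are binding and both variables are positive, so every product vanishes. A constraint with positive slack would instead force its dual variable to zero.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])
x_star = np.array([1.2, 1.6])
y_star = np.array([1.4, 0.2])

primal_slack = A @ x_star - b     # a_i^T x* - b_i, one entry per constraint
dual_slack = c - A.T @ y_star     # c_j - (A^T y*)_j, one entry per variable

# Complementary slackness: each product must vanish
assert np.allclose(y_star * primal_slack, 0)
assert np.allclose(x_star * dual_slack, 0)
print('complementary slackness holds')
```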
## Shadow prices and economic interpretation

The dual variables have a natural economic interpretation: $y_i^*$ represents the shadow price (or marginal value) of the $i$-th resource. It answers the question: "By how much would the optimal objective improve if we had one additional unit of resource $i$?"

Formally, if we perturb the right-hand side from $b$ to $b + \Delta b$ (with a small enough perturbation that the current basis remains optimal), the change in the optimal objective value is:

$$\Delta z^* = (y^*)^\top \Delta b.$$
This is the essence of sensitivity analysis — understanding how the optimal solution changes when the problem data changes.
## Example: Production planning revisited

Recall the furniture workshop from Part 1, with $x_1$ tables and $x_2$ chairs:

$$\begin{aligned}
\max \quad & 70 x_1 + 50 x_2 && \text{(profit in €)} \\
\text{s.t.} \quad & 4 x_1 + 2 x_2 \le 240 && \text{(carpentry hours)} \\
& 2 x_1 + 3 x_2 \le 180 && \text{(finishing hours)} \\
& x_1, x_2 \ge 0.
\end{aligned}$$

The dual of this problem (after converting max to min) is:

$$\begin{aligned}
\min \quad & 240 y_1 + 180 y_2 \\
\text{s.t.} \quad & 4 y_1 + 2 y_2 \ge 70 \\
& 2 y_1 + 3 y_2 \ge 50 \\
& y_1, y_2 \ge 0.
\end{aligned}$$
In Part 2, we found the optimal simplex tableau:
| Basis | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|---|---|---|---|---|---|
| $x_1$ | 1 | 0 | 0.375 | -0.25 | 45 |
| $x_2$ | 0 | 1 | -0.25 | 0.5 | 30 |
| obj | 0 | 0 | 13.75 | 7.5 | 4650 |

The dual optimal values can be read from the objective row under the slack variables $s_1$ and $s_2$: $y_1^* = 13.75$ and $y_2^* = 7.5$. These are the shadow prices.
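This read-off is no accident: at optimality the dual solution satisfies $y^{*\top} = c_B^\top B^{-1}$, where $B$ is the optimal basis matrix and $c_B$ collects the objective coefficients of the basic variables. A quick numpy check for this example:

```python
import numpy as np

B = np.array([[4.0, 2.0],     # columns of A for the basic variables x1, x2
              [2.0, 3.0]])
c_B = np.array([70.0, 50.0])  # objective coefficients of x1, x2

y_star = c_B @ np.linalg.inv(B)  # y*^T = c_B^T B^{-1}
print(y_star)                    # [13.75  7.5 ]
```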
Interpretation:
- One extra hour of carpentry (increasing 240 to 241) would increase the maximum profit by approximately €13.75.
- One extra hour of finishing (increasing 180 to 181) would increase the maximum profit by approximately €7.50.
This tells the workshop manager that carpentry capacity is more valuable than finishing capacity. If they could invest in expanding one department, carpentry offers a better return.
We can verify strong duality: the dual objective is $240 \cdot 13.75 + 180 \cdot 7.5 = 3300 + 1350 = 4650$, which matches the primal optimal value.
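We can also confirm the shadow price by brute force: re-solve the LP with one extra carpentry hour and compare profits. The sketch below uses scipy's `linprog` (any LP solver would do) and reproduces the predicted increase of €13.75.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-70.0, -50.0])            # negate: linprog minimizes, we maximize
A_ub = np.array([[4.0, 2.0], [2.0, 3.0]])

for b1 in (240.0, 241.0):               # original and perturbed carpentry hours
    res = linprog(c, A_ub=A_ub, b_ub=np.array([b1, 180.0]))
    print(f'carpentry = {b1:.0f}h -> profit = {-res.fun:.2f}')
# carpentry = 240h -> profit = 4650.00
# carpentry = 241h -> profit = 4663.75
```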
## Sensitivity analysis in practice
Beyond shadow prices, sensitivity analysis addresses several practical questions:
### Range of optimality (objective coefficients)

How much can we change an objective coefficient before the current optimal basis changes? For a basic variable, this range can be computed from the simplex tableau. As long as all reduced costs remain non-negative (for minimization) or non-positive (for maximization), the current basis stays optimal (though the optimal objective value changes as the coefficient moves).
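For the furniture example this computation is short enough to do in numpy. With both structural variables basic, the objective-row entries under the slack columns equal the dual values $y^\top = c_B^\top B^{-1}$, and, as in the tableau above, this maximization tableau stays optimal while they remain non-negative. A sketch, perturbing one coefficient at a time:

```python
import numpy as np

B_inv = np.linalg.inv(np.array([[4.0, 2.0], [2.0, 3.0]]))
c_B = np.array([70.0, 50.0])   # profits of the basic variables x1 (tables), x2 (chairs)
y0 = c_B @ B_inv               # current dual values [13.75, 7.5]

for j, name in enumerate(['tables', 'chairs']):
    d = B_inv[j, :]            # how y changes per unit change of c_B[j]
    lo, hi = -np.inf, np.inf
    for k in range(2):         # keep every y_k(t) = y0[k] + t*d[k] >= 0
        if d[k] > 0:
            lo = max(lo, -y0[k] / d[k])
        elif d[k] < 0:
            hi = min(hi, -y0[k] / d[k])
    print(f'{name}: coefficient range [{c_B[j] + lo:.2f}, {c_B[j] + hi:.2f}]')
# tables: coefficient range [33.33, 100.00]
# chairs: coefficient range [35.00, 105.00]
```

These ranges should agree with the `SAObjLow`/`SAObjUp` values that Gurobi reports in the snippet further below.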
### Range of feasibility (right-hand side)

How much can we change a right-hand side value $b_i$ before the current basis becomes infeasible? The current BFS remains feasible as long as $B^{-1}(b + \Delta b) \ge 0$, where $B$ is the basis matrix. This gives an interval for each $b_i$.
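The same kind of one-parameter computation gives the RHS ranges. A numpy sketch for the furniture example, varying one $b_i$ at a time: feasibility $B^{-1}(b + t\, e_i) \ge 0$ is linear in $t$, so each row of the basis inverse contributes one bound.

```python
import numpy as np

B_inv = np.linalg.inv(np.array([[4.0, 2.0], [2.0, 3.0]]))
b = np.array([240.0, 180.0])   # carpentry and finishing hours
x_B = B_inv @ b                # current basic solution [45, 30]

for i, name in enumerate(['carpentry', 'finishing']):
    col = B_inv[:, i]          # how x_B changes per unit change of b[i]
    lo, hi = -np.inf, np.inf
    for k in range(2):         # keep every x_B[k] + t*col[k] >= 0
        if col[k] > 0:
            lo = max(lo, -x_B[k] / col[k])
        elif col[k] < 0:
            hi = min(hi, -x_B[k] / col[k])
    print(f'{name}: RHS range [{b[i] + lo:.0f}, {b[i] + hi:.0f}]')
# carpentry: RHS range [120, 360]
# finishing: RHS range [120, 360]
```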
### Gurobi makes this easy
Modern solvers provide sensitivity analysis output directly:
```python
import gurobipy as gp
from gurobipy import GRB

# The furniture workshop LP from above
model = gp.Model('production_planning')
x1 = model.addVar(name='tables', lb=0)
x2 = model.addVar(name='chairs', lb=0)
model.setObjective(70 * x1 + 50 * x2, GRB.MAXIMIZE)
carpentry = model.addConstr(4 * x1 + 2 * x2 <= 240, 'carpentry')
finishing = model.addConstr(2 * x1 + 3 * x2 <= 180, 'finishing')
model.optimize()

# Shadow prices (dual values)
print(f'Shadow price carpentry: €{carpentry.Pi:.2f}')
print(f'Shadow price finishing: €{finishing.Pi:.2f}')

# Right-hand side sensitivity ranges
print(f'\nCarpentry RHS range: [{carpentry.SARHSLow:.1f}, {carpentry.SARHSUp:.1f}]')
print(f'Finishing RHS range: [{finishing.SARHSLow:.1f}, {finishing.SARHSUp:.1f}]')

# Objective coefficient sensitivity ranges
print(f'\nTables obj range: [{x1.SAObjLow:.2f}, {x1.SAObjUp:.2f}]')
print(f'Chairs obj range: [{x2.SAObjLow:.2f}, {x2.SAObjUp:.2f}]')
```
The output tells us, for example, that the shadow price of €13.75 for carpentry is valid only as long as the carpentry capacity stays within the reported RHS range — beyond that, the basis changes and the shadow price may differ.
## Reduced costs

The simplex method also provides reduced costs for non-basic variables. The reduced cost of a non-basic variable $x_j$ tells us by how much the objective would change per unit increase of $x_j$. At optimality, all reduced costs are non-negative (for minimization), meaning no non-basic variable can improve the objective.

For basic variables, the reduced cost is always zero. For non-basic variables, the reduced cost indicates how much cheaper variable $x_j$ would need to become before it would be worthwhile to include it in the solution. In Gurobi, reduced costs are accessible via `x.RC`.
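As an illustration, suppose the workshop considered a third, purely hypothetical product: a stool with €15 profit that consumes one carpentry hour and one finishing hour (the stool data is invented for this example). At the shadow prices, those hours are worth $13.75 + 7.5 = 21.25$ euros, so the stool's reduced cost is $15 - 21.25 = -6.25$ and it should stay out of the solution. Extending the Gurobi model from above:

```python
# Extending the Gurobi model above with a hypothetical third product
x3 = model.addVar(name='stools', lb=0)
model.setObjective(70 * x1 + 50 * x2 + 15 * x3, GRB.MAXIMIZE)
model.chgCoeff(carpentry, x3, 1.0)  # one carpentry hour per stool
model.chgCoeff(finishing, x3, 1.0)  # one finishing hour per stool
model.optimize()

print(f'Stool reduced cost: {x3.RC:.2f}')  # expected: -6.25
print(f'Stools produced:    {x3.X:.0f}')   # expected: 0
```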
## Duality beyond LP
The primal-dual relationship is not limited to linear programming. It extends in various forms to:
- Integer programming: LP relaxation duality provides bounds used in branch-and-bound.
- Network optimization: dual variables correspond to node potentials (shortest paths, min-cost flow).
- Lagrangian relaxation: a generalization of LP duality for hard combinatorial problems.
- Game theory: LP duality is closely tied to minimax theorems and zero-sum games.
These connections make duality one of the most important ideas in all of optimization.
## Summary of the LP mini-series
Across the three parts, we covered:
- Part 1 — LP definition, standard form, geometric intuition, production planning and blending examples.
- Part 2 — The simplex method: slack variables, tableaux, pivoting, degeneracy, Bland's rule, complexity.
- Part 3 — Duality, strong duality, complementary slackness, shadow prices, sensitivity analysis.
Linear programming is the foundation for much of Operations Research. In upcoming mini-series, we will build on these ideas to tackle integer programming, where decision variables are required to take integer values — a requirement that makes the problems much harder to solve but also far more relevant in practice.