Econ Math

From UBC Wiki
This article is part of the MathHelp Tutoring Wiki
This article is part of the EconHelp Tutoring Wiki



Lagrange multiplier as used in Economics

The Lagrange multiplier method solves constrained optimization problems. In economics, a constrained optimization problem usually takes the form

\max_{x,\, y} f(x, y) \quad \text{subject to} \quad g(x, y) = 0,

where f(x, y) is a non-linear function of two variables, typically representing utility, production, etc., and g(x, y) = 0 is a linear constraint that represents a budget, costs, etc.

Below we apply this method to the general problem as well as to a running example:

\max_{x,\, y} \; x^2 + y^2 \quad \text{subject to} \quad 2x + 6y = 2000.

Lagrange multiplier and FOC

First we write the problem in the form

L(x, y, n) = f(x, y) + n\, g(x, y),

where n (usually written \lambda) is the Lagrange multiplier. The function L is called the Lagrangian. In our example,

L(x, y, n) = x^2 + y^2 + n(2000 - 2x - 6y).

Then we take the derivatives of L with respect to the variables x, y and the Lagrange multiplier n, and set the derivatives to zero. In our example,

\partial L / \partial x = 2x - 2n = 0, \qquad (1)
\partial L / \partial y = 2y - 6n = 0, \qquad (2)
\partial L / \partial n = 2000 - 2x - 6y = 0. \qquad (3)

Now we have a system of three equations in three unknowns, and we can solve for x, y and n.

We divide equation (1) by equation (2) to eliminate n:

\frac{2x}{2y} = \frac{2n}{6n} = \frac{1}{3}, \quad \text{so} \quad y = 3x.

The above is the first order condition (denoted FOC in econ). In economics, the function f(x, y) is often a utility function u(x, y), and the constraint is often a budget constraint of the form

p_x x + p_y y = m.

In that case, the FOC is the famous

\frac{u_x}{u_y} = \frac{p_x}{p_y},

i.e. the marginal rate of substitution equals the price ratio.

With the FOC in hand, we can plug it into the third equation -- the constraint. In our example, putting y = 3x into (3) yields 2x + 18x = 2000, so x = 100 and y = 300.
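As a quick check, the first order conditions for the running example can be solved symbolically; a minimal sketch using sympy (the variable names are illustrative, not part of the original derivation):

```python
import sympy as sp

x, y, n = sp.symbols('x y n')
# Lagrangian for the running example: f = x^2 + y^2, constraint 2x + 6y = 2000
L = x**2 + y**2 + n * (2000 - 2*x - 6*y)
# First order conditions: all three partial derivatives set to zero
foc = [sp.diff(L, v) for v in (x, y, n)]
sol = sp.solve(foc, (x, y, n), dict=True)[0]
print(sol)  # x = 100, y = 300, n = 100
```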

Second order condition (SOC)

Once we have found the solution, how do we know whether it is a maximum or a minimum? (This question is analogous to the one in single-variable calculus: after solving f'(x) = 0 for x, that x can be either a max or a min.)

The easiest way in multi-variable calculus is to plug in nearby values and check. At the solution, the objective function is f(100, 300) = 100^2 + 300^2 = 100000. Now plug in f(99, 300.333) = 99^2 + 300.333^2 > 100000. Since the value of the objective function at the solution is smaller than the values around it, the solution is a minimum. Note that the point (99, 300.333) is chosen so that it is near the solution and satisfies the constraint g(x, y) = 0. It is important that our test values satisfy the constraint, because those are the only values defined for our problem.
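This plug-in check is easy to automate; a small sketch in Python (the helper names are illustrative):

```python
def f(x, y):
    """Objective function of the running example."""
    return x**2 + y**2

def y_on_budget(x):
    """Given x, return the y that satisfies the constraint 2x + 6y = 2000."""
    return (2000 - 2*x) / 6

base = f(100, y_on_budget(100))    # value at the candidate solution: 100000.0
nearby = f(99, y_on_budget(99))    # value at a nearby feasible point
print(nearby > base)               # True: the nearby value is larger, so it's a minimum
```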

While the above method is the easiest, it is not guaranteed to work (though it works most of the time). A better method is to test the second order condition using the bordered Hessian matrix.

In general, the sufficient second order condition for a constrained maximization is that the symmetric matrix of second derivatives of the Lagrangian is negative definite subject to the constraint. This can be verified by showing that the determinants of the principal minors of the bordered Hessian alternate in sign. Thus, we need to calculate the bordered Hessian matrix for the Lagrangian associated with the problem.

The bordered Hessian is defined as the following matrix:

A = \begin{pmatrix} L_{11} & L_{12} & L_{13} \\ L_{21} & L_{22} & L_{23} \\ L_{31} & L_{32} & L_{33} \end{pmatrix},

where L_{ij} is the second derivative of the Lagrangian with respect to the i-th and j-th variables (for i = 1 the variable is x, for i = 2 it is y, for i = 3 it is the multiplier \lambda, or n). Some examination of the Lagrangian reveals that the entries of the matrix A are actually the following: L_{11} = f_{11}, L_{12} = L_{21} = f_{12}, L_{13} = L_{31} = -p_x, L_{22} = f_{22}, L_{23} = L_{32} = -p_y, and L_{33} = 0. The matrix is thus

A = \begin{pmatrix} f_{11} & f_{12} & -p_x \\ f_{21} & f_{22} & -p_y \\ -p_x & -p_y & 0 \end{pmatrix}.
After computing these second derivatives, we can calculate the principal minors of the bordered Hessian. The first principal minor is the matrix

A_1 = \begin{pmatrix} f_{11} & -p_x \\ -p_x & 0 \end{pmatrix}.

The second principal minor is the entire matrix:

A_2 = A.

In general, the kth principal minor of a bordered Hessian is the matrix consisting of the first k rows and k columns of the original matrix, plus the border.

To check whether the bordered Hessian is negative definite (if so, this second derivative test tells us that the solution we found is a max), we need

|A_1| < 0 \quad \text{and} \quad |A_2| > 0,

where |\cdot| is the determinant. In other words, we need the determinants of the principal minors to alternate in sign, starting from negative.

The determinant of the first principal minor is

|A_1| = f_{11} \cdot 0 - (-p_x)(-p_x) = -p_x^2 < 0.

Note that this determinant is trivially negative, regardless of the form of the objective function. Recall that the determinant of a 2x2 matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix} is ad - bc.

For a general NxN matrix B, the determinant can be calculated using minors (cofactor expansion along row i) as follows:

|B| = \sum_{j=1}^{N} (-1)^{i+j} b_{ij} |B_{ij}|,

where B_{ij} is the (N-1)x(N-1) matrix constructed by deleting the i-th row and j-th column of B.
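The cofactor expansion translates directly into code; a minimal recursive sketch in pure Python (for illustration only, not an efficient algorithm):

```python
def det(B):
    """Determinant via cofactor expansion along the first row."""
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(n):
        # B_0j: the minor obtained by deleting row 0 and column j
        minor = [row[:j] + row[j+1:] for row in B[1:]]
        total += (-1) ** j * B[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # 1*(50-48) - 2*(40-42) + 3*(32-35) = -3
```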

Next, we need to calculate the determinant of the second principal minor of the bordered Hessian (which is just the determinant of the entire matrix) and check its sign: since we need

|A_2| > 0,

the solution is a maximum if this determinant turns out positive.

Thus, expanding along the third row of A_2 (to take advantage of the fact that one of the three terms is multiplied by 0), the determinant of the bordered Hessian is given by:

|A_2| = -p_x (p_x f_{22} - p_y f_{12}) - p_y (p_y f_{11} - p_x f_{21}) = 2 p_x p_y f_{12} - p_x^2 f_{22} - p_y^2 f_{11}.
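This general formula can be double-checked symbolically; a sketch with sympy (the symbol names mirror the text, with f21 = f12 by symmetry):

```python
import sympy as sp

f11, f12, f22, px, py = sp.symbols('f11 f12 f22 px py')
# Bordered Hessian with the border in the last row/column
A2 = sp.Matrix([[f11, f12, -px],
                [f12, f22, -py],
                [-px, -py,   0]])
d = sp.expand(A2.det())
expected = 2*f12*px*py - f11*py**2 - f22*px**2
print(sp.expand(d - expected) == 0)  # True
```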
Back to our running example (we have already shown that the solution we found is a minimum, so we know the second order condition will not work out; this is just for illustration purposes). The Lagrangian of our function is L = x^2 + y^2 + n(2000 - 2x - 6y). Taking the various second derivatives gives us the bordered Hessian matrix:

A = \begin{pmatrix} 2 & 0 & -2 \\ 0 & 2 & -6 \\ -2 & -6 & 0 \end{pmatrix}.
As indicated above, the two principal minors are

A_1 = \begin{pmatrix} 2 & -2 \\ -2 & 0 \end{pmatrix} \quad \text{and} \quad A_2 = A.
The determinant of A_1 is

|A_1| = 2 \cdot 0 - (-2)(-2) = -4 < 0,

as expected. On the other hand, expanding along the first row,

|A_2| = 2\,(2 \cdot 0 - (-6)(-6)) - 0 + (-2)\,(0 \cdot (-6) - 2 \cdot (-2)) = -72 - 8 = -80 < 0,

not satisfying the second order condition, as expected.
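The same second order check can be verified numerically; a sketch using numpy, with the matrix entries computed from the example's Lagrangian L = x^2 + y^2 + n(2000 - 2x - 6y):

```python
import numpy as np

# Bordered Hessian of L = x^2 + y^2 + n*(2000 - 2x - 6y), variables ordered (x, y, n)
A = np.array([[ 2.0,  0.0, -2.0],
              [ 0.0,  2.0, -6.0],
              [-2.0, -6.0,  0.0]])
A1 = A[np.ix_([0, 2], [0, 2])]   # first principal minor: keep x and the border

d1 = np.linalg.det(A1)           # -4: negative, as required
d2 = np.linalg.det(A)            # -80: would need to be positive for a max, so the test fails
print(round(d1), round(d2))
```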