Monday, February 3, 2014

Why We Trade: Some Fundamental Economic Principles

How to Slice a Pie

Welcome to economics boot camp.  There are a few key ideas that I'd like to take time to explain, because they're absolutely crucial to building up the models I'll be writing about here. Even more important than the math, though, is getting the intuition behind these concepts. If you understand the rest of this post, you should have a solid grasp on how I am building the models for my simulation, and why they work the way they do.

To see more of these ideas in action, have a look at these practice problems and solutions for a General Equilibrium model, from an economics course taught at Columbia University. (I originally planned to dig through a box in my attic to find some examples from my own grad school notes, but Google just makes it so easy to find these sorts of things.) If you follow through problem 3, in particular, you will have a very good understanding of how I came up with the transaction function that I described in my previous post on Peter Norvig's Economic Simulator.


Economics is, above all else, the study of how people make choices. Choice is all about balancing two opposing forces: benefits and costs. Costs and resource constraints are easy enough to measure. Measuring the benefit you get from owning or doing something is a bit more subjective. That's where the idea of utility comes in.

The word "utility" implies usefulness, but when we're speaking Economese, there's more to it than that. Broadly speaking, utility is a measure of the satisfaction or happiness that someone is getting out of life, based on the things they have and the things they're doing. In real life, you know how much you enjoy going to the movies, and how much you're willing to pay for a ticket. In economic models, a utility function is a tool that we can use to calculate these values for simulated agents.

How do we choose a mathematical function to model an abstract concept like utility? We don't want to make strong assumptions, but there are a few generally accepted principles that can be used to build reasonable utility functions.
  • Utility functions are monotonically increasing. As you get more and more of a good, you will always be better off, or at least no worse off. This assumes free disposal: if you have too much of something, you can always get rid of it for free.
  • Utility functions should be concave. That means that the slope of the function either holds steady or gets flatter as the inputs increase. (Economists often describe this by saying preferences are convex, because the indifference curves we'll meet later bow toward the origin.) This captures the idea of diminishing returns: maybe you'd rather eat four donuts than just one, but the last one won't taste as good as the first one did. (Unless you started with a jelly and ended with a Boston Creme -- but stop complicating things, okay?)
  • Utility functions should be "smooth". That is, you should be able to calculate as many derivatives as you need to. In English, that means there should be no gaps or sharp corners on the curve. Smoothness is important because we use the derivative to measure marginal utility, or how much the last (or next) unit of a good is worth to a person. We'll discuss marginal utility next.
Some good candidates for utility functions are logarithms and square roots (or other power functions with exponents between 0 and 1). They are increasing and concave, and it's easy to take their derivatives.
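As a quick sanity check, here's a minimal Python sketch (my own, not part of the simulator) verifying that square-root utility is increasing but with diminishing marginal gains:

```python
import math

def utility(x):
    """Square-root utility: increasing, concave, and smooth for x > 0."""
    return math.sqrt(x)

# Marginal gain from each additional unit, going from 1 up to 5 units.
gains = [utility(n + 1) - utility(n) for n in range(1, 5)]

# Utility always rises (monotonically increasing)...
assert all(g > 0 for g in gains)
# ...but each extra unit adds less than the one before (diminishing returns).
assert all(prev > nxt for prev, nxt in zip(gains, gains[1:]))
```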

Marginal Utility

Given how much of a good you currently have, how much is the next one worth to you? We calculate this value, the marginal utility, as the slope of the utility curve at the current allocation. Continuing with the idea of a square root utility function, the slope of that tangent line is

$$ \frac{d}{dx}U(x) = \frac{1}{2}x^{-\frac{1}{2}} $$

Marginal utility is a very important concept because people make decisions at the margin. Maybe you have a list of projects you want to do around the house; think of these as investments in your home. You might plan all of these out on January 1st and figure you'll spend $10,000 on home improvement projects this year. Then December 31st rolls around and you find you've spent $20,000 instead. Let's assume this is because you decided to do more work, not because you were surprised by how much each project cost. When it actually came time to make each decision, you wanted to do more than you originally thought you would. You made plans at the beginning of the year, but you made decisions throughout the course of the year.

Remember, economists are more interested in studying people's actions than their intentions. It's called the dismal science because it sucks all the romance and idealism out of life and just looks at the cold hard facts.

So, why is marginal utility so important? You've heard the expression "get the most bang for your buck." Well, marginal utility is the bang in that expression, and marginal cost, or price, is the buck. If the marginal utility is more than the marginal cost (i.e., the utility curve is steeper than the price line), then the benefit you gain from consuming more is greater than the cost, so you should buy more. If the marginal utility is less than the marginal cost (utility is flatter than the price), then you're paying more for your last unit of the good than how much you are enjoying it, and you should buy less.

If an apple costs 50 cents, then you buy apples until the last one is worth 50 cents to you. In the utility graph above, a price of 50 cents can be represented by a slope of 1/2. If we model an agent whose utility of apples is the square root of the number of apples they eat, and the price of apples is 50 cents, then they eat the number of apples where the slope of the utility function is 1/2 -- which in this case is one apple.
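To make that concrete, here is the apple arithmetic in a few lines of Python (a sketch of my own, assuming the square-root utility function from above):

```python
def marginal_utility(x):
    """Derivative of sqrt utility: U'(x) = (1/2) * x^(-1/2)."""
    return 0.5 * x ** -0.5

def optimal_quantity(price):
    """Buy until marginal utility equals price:
    1 / (2 * sqrt(x)) = p  =>  x = 1 / (4 * p**2)."""
    return 1.0 / (4.0 * price ** 2)

x_star = optimal_quantity(0.5)   # 1.0 -- one apple, as in the example above
assert abs(marginal_utility(x_star) - 0.5) < 1e-12
```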

So what about models with more than one good?

Marginal Rate of Substitution

In a barter economy where you have a basket full of oranges and you want to have some apples to make a fruit salad, you trade oranges for apples until an apple and an orange are both worth the same to you. 

This is actually more significant than it sounds, because whereas in a one-good model we were just looking at how you can get more utility by buying more stuff, now we'll take a look at how you can get more utility by adjusting your allocation of two goods. The total value of your allocation stays the same, but now you can be made better off by adjusting the balance of different things you have. When we move up to three or more goods, we'll still examine the same basic ideas as in the two-good model, but we'll have to add more parameters to our model and make our analysis more complicated. For now, while we're still learning the ropes, there's really nothing to gain from looking at models with more than two goods.

So how do we improve utility without changing the value of our allocation? Let's talk about substitutes and complements. Hot dogs and hamburgers are substitutes. Hot dogs and hot dog buns are complements. When goods are perfect substitutes, we can freely trade one for the other without changing our total utility. When they are perfect complements, we have to consume them in some fixed ratio, so we don't have that freedom to make trade-offs.

Easy enough? What about cake and ice cream? Each is delicious on its own, but they're also better together. So cake and ice cream are substitutes (sort of), but they're also complements (sort of). Complements enhance each other's value when you have both of them. Substitutes can be freely interchanged and you will be indifferent toward the results.

From a theoretical point of view, when we look at utility functions over two goods, we're looking at a surface in three dimensions. This makes it harder to draw on paper, so instead of having the x-axis represent quantity of a good and the y-axis represent utility levels, we have the axes represent the quantities of two goods, and then we draw contour lines of the utility function at different levels (i.e., every point along a contour line represents the same level of utility, based on different allocations of the two goods).  These contour lines are called indifference curves, because every allocation along them yields the same utility, so an agent would be indifferent between two such allocations.

I'll use the Cobb-Douglas utility function for an example as we continue. It has some properties that make it very easy to calculate the values we will be interested in during this study. A Cobb-Douglas utility function for two goods x and y takes the form:

$$ U(x, y) = x^a y^b $$

The utility function will be increasing and smooth, with convex indifference curves, as long as the preference parameters a and b are both between 0 and 1. If we hold U constant, then it is easy to plot indifference curves at different utility levels by solving for y as a function of x:

$$ y = \left( \frac{U}{x^a} \right)^{1/b} $$
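Solving U = x^a y^b for y gives y = (U / x^a)^(1/b). A small Python sketch (helper names are mine, not from the simulator) traces one indifference curve and confirms that utility stays constant along it:

```python
def cobb_douglas(x, y, a=0.5, b=0.5):
    """Cobb-Douglas utility U(x, y) = x^a * y^b."""
    return x ** a * y ** b

def indifference_y(x, u_level, a=0.5, b=0.5):
    """Solve u_level = x^a * y^b for y: the indifference curve at u_level."""
    return (u_level / x ** a) ** (1.0 / b)

# Every point along the curve yields the same utility level.
for x in [0.5, 1.0, 2.0, 4.0]:
    y = indifference_y(x, u_level=2.0)
    assert abs(cobb_douglas(x, y) - 2.0) < 1e-9
```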

Remember how when there is only one good, the derivative of the utility function is the marginal utility of that good? Well, the slope of these contour lines is (the negative of) the marginal rate of substitution. It tells us how much of good y we are willing to give up for one more unit of good x, at the margin. More formally, the marginal rate of substitution is calculated as the ratio of the marginal utilities of the two goods. This is expressed mathematically as

$$ MRS(x, y) = \frac{MU(x)}{MU(y)} = \frac{\partial U / \partial x}{\partial U / \partial y} $$
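For the Cobb-Douglas form U = x^a y^b, the marginal utilities are MU(x) = a x^(a-1) y^b and MU(y) = b x^a y^(b-1), so the ratio collapses to (a / b) * (y / x). Here's a quick Python check of that closed form against numerical derivatives (a sketch of my own):

```python
def mrs_closed_form(x, y, a, b):
    """MRS for U = x^a * y^b: the ratio of marginal utilities simplifies
    to (a / b) * (y / x)."""
    return (a / b) * (y / x)

def mrs_numeric(x, y, a, b, h=1e-6):
    """MRS via central-difference approximations of the partial derivatives."""
    u = lambda x, y: x ** a * y ** b
    mu_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    mu_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return mu_x / mu_y

assert abs(mrs_closed_form(2.0, 3.0, 0.5, 0.5) - mrs_numeric(2.0, 3.0, 0.5, 0.5)) < 1e-6
```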

The Cobb-Douglas form is especially useful for modeling goods that are somewhere in between being perfect substitutes and perfect complements. Another reason it's so popular: if you choose the right values for a and b, the math becomes quite simple when you're calculating the marginal rate of substitution.

Now, what's so great about the marginal rate of substitution? Well, whenever two people have different marginal rates of substitution for two goods, they can both be made better off if they trade with each other. Not only that, but we also know how much they will trade: they will trade until their MRS's are equal. As we'll see, there are actually lots of allocations where that happens.

Let's consider that in a bit more detail. How do we know that two people with different marginal rates of substitution will trade? Intuitively, it's because one of them values good X (in terms of Y) relatively more than the other one does -- which means that the other one values good Y (in terms of X) relatively more than the first one. This means that the first agent will trade some amount of Y for some amount of X from the other one. They can continue to find better allocations until they reach an allocation where their MRS's are equal. At this point, they each value the two goods equally, so there is no incentive for further trade.
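Here's a tiny numerical illustration of that story (toy endowments of my own, with both agents sharing the utility U = sqrt(x * y)): the agents start with very different MRS's, trade, and both end up strictly better off at a point where their MRS's are equal.

```python
import math

def utility(x, y):
    """Cobb-Douglas with a = b = 1/2: U = sqrt(x * y)."""
    return math.sqrt(x * y)

def mrs(x, y):
    """With equal exponents, MRS = (a/b) * (y/x) = y / x."""
    return y / x

# Agent 1 is X-poor and agent 2 is Y-poor, so their MRS's differ widely.
before1, before2 = (1.0, 9.0), (9.0, 1.0)
assert mrs(*before1) > mrs(*before2)

# Agent 1 gives up 4 units of Y for 4 units of X; both end at (5, 5).
after1, after2 = (5.0, 5.0), (5.0, 5.0)
assert utility(*after1) > utility(*before1)
assert utility(*after2) > utility(*before2)
assert mrs(*after1) == mrs(*after2)   # no incentive for further trade
```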

We can see this graphically, but first I need to introduce a new kind of chart that helps illustrate what is going on here.

Edgeworth Boxes

An Edgeworth box is a tool for modeling trade between two agents, each of whom has some quantity of two different goods. It is constructed by taking one agent's utility chart, turning it upside down, and placing it on top of the other one's chart so that the two form a rectangle. The width of the rectangle is equal to the total amount of X owned by the two agents, and the height is equal to the total amount of Y. Somewhere inside the box we find the allocation that the two agents start with.

Measured from the lower left corner, we have the first agent's allocation; and measured from the top right corner, we have the second agent's allocation. Unless we are very lucky, the two agents' indifference curves through the initial allocation will intersect at two points (one being the initial allocation itself), and there will be some lens-shaped area enclosed between them.

Note that every point inside the box is a feasible allocation of goods X and Y between the two agents, and every point inside the area between the initial indifference curves will make both agents better off than where they started. So they can trade to any point inside this enclosed area and be better off. Our job as economists is to find an equilibrium within this area -- an equilibrium is an allocation where neither agent can gain any more from trading.

So what condition will tell us when there are no more gains to be made from trading?

Pareto Efficiency

How do we find an equilibrium? Well, remember that our agents will trade as long as they have different marginal rates of substitution; so they will find an equilibrium when their marginal rates of substitution are equal. Graphically, this means that at an equilibrium, their two indifference curves will be tangent to each other -- they will only touch at one point.

When utility functions are increasing and indifference curves are convex, then for a given indifference curve for agent 1, there will be one and only one indifference curve for agent 2 that is tangent to it, and they will be tangent at exactly one point. At that point, not only is there no incentive to trade any more (since the MRS's are equal), but it is impossible to move to any other allocation without making at least one of the agents worse off. This condition -- where there are no more win/win trades to be made -- is called Pareto efficiency or Pareto optimality. It is optimal in the sense that the two agents have captured all of the potential gains from trade.

(Like many terms in economics, this one is named after the economist who first popularized the concept. Personally, I would prefer using terms that are more descriptive of the concept, but economists like to secure their legacy by giving their own names to ideas they come up with, and who am I to rob them of being remembered for their work?)

There are actually an infinite number of these equilibria -- Pareto efficient allocations -- in this model. They all lie along a path called the contract curve.

Contract Curves

So, there is a Pareto efficient allocation for every indifference curve of each agent. If we plot all of these allocations in the Edgeworth box, we will have plotted the contract curve. So how do we plot it? We have four values that define an allocation: the quantities of goods X and Y that go to agents 1 and 2. We also have three equations that must be satisfied in an equilibrium:
  • MRS(1) = MRS(2)
  • X(1) + X(2) = X(total)
  • Y(1) + Y(2) = Y(total)
The first condition ensures Pareto optimality of the final allocation. The second and third conditions are called feasibility conditions: the totals of the allocations equal the total available amounts of the goods; no more, no less. Notice that we have four unknowns but only three equations, leaving one degree of freedom -- that is why the solution is a curve rather than a single point. Solving this system is rather straightforward, as long as we have well-behaved utility functions. ("Well-behaved" is a euphemism that economists use to mean "easy enough for me to solve without having to spend hours on the algebra.")

(Note that the contract curve turns out to be a straight line -- the diagonal of the box -- when the agents have identical Cobb-Douglas utility functions.)

So here's how we solve it: first, use the feasibility conditions to solve for X(2) and Y(2):
  • X(2) = X(total) - X(1)
  • Y(2) = Y(total) - Y(1)
Now substitute these values into the Pareto optimality condition. Note that MRS(1) is a function of X(1) and Y(1), and MRS(2) is a function of X(2) and Y(2). When we substitute our solutions to the feasibility conditions, we can get rid of X(2) and Y(2) in the optimality condition. So now the optimality condition is an equation that depends only on X(1) and Y(1). If we rearrange terms to solve for Y(1) as a function of X(1), then we have the equation that defines our contract curve.
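As a concrete check of this recipe, here is the contract curve worked out in Python for two agents with identical Cobb-Douglas utilities (a toy setup of my own; with identical exponents the algebra reduces to the diagonal of the box):

```python
X_TOTAL, Y_TOTAL = 10.0, 20.0   # total endowments of the two goods
A, B = 0.5, 0.5                 # identical preference parameters for both agents

def mrs(x, y):
    """Cobb-Douglas MRS: (A / B) * (y / x)."""
    return (A / B) * (y / x)

def contract_curve(x1):
    """Setting MRS(1) = MRS(2) and substituting the feasibility conditions
    gives y1 / x1 = (Y_TOTAL - y1) / (X_TOTAL - x1), which simplifies to
    the box diagonal: y1 = (Y_TOTAL / X_TOTAL) * x1."""
    return (Y_TOTAL / X_TOTAL) * x1

# Along the curve, the two agents' MRS's are equal at every allocation.
for x1 in [1.0, 4.0, 7.5]:
    y1 = contract_curve(x1)
    x2, y2 = X_TOTAL - x1, Y_TOTAL - y1
    assert abs(mrs(x1, y1) - mrs(x2, y2)) < 1e-12
```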

Competitive Equilibrium

Finally, this brings us to the "edgeworth_trade" transaction function I implemented in the simulator last week. Of all the possible Pareto optimal allocations, why did I choose the one I did? I chose to go with the competitive equilibrium. In economics, a competitive equilibrium is one in which none of the agents has enough influence to determine the market price -- the market price is determined by supply and demand, and the agents take that as given and try to maximize utility within that constraint.

Now, this seems like an odd way to solve things in a barter economy where there are only two agents, but here's how it works. In the final equilibrium, we know that the MRS's of both agents will be equal. We also know from our discussion on utility that agents will buy a good until their marginal utility is equal to the price. So in this model, the relative "price" that the agents negotiate, i.e., the terms of trade that determine how many units of good X will be traded per unit of good Y, will have to be equal to the MRS of the agents in equilibrium.

Graphically, a price is just a straight line that shows how much of X must be given up for a quantity of Y. All that matters is the relative price, so we can say that the price of Y is 1, and we are trying to solve for the price of X. So we are looking for the point on the contract curve where the MRS's of the two agents (which will be equal) will form a straight line (with slope equal to the price of X) that connects the equilibrium allocation to the original allocation. But how can we find that if we don't know what the final price will be? Let's look at the conditions that must be met, if we somehow know what the final price is:
  • The MRS of each agent will be equal to the equilibrium price of X (with the price of Y normalized to 1).
  • The total value of agent 1's final allocation (the quantities of the goods times their prices) must be equal to the value of agent 1's initial allocation.
  • The total value of agent 2's final allocation (the quantities of the goods times their prices) must be equal to the value of agent 2's initial allocation.
  • Markets must clear. That is, X(1) + X(2) = X(total), and Y(1) + Y(2) = Y(total). In English, the sums of the final allocations of each good must equal the sums of the initial allocations.
These last two conditions are known as the agents' budget constraints. What they buy can't be worth more than the value of what they start out with. So how do we solve this system of equations?
  1. Find the demand function for each agent. MRS(1) is a function of X(1) and Y(1). Set this expression equal to the equilibrium price, and then solve the equation for Y(1) in terms of X(1) and the equilibrium price.
  2. Substitute these equations in for the values Y(1) and Y(2) in the budget conditions of the two agents. Recall that the budget condition for agent 1 is a function of the unknown final allocations X(1), Y(1), the known initial allocations, and the unknown equilibrium price. This substitution for Y(1) eliminates one of the unknown variables, and now we are left with a function relating X(1) to the equilibrium price. This is agent 1's demand function, and tells us how much of X(1) she will buy at a given price. We can do this for agent 2 as well.
  3. Now we can substitute these demand functions in for the values in the market clearing condition, X(1) + X(2) = X(total). We are left with a single equation with a single unknown value, which is the equilibrium price. We can now solve for that equilibrium price.
  4. Plugging that equilibrium price back into our demand functions, we can now calculate X(1) and X(2). Finally, we can plug X(1), X(2), and the price into the expressions for Y(1) and Y(2) that we found in step 1, and thus calculate Y(1) and Y(2). We now know the price and allocations in the Pareto optimal competitive equilibrium.
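The four steps above can be sketched in Python for two Cobb-Douglas agents (a toy implementation of my own, not the simulator's "edgeworth_trade" function). With U_i = x^(a_i) * y^(1 - a_i) and the price of Y normalized to 1, the demand functions are x_i = a_i * w_i / p and y_i = (1 - a_i) * w_i, and market clearing in X pins down the price:

```python
def equilibrium(endow1, endow2, a1=0.5, a2=0.5):
    """Competitive equilibrium for two agents with U_i = x^a_i * y^(1 - a_i).
    Prices: p for good X, 1 for good Y (only the relative price matters)."""
    ex1, ey1 = endow1
    ex2, ey2 = endow2
    # Step 3: substitute the demands into X-market clearing and solve for p.
    p = (a1 * ey1 + a2 * ey2) / ((1 - a1) * ex1 + (1 - a2) * ex2)
    # Steps 1-2: each agent's wealth is the value of their endowment.
    w1, w2 = p * ex1 + ey1, p * ex2 + ey2
    # Step 4: plug the price back into the demand functions.
    x1, y1 = a1 * w1 / p, (1 - a1) * w1
    x2, y2 = a2 * w2 / p, (1 - a2) * w2
    return p, (x1, y1), (x2, y2)

p, (x1, y1), (x2, y2) = equilibrium((1.0, 9.0), (9.0, 1.0))
assert abs(x1 + x2 - 10.0) < 1e-12 and abs(y1 + y2 - 10.0) < 1e-12  # markets clear
assert abs((y1 / x1) - p) < 1e-12   # each agent's MRS equals the price ratio
```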

Congratulations, you now understand general equilibrium theory

Well, you know the basics, anyway. Obviously there's a lot more to it than this, but if you've made it this far you understand a lot of the fundamental principles. Review these definitions, and read through the Columbia notes I linked to at the top of the post, and you will be well on your way toward knowing how this stuff works.

What's next?

In the next post, I'm going to simulate a model for bargaining power and analyze trades that end up at Pareto optimal allocations that are not competitive equilibria.
