The concept of Pareto-optimality is the furthest one can go in the direction of “maximizing total happiness”.

But ‘total utility’ is such an ill-defined concept that we can also reject the principle of ‘maximizing total utility’ on general theoretical grounds, not just on the grounds that it feels wrong.

For example, if you have a dental cavity that is painful (daily payoff = −2) but not as painful as going to the dentist to get it treated (payoff = −10 on the day of the visit), then the “present you” does not want to go to the dentist. The “next-moment you” may wish you had already done it, but won’t go to the dentist either.

And you will end up postponing the dental appointment forever.
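The reasoning above can be sketched as a tiny simulation. The numbers are the ones from the example; the only added assumption is that the −10 is paid once, on the day of the visit, while the −2 recurs every day the cavity goes untreated:

```python
# Sketch of the myopic dentist dilemma: each day the agent compares
# only today's payoffs.  Visiting costs -10 once; the untreated cavity
# costs -2 every single day.  (Payoffs from the example above.)

VISIT_COST = -10   # one-time payoff of going to the dentist
CAVITY_PAIN = -2   # daily payoff of living with the cavity

def myopic_choice():
    """A purely present-focused agent compares only today's payoffs."""
    return "postpone" if CAVITY_PAIN > VISIT_COST else "visit"

def total_payoff(days, visit_on_day=None):
    """Cumulative payoff over `days`; visiting ends the daily pain."""
    total = 0
    for day in range(days):
        if visit_on_day is not None and day == visit_on_day:
            total += VISIT_COST
        elif visit_on_day is None or day < visit_on_day:
            total += CAVITY_PAIN
    return total

# Every single day, the myopic comparison says "postpone"...
print(myopic_choice())                   # -> postpone
# ...yet over a month, never visiting is far worse than visiting at once:
print(total_payoff(30))                  # -> -60  (never visit)
print(total_payoff(30, visit_on_day=0))  # -> -10  (visit on day 0)
```

So each momentary self chooses correctly by its own lights, while the sequence of selves does far worse than any of them would want, which is exactly why “maximize the payoff” is ambiguous here.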

It’s very hard to even define happiness in a way that justifies summing it over people. What if I claim I’m always twice as happy as you under any conditions? If true, this means my happiness contributes twice as much to the total as yours, so it’s more important to please me than to please you if we’re trying to maximize this total! If false, how do we show it’s false?

The von Neumann–Morgenstern utility theorem gives conditions under which we can numerically represent someone’s happiness (or strictly speaking, utility). However, the result is only well-defined up to an additive constant and a positive multiplicative constant. So, it doesn’t let us determine whether I am really twice as happy as you!

Furthermore, the conditions of this theorem are rarely true in ordinary life… much to the secret shame of economists.
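This point can be made concrete with a small sketch. A vNM utility function is only pinned down up to u → a·u + b with a > 0, so rankings of lotteries survive rescaling, but a “total happiness” sum across people does not. All the utility numbers below are made up purely for illustration:

```python
# Sketch: vNM utilities are only defined up to positive affine
# rescaling, so lottery *rankings* are invariant, but interpersonal
# *sums* are not.  All numbers are invented for illustration.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u[outcome] for p, outcome in lottery)

u = {"apple": 1.0, "banana": 4.0, "cherry": 10.0}   # my utilities
v = {k: 2.0 * x + 5.0 for k, x in u.items()}        # a positive affine rescaling

safe   = [(1.0, "banana")]
gamble = [(0.5, "apple"), (0.5, "cherry")]

# The rescaling never flips which lottery I prefer:
assert (expected_utility(safe, u) < expected_utility(gamble, u)) == \
       (expected_utility(safe, v) < expected_utility(gamble, v))

# But interpersonal sums are not invariant.  If I rescale my utility by
# a factor of 3 -- my own preferences are completely unchanged -- the
# outcome that maximizes the two-person total flips from A to B:
me, you = {"A": 1.0, "B": 2.0}, {"A": 3.0, "B": 1.0}
me2 = {k: 3.0 * x for k, x in me.items()}
print(me["A"] + you["A"], me["B"] + you["B"])    # 4.0 3.0  -> A wins
print(me2["A"] + you["A"], me2["B"] + you["B"])  # 6.0 7.0  -> B wins
```

The second half is exactly the “twice as happy” problem: nothing observable about my choices changes under the rescaling, yet the “total” picks a different winner.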

http://en.wikipedia.org/wiki/Brain_stimulation_reward

There might be a problem: according to that article, an overabundance of happiness can lead to starvation and death. I’m not sure it was ever tested in humans, though. If it’s true and “maximum total happiness” is the goal, the implant should probably be limited somehow; for example, it could require a certain minimal level of blood sugar to work.

That makes me wonder whether and when such technology will become popular. It seems to be within our technological capabilities now, but I have yet to see an internet ad for a brain implant. Of course, if widespread, such a practice would have far-reaching consequences.

So: we say a rational agent does the best possible job of maximizing their payoff given the information they have. But in economics, this payoff is often called utility.

For the amusement of the class, this reminds me of a line in “Gut Feelings” by G. Gigerenzer (also recommended by Tim van Beek in our Recommended Reading):

A professor from Columbia University was struggling over whether to accept an offer from a rival university or to stay. His colleague took him aside and said, “Just maximize your expected utility — you always write about doing this.” Exasperated, the professor responded, “Come on, this is serious.”