“The fact that our lifespans are not infinite in duration makes it difficult to accept any set of assumptions that lead to a dollar having nonzero value to a person after their death.”

I think it’s very, very weird to posit that people place no value on their fortunes outlasting them. Why, for example, do they go to so much effort to minimize transfer taxes?

As for the scale affecting things, the Ainslie experiment was about fairly small amounts of money, either now or in the future.

]]>You got it. The derivation you describe is on the Wikipedia page for hyperbolic discounting:

https://en.wikipedia.org/wiki/Hyperbolic_discounting#Uncertain_risks

]]>Thank you very much!

]]>Okay, I updated it!

]]>I would like to join the community on Zulip. Would it be possible for you to kindly update the link?

Many thanks in advance. ]]>

Sorry for quoting you out of context, but it seemed worth getting people to hear this. I think a lot of economics is based on this scalar notion of value (“utility”), so it’s worth seeing what one can do in this framework, even though I’m very skeptical of it.

]]>Interesting! So you’re saying 1/(1 + kt) can be written as an integral over λ of exponential discount functions e^(−λt), weighted by some function of λ?

(It obviously can, I’m just wondering if this is math you’re alluding to—and what’s the weighting function.)
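The derivation on the linked Wikipedia page can be checked numerically. A minimal sketch, assuming an exponential prior over hazard rates λ with mean k: averaging the exponential discount factors e^(−λt) over that prior reproduces the hyperbolic discount function 1/(1 + kt).

```python
# Numerical check of the "uncertain risks" derivation: mixing
# exponential discount factors e^(-lam*t) over an exponential prior
# on the hazard rate lam (with mean k) gives the hyperbolic 1/(1 + k*t).
# Symbols k, t, lam follow the Wikipedia derivation, not the post.
import math

def hyperbolic_via_mixture(t, k, n=200_000, lam_max=50.0):
    """Midpoint-rule approximation of
    integral_0^inf e^(-lam*t) * (1/k) * e^(-lam/k) dlam."""
    dlam = lam_max / n
    total = 0.0
    for i in range(n):
        lam = (i + 0.5) * dlam
        prior = (1.0 / k) * math.exp(-lam / k)   # exponential prior density
        total += math.exp(-lam * t) * prior * dlam
    return total

k = 1.0
for t in [0.5, 1.0, 5.0]:
    print(t, hyperbolic_via_mixture(t, k), 1.0 / (1.0 + k * t))
```

So the weighting function in this particular derivation is the exponential density (1/k)e^(−λ/k); other priors over λ would give other discount curves.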

]]>When looking at the assumptions that exponential discounting is based on, I actually take issue with both (1) and (2). I also think there is a missing third variable in the function V – something that measures the scale. The more money you handle, the less impact one dollar makes. Much of human perception scales logarithmically (e.g. volume measured in decibels), so I wouldn’t be surprised if the third variable ended up being the logarithm of how much money is handled, rather than simply how much money is handled.
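A toy illustration of the logarithmic-scale speculation above (the log-of-wealth value function here is an assumption, not anything from the original post): if value scales like the logarithm of the amount handled, the marginal value of one extra dollar falls off roughly as 1/wealth.

```python
# Hypothetical log-scale value function, as speculated in the comment:
# value ~ log(wealth), so one extra dollar changes value by about 1/wealth.
import math

def marginal_value_of_dollar(wealth):
    """Change in log-wealth from gaining one dollar."""
    return math.log(wealth + 1) - math.log(wealth)

for w in [100, 10_000, 1_000_000]:
    print(w, marginal_value_of_dollar(w))
```

Under this assumption the same dollar matters about ten thousand times less to someone handling a million dollars than to someone handling a hundred.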

But I think a flaw with both the stated assumptions can be brought to light just by considering life expectancy tables. I don’t think this is the only flaw, but it’s the easiest to articulate. As mentioned above, I think V(t,s) should be zero if you can guarantee the beneficiary will be dead at time s. The simplest fix for this flaw would be to start with a function W(t,s) that satisfies (1) and (2), and then multiply by the conditional probability function P(alive at time s | alive at time t). The product would be a better candidate for V(t,s). It should be obvious this V(t,s) would not satisfy (2), and with a little bit of thought it should be clear that it would not satisfy (1), either.
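The proposed fix can be sketched numerically. A minimal illustration, with assumed parameters throughout (the post's assumptions (1) and (2) are not quoted here, but stationarity — V(t, s) depending only on s − t — is what this breaks): take W(t, s) = e^(−r(s−t)) and multiply by the conditional survival probability, using a Gompertz survival curve as a stand-in for a life table.

```python
# Sketch of V(t, s) = W(t, s) * P(alive at s | alive at t).
# All parameters are illustrative: r is an assumed discount rate,
# and (B, C) are Gompertz mortality parameters standing in for a life table.
import math

R = 0.03          # exponential discount rate (assumed)
B, C = 1e-4, 0.085  # Gompertz hazard parameters (illustrative)

def survival(age):
    """P(alive at age | alive at 0) under a Gompertz hazard."""
    return math.exp(-(B / C) * (math.exp(C * age) - 1.0))

def V(t, s):
    """Survival-weighted exponential discount factor."""
    W = math.exp(-R * (s - t))
    return W * survival(s) / survival(t)

# Same gap s - t = 10 years, different starting ages:
print(V(20, 30))   # young: survival barely dents the discount
print(V(70, 80))   # old: the survival term dominates
```

Both calls use the same gap s − t = 10, yet they give noticeably different values, so this V depends on more than s − t alone — exactly the failure of stationarity argued for above.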

]]>Just a clarifying note, some of what I said in that “elsewhere” is a bit out of context here. There, the discussion included the idea that every decision, financial or otherwise, could be measured by some scalar notion of value, which one could weight by a future-devaluing function (nominally exponential). Maximizing your lifetime value would presumably provide the most self-fulfilled life.

]]>Yes, good point!

Elsewhere Jason Erbele wrote:

]]>To use the simplified “value of a dollar” analogy, a dollar is useless to me if I am dead. Thus, the ratio of Friday’s dollar to Thursday’s dollar will depend on the probabilities of being alive on Thursday and on Friday. The probability of surviving until Thursday is generally slightly higher than the probability of surviving until Friday, so the ratio of probabilities is usually close to constant, but can vary significantly in terminal or near-death situations. (And in those situations, someone who believes in an afterlife might consider it rational to focus significantly more on the believed currency of the afterlife than on the currency of life.)

There are also scenarios where the value derived from doing something now will be less than the value of doing the same thing later because we do not yet have the equipment to get the full value now – new technologies or new techniques can sometimes add more value than the loss of value that decay takes away. A breakthrough in camera sensitivity could potentially make up for a delay in launching a spacecraft. An aspiring artist with a grand concept may not be able to communicate that concept properly without putting it off for a time in order to learn the technical details necessary for the vision to become reality. Reading the works of great writers in the original languages is less valuable before you learn the languages than after. The examples are numerous.

So I think this idea of maximizing “value” has some merit, but a) reduction of value to a single number probably loses a lot of information, and b) our current abilities to estimate and measure value are probably inadequate to perform maximization calculations. To some extent, I think everyone has some kind of mental heuristic that attempts to maximize value, and the idea behind this thread could contribute toward better heuristics.