hopefully the government is expediting the grant proposal review process

Feeling snarky in my Cost-Benefit Analysis class last night, I tweeted:

But of course, like any good nerd, I decided there is no question too facetious to ponder, and began to think about how I’d answer it. And that led me to an interesting conclusion:

Let’s define the apocalypse here as “a discrete, single-occurrence, theoretically preventable event with a 100% chance of causing human extinction.” Think “sizeable impact event.” To determine whether we should attempt to prevent the impact event, the net present benefits must outweigh the net present costs. And while the costs may be easy to calculate (space shuttles, nukes, slightly lower tax revenue, etc.), the benefits are a bit trickier to define.

Essentially, the benefit of preventing the apocalypse at some time t is the expected sum of all human utility (which we will encode here as ☺) from t to ∞, discounted by the social discount rate r. This creates a perpetuity, the present value of which is given by a simple formula:

PV = C/i

Which is to say, the value of an individual payment of the perpetuity divided by the prevailing interest rate. In this case:

PV = ☺/r
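
To make the flat perpetuity concrete, here is a minimal Python sketch with made-up numbers (100 utiles per period, a 5% discount rate; both are illustrative assumptions, not estimates):

```python
# Flat perpetuity: PV = ☺ / r, with the first payment one period out.
smiley = 100  # utiles per period (illustrative)
r = 0.05      # social discount rate (illustrative)

pv_closed = smiley / r
print(pv_closed)  # 2000.0

# Cross-check by summing the discounted stream directly;
# the partial sum converges to the closed form.
pv_sum = sum(smiley / (1 + r) ** t for t in range(1, 10_000))
print(round(pv_sum, 2))  # ~2000.0
```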

But this is an odd result – it posits that the sum of all human utility from t to ∞ is finite and well-defined (assuming that ☺ is measurable, which of course it is not, but as we will see soon does not have to be). However, this result does not account for a key factor – the growth of ☺.

Let’s assume that we expect ☺ to grow (which I believe is reasonable for all kinds of reasons) and that it grows at an expected rate (which may be less reasonable, but hey, that’s why we say “assume”). The formula for a perpetuity in which the payment grows over time is:

PV = ☺/(r - g)

Where g is the rate of growth of the payment (here, ☺). Now this sets up a clear question: if g ≥ r, the PV of our perpetuity is ∞; if g < r, the PV is finite.
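
A small Python sketch shows how the formula behaves as g approaches and then reaches r (again, all numbers are illustrative assumptions, not estimates):

```python
# How PV = ☺/(r - g) behaves as g approaches r.
smiley = 100  # ☺ in the first period (illustrative)
r = 0.05      # social discount rate (illustrative)

for g in (0.01, 0.03, 0.049, 0.05):
    if g < r:
        print(f"g={g}: PV = {smiley / (r - g):,.0f}")
    else:
        # payments grow at least as fast as they are discounted,
        # so the infinite sum diverges
        print(f"g={g}: PV = ∞")
```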

How do we know which? The key here is that, without having to speculate at all about the values of g or r, we can say something definitive about the relationship between them.

Let’s assume that 1 utile = $1. If r < g, then we would expect to see social investment in projects that reduce E(☺). Now, we do see those, because humans are fallible and politics is politics. But if consuming $1b at t1 would bring us 1b(1+g) utiles at t2, while investing it in a social project would create only 1b(1+r) utiles at t2, then we would see a tremendous amount of investment lowering overall human utility.
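
Here is that arithmetic as a quick sketch, with hypothetical numbers (this r < g world is the one the argument rules out):

```python
# Hypothetical numbers: suppose r = 2% while ☺ grows at g = 3%.
g = 0.03
r = 0.02

consume = 1e9 * (1 + g)  # utiles at t2 if the $1b is consumed at t1
invest = 1e9 * (1 + r)   # utiles at t2 if it funds a project earning r

# The project passes a cost-benefit test discounted at r,
# yet destroys 10 million utiles relative to plain consumption.
print(invest - consume)  # ≈ -10,000,000
```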

However, if r > g, then we would see the inverse situation: society would forgo investments expected to increase E(☺). Now, we do see that, for the same reasons we see social investment in projects that reduce E(☺). We should also expect to see it because of risk; i.e., a project with an expected return in excess of g might be forgone because too much risk or uncertainty surrounds the outcome. However, this is distinct from r, which is the fundamental underlying social discount rate. In that case, what someone is really saying is that r + p > g, where p is a risk premium associated with a specific investment.
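
In other words (hypothetical numbers again), a project can clear g and still be rationally forgone once the risk premium is added to the hurdle:

```python
# Hypothetical numbers for illustration only.
r = 0.03                # underlying social discount rate
p = 0.02                # risk premium for an uncertain project
g = 0.03                # expected growth in ☺
expected_return = 0.04  # the project's expected return

print(expected_return > g)      # True: beats expected growth...
print(expected_return > r + p)  # False: ...but not the risky hurdle
```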

So if r is neither < nor > g, then…they must be equal! This leads us to two conclusions:

1) PV(Σ[t=0,∞] ☺) = ∞; ergo, we should pay any conceivable cost to forestall the apocalypse.

2) The social discount rate is by identity always equal to E(g), the expected rate of growth in ☺.

This is, analytically, not a terribly useful result if your problem is trying to select a social discount rate for use in cost-benefit analysis, since it is not terribly easy to know what g is. Philosophically, however, it’s a great way to understand what the social discount rate is and why we use it – it tells us that we as a society ought to invest in things that increase the expected sum of human happiness, relative to the counterfactual. This is a utilitarian result, which has its problems, but as a starting point it is at least an interesting one.

And now, the inevitable:
