some people never bayesian learn

Noah Smith had a great post yesterday about becoming a Bayesian Superhero. Because I am an inveterate nitpicker and a routine abandoner of my commitment to Spreadsheets Anonymous, I want to dig into the math behind his example. Here it actually matters quite a bit, because the math mutes the power of the example somewhat:

But nevertheless, every moment contains some probability of death for a non-superman. So every moment that passes, evidence piles up in support of the proposition that you are a Bayesian superman. The pile will probably never amount to very much, but it will always grow, until you die.
The thing is that ‘the pile will probably never amount to very much.’ Here are the Social Security Administration’s life tables. I am a 27-year-old male, so my probability of dying this year (without adding in any other life-expectancy-modifying factors) is just 0.001362; as odds, that’s 1 in 734. That means Bayes’ Rule is not going to make very much of my not dying as evidence. Just to put it as starkly as possible: if I believed right now that there was a 48% prior probability of my being an invincible superhero, living to 40 (ceteris paribus) would still be insufficient to push the posterior over 50%.
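The update above can be sketched in a few lines. The annual death probabilities here are rough assumptions in the neighborhood of the SSA life-table values for a male aging from 27 to 39, not the actual table entries:

```python
# Bayesian update on "I am invincible" after surviving ages 27 through 39.
# Assumed (not actual SSA) annual death probabilities, rising slightly with age.

prior = 0.48  # prior probability of being an invincible superhero

death_probs = [0.00136 + 0.00008 * i for i in range(13)]  # ages 27..39

# Likelihood of surviving all 13 years: 1.0 if invincible, (1 - q) each
# year if mortal. Update the odds year by year.
odds = prior / (1 - prior)
for q in death_probs:
    odds *= 1.0 / (1.0 - q)

posterior = odds / (1 + odds)
print(f"posterior after surviving to 40: {posterior:.4f}")
```

Each year of survival multiplies the odds by only ~1.002, so even thirteen years of evidence barely moves a 48% prior.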
What does that mean? To ever believe you are a superhero through Bayesian inference, at least one of two things has to be the case:
1) You have to have a very large prior – essentially, superderp.
2) You have to survive things that drastically increase your odds of dying.
The first thing, I think, is what Noah was getting at with teenagers; the latter thing is basically the plot of Unbreakable. If you want to generate real evidence for the proposition that you are a superhero, you need to survive some deadly encounters. And even then, you could still just be Boris.
that thing where the gold star runs out
Actually, though, the real meat of Noah’s post is in this aside:
But this gets into a philosophical thing that I’ve never quite understood about statistical inference, Bayesian or otherwise, which is the question of how to choose the set of hypotheses, when the set of possible hypotheses seems infinite.
To which I actually have a good answer! When selecting hypotheses from the infinite, simply go with the existing consensus and try to generate evidence that supports or undermines it at the Bayesian margin. This should be the right strategy whether you’re operating under a Popperian or a Kuhnian framework.


A great deal of the vital information that forms the backbone of the social sciences is collected through surveys. The problem with this is that most of the surveyors are academics, and therefore the surveyees they have ready access to are unrepresentative of the population at large. They are, to use a popular acronym, WEIRD – Western, Educated, Industrialized, Rich, and Democratic. Beyond that, college students tend to be unrepresentative of even the WEIRDos; they are the weirdest of all. Even if it is very hard to imagine surveying many non-WEIRDos at less-than-prohibitive cost, we should strive to find a way to make at the very least a broader cross-section of Americans available to social scientists, somehow.

What if I told you, then, there is a place where millions of Americans from almost every stratum of America’s diverse socioeconomic fabric spend a tremendous amount of time just…waiting? Doing nothing? Simply sitting? That almost any engaging activity proposed to them would sound amazingly appealing right about now?

Well, there is such a place – the Department of Motor Vehicles. Americans rich and poor, old and young, of all colors and faiths spend hours just waiting to renew licenses or take exams. And it is during those long and painful waits that America’s social scientists should shake down that captive audience for all the input they can muster.

So my proposal is – DMVs should generally open their doors, free of charge, to any researcher from any accredited university who would like to conduct a survey of folks waiting at the DMV, obviously still contingent on the individual consent of each subject. I imagine plenty of people, otherwise bored out of their wits, would love to spend that time conversing with a human being, taking a test, whatever. There you go. Free idea, America.

he came dancing across the water

The spouse and I travelled to Mexico and Colombia this year (Mexico City and Cartagena, to be precise), our first journeys to Latin America. In keeping with our general belief that travel should entail and encourage learning, we read up on our destinations before, during, and since our trips. In addition to some excellent books more particular to the nations we visited (Earl Shorris’ The Life and Times of Mexico in particular is an amazing book) we also read, of course, Eduardo Galeano’s Open Veins of Latin America. If nothing else, the book makes vivid and immediate the scale and depth of the horrors visited upon Latin America by its conquerors, crimes comparable in their systematic brutality to almost any evil ever visited by man upon man.

In conversations with my wife, we discussed the seeming impossibility of those conquerors, most notably Spain and Portugal, ever making substantive amends for those crimes. That got me wondering whether it was possible, and since I’m apparently now the internet’s foremost specialist in reparation spreadsheets, I tried to see if and how pecuniary restitution could be made.

Unlike my calculation of the reparations for slavery, though, I decided to take a forward-looking approach. Rather than calculate a number equal to the crime, I tried, through a very simple model, to see whether Spain and Portugal could, even over a very long time, put together enough cash to make a substantive impact on the whole of Latin America. Spain and Portugal are far from the only offenders on the long list of wrongdoers in Latin America, but I decided to limit my analysis to them, mostly for the sake of simplicity.

The key problem facing Spain and Portugal in trying to make restitution to Latin America is that they are currently far, far smaller parties than those they might be making restitution to; their combined GDP is just over a quarter of that of Latin America and the Caribbean as calculated by the World Bank. To fork over enough wealth in the short term to make a dent on the fortunes of Latin America would involve a degree of impoverishment of the present citizens of those nations that most would find untenable. So I decided to give them a little time – 150 years, to be exact. Specifically, they would make annual contributions into a wealth fund equal to a fixed share of national output for a century and a half, and during that time 100% of all returns would be reinvested. In 2166, Iberian contributions would cease and Latin America would be free to do with the accumulated capital what it wished.

Would this be enough? Well, it depends – specifically, on four-and-a-half variables: the average growth rate of the Spanish and Portuguese economies (two variables that I simplified into one), the average growth rate of Latin American GDP, the return to capital, and the size of the annual contribution. That’s a lot of moving parts, and a lot of big assumptions.

At this point, I clarified the question I wanted to ask – conditional on fixing two of those variables (Iberian growth at 1% and the return to capital at 5%), how large an annual contribution would Spain and Portugal need to make to this fund to target a total valuation equal to one-quarter of Latin American GDP in 2166?

Where is this all going? Well, in what is a wholly unsurprising result for anyone who’s read their Piketty, the key to the answer to that question is Latin American growth. Specifically – is r>g? And by how much?

Here is a quick table of possible answers:

If Latin American growth is…    …the annual share of GDP devoted to the fund needs to be:
2.0%                            4.8%
2.5%                            10.2%
3.0%                            21.3%
3.5%                            44.6%

The World Bank projects Latin American growth over the next 20 years at ~3.5%; should that growth persist for another thirteen decades, the Iberian task becomes almost impossible, with nearly half of GDP devoted to the project. But should average Latin American growth fall over that time, the Spanish-Portuguese lift becomes easier and easier, to the point where, if Latin American growth is even double Spanish-Portuguese growth, they can devote less than 5% of GDP to the fund each year and still hit their 25% target.
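A minimal sketch of the model: because the fund’s value is linear in the contribution rate, the required share can be solved for directly. The timing and compounding conventions below are my own assumptions rather than the attached spreadsheet’s, so the levels won’t necessarily match the table, but the key qualitative result falls out immediately – the required share scales up by roughly (1.005)^150 ≈ 2.1× for each extra half-point of Latin American growth:

```python
def required_share(g_la, g_ib=0.01, r=0.05, years=150,
                   ib_gdp0=0.25, la_gdp0=1.0, target_share=0.25):
    """Annual share of Iberian GDP needed for the fund to reach
    target_share * Latin American GDP after `years` years.
    Contributions arrive at each year-end and compound at r thereafter.
    (These timing conventions are assumptions, not the spreadsheet's.)"""
    # Fund value per unit of contribution rate: each year's contribution
    # grows with Iberian GDP, then compounds at r for the remaining years.
    fund_per_unit = sum(ib_gdp0 * (1 + g_ib) ** t * (1 + r) ** (years - t)
                        for t in range(1, years + 1))
    target_value = target_share * la_gdp0 * (1 + g_la) ** years
    return target_value / fund_per_unit

for g in (0.020, 0.025, 0.030, 0.035):
    print(f"g = {g:.1%}: required share = {required_share(g):.4%}")
```

The ~2.1× step between rows of the table is just (1.025/1.02)^150; the denominator (the fund’s growth per unit contributed) is fixed once Iberian growth and the return to capital are pinned down.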

What conclusions can we draw from this? Firstly, it is an excellent demonstration of the long-term impact of r>g. The larger the gap between r and g, the easier it is for accumulated wealth to grow relative to an economy (though this case is somewhat muddled by the annual contribution of non-return income to the fund).

Secondly, it shows how public-good wealth funds can turn r>g from an anti-social force to a pro-social force. The larger we expect the gap between r and g to be, the more pernicious it can be if most wealth is private, untaxed, and unregulated, but the more beneficial it can be if wealth is public, taxed, and regulated. In my Piketty write-up (you didn’t think you were getting away without a reference to that, did you?) I advocated for a sovereign wealth fund devoted to funding a national university system; this idea, too, becomes more compelling if you are pessimistic about growth or bullish on long-term returns to capital.

Lastly, it highlights certain ironies particular to this situation; two in particular stand out. It demonstrates most clearly to me the futility of plunder as an economic model – if all the blood-soaked gold and silver stolen from Latin America has made the lives of modern Iberians better off, it’s hard to see how. It also highlights the irony of the tradeoff between growth and wealth accumulation. If Latin America really does maintain a growth rate of 3.5% over the next 150 years, real Latin American GDP will just surpass $1,000 trillion; even if the Latin American population quadruples to over two billion in that time, that would still imply a real GDP per capita of nearly half-a-million dollars by then, ten times that of the modern American – and several times what these figures project Spanish GDP per capita to be in 2166. Indeed, for all of Latin America to reach parity with Spain and Portugal in fifteen decades, at most 2.5% average annual real growth is required over that time, even assuming the highest bounds of population estimates. The irony, then, is that the largest obstacle to the descendants of the conquista states making meaningful reparations to the descendants of their conquests is the conquered surpassing the conqueror.
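The arithmetic behind those projections is a quick back-of-envelope check. The starting values here are rough assumptions of mine (Latin American GDP of roughly $6 trillion and a population of roughly 600 million today), not figures from the attached spreadsheet:

```python
# Back-of-envelope check on the 150-year projection.
# Starting values are rough assumptions: ~$6T GDP, ~600M people today.
gdp_now_T = 6.0    # Latin American GDP, trillions of dollars
pop_now = 600e6    # Latin American population

gdp_2166_T = gdp_now_T * 1.035 ** 150   # 3.5% growth for 150 years
pop_2166 = pop_now * 4                  # "quadruples to over two billion"

per_capita = gdp_2166_T * 1e12 / pop_2166
print(f"GDP in 2166: ${gdp_2166_T:,.0f}T; per capita: ${per_capita:,.0f}")
```

Compounding at 3.5% multiplies GDP about 174-fold over 150 years, which is what turns a single-digit-trillion economy into a four-digit-trillion one even against a quadrupling population.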

As always, data attached. LAmerRep

itsame! satoshi nakamoto!

I’m nearing the end of David Graeber’s Debt: The First 5000 Years (don’t tell Brad DeLong), and it is most certainly an interesting book; on that basis I can highly recommend it, even if I’m not sure what to make of some of its generalizations and conclusions. One of the most important and lasting contributions the book will make, I think, is its discussion of the origins and purposes of money – but to some extent I think Graeber himself doesn’t quite get what he has. Let me explain.

Money is classically defined by its use, not by its nature; it is some thing, anything really, that can be used as a unit of account, medium of exchange, and store of value. By these lights, anything can be money – coins, surely, but also livestock, shells, cloth, cronuts, anything. Indeed, defining anything as definitively “money” at all can be tricky, which is why one of the best contemporary thinkers on the subject, JP Koning, has focused instead on money as a spectral phenomenon, in which various things have differing amounts of “moneyness” over time. “Moneyness” is, in fact, the name of his blog, and its tagline stresses the adjectival nature of money over its nounitude.

Graeber spends a good deal of time at the beginning of the book deflating the myth that money arose primarily as a medium of exchange to alleviate the inefficiencies of barter, arguing instead that exchange nearly everywhere began as a credit-based process and that money arose as a unit of account to tally those credits. He notes that coinage appears much later, usually in periods of instability, and is often induced into circulation by the state, which pays soldiers’ wages with it while simultaneously demanding it back in taxes.


Come on, Alex. You can do it! Come on, Alex. There's nothing to it!

This dual embrace of the credit and chartalist accounts of money, though, is a little muddled, because it confuses two intimately related but ultimately distinct concepts – money and currency. Let’s disentangle them. Money is anything that is used as money, that humans imbue with moneyness to facilitate relations, and therefore really is more of a category or an adjective than a specific noun. No one thing is, quite, definitively, money. But currency is most definitely a noun; it is a definitive, definable thing that is used as money. So overwhelmingly has currency become our money, in fact, that we often use the latter term to refer interchangeably to both concepts. It’s this confusion, I think, that makes one of the key passages in Graeber’s book, where he outlines the credit and chartalist accounts of the origins of money, less clear than it ought to be: the credit theory explains the origin of money, and the chartalist account the origin of currency.

Let’s tie this back to something I wrote recently, something that makes a little more sense when wrapped into this insight:

I think trying to sort those and the myriad other solutions to the money problem into “fiat” and “backed” is as irrelevant as it is obscurant. In each of those schemes there are two identifiable foci from which value regulation derives and which distinguish the various schemes from each other:

-The algorithm – the rule governing the value path of the currency.

-The credibility – the likelihood of the currency following the value path promised by the algorithm, and the accountable party for those outcomes.

It’s not so much that this isn’t true – it is – as much as that it is an equally good way of describing debt, which makes sense since money – and currency – are, essentially, debt instruments. Debts are contracts, and what are contracts if not algorithms, nested and intertwined sets of calculations and if-then statements that govern the interaction of inputs and outputs? And what determines the value of debt more than credibility – the belief that a debt will be redeemed as promised? This view of debt as an algorithm-credibility matrix can go back to the earliest virtual credit moneys, those that existed solely as units of account to quantify and record credit relations.
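To make the contracts-as-algorithms analogy concrete, here is a toy sketch (names, terms, and numbers entirely my own invention) of a debt as an if-then rule mapping states of the world to payouts, with credibility entering as the probability the rule is actually honored:

```python
def debt_contract(principal, rate, defaulted):
    """A debt as an if-then algorithm: the rule mapping a state of the
    world to what the bearer receives at maturity."""
    if defaulted:
        return 0.0
    return principal * (1 + rate)

def expected_value(principal, rate, credibility):
    """Credibility is the probability the algorithm's promised value
    path is actually followed; it scales what the claim is worth."""
    return (credibility * debt_contract(principal, rate, False)
            + (1 - credibility) * debt_contract(principal, rate, True))

# A perfectly credible note is worth its full promised payoff;
# a doubtful one is worth proportionally less.
print(expected_value(100, 0.05, 1.0))
print(expected_value(100, 0.05, 0.60))
```

The algorithm (the if-then payout rule) and the credibility (the weight on it being honored) jointly determine value – which is exactly the two-foci heuristic above, applied to debt.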

The algorithm & credibility heuristic described in my post and repeated above, though, refers specifically to currencies - state-issued debt instruments designed and intended to serve as society’s money. Money predates currency, and currencies were introduced by states because they induced forms of socioeconomic organization that were hospitable to state aims under prevailing conditions. Currencies are, and always have been, impersonal, interchangeable increments of state liability. And in spite of the oft-prevailing metalist fiction that metallic backing meant the nature of the state’s currency debt was a quantum of metal, currencies have always, in fact, been totally self-referential. A dollar bill is an instrument that entitles the bearer to one dollar. What is a “dollar?” A purely abstract quantum of economic value. And the state owes it to you. If you go to the state and give them your dollar bill and try to claim your dollar, the state will comply and gladly give you…another dollar bill. Or maybe a coin with Warren Harding’s face on it. The point is that the state pays its bills with dollars, and demands taxes in dollars, and therefore the money-space in society is most effectively inhabited by currency under those conditions.

This is what squares the credit and chartalist theories – money is credit and emerged as such; currency was induced chartally because when society uses currency for money, it benefits the state. This also explains the bonanza of secondary currency-denominated instruments that make up our broader “money supply,” such as commercial paper and T-Bills, as well as the unifying thread between debt, money, currency, and algorithms.

This, of course, leads to one final question – what about cryptocurrencies? If all money is debt, and state currency is a liability of the state, whose liability is a Bitcoin? Technologically, of course, Bitcoin is a major step forward in distributed trust and secure decentralized transaction; but in some ways Bitcoin is also a return to something older. Bitcoin is almost a reification of Graeber’s note that, if money is fundamentally a unit of account, “[y]ou can no more touch a dollar or a deutschmark than you can touch an hour or a cubic centimeter. Units of currency are merely abstract units of measurement, and as the credit theorists correctly noted, historically, such abstract systems of accounting emerged long before the use of any particular token of exchange.”

much cheddar, so checking

Bitcoin, and cryptocurrencies in general, are liabilities of the code itself, and in that way are pure liabilities of the social system. Bitcoin leverages the most advanced of modern technological innovation to create a currency that occupies the most archetypal space money can occupy. Yet at the same time, its volatility works at cross-purposes. And while it is possible to solve the volatility problem in using Bitcoin as a medium of exchange, it comes at the expense of using Bitcoin as a unit of account. But if accounting, and not exchanging, is the original genesis of money (as opposed to currency), then cryptocurrency’s potential, barring further innovation, is handicapped.

A weekend thought: my father is the kind of guy who likes to come up with big monocausal theories to explain every little thing; he missed his calling as a columnist for a major newspaper. Anyway, last week we were chatting and he expounded on one of these theories, in this case a coherent and compelling narrative for the dramatic increase in dog ownership in recent years. The theory is unimportant (it had to do with a decline in aggregate nachas) but afterwards I decided for the heck of it to fact-check his theory. And what do you know? According to the AVMA’s pet census, dog ownership rates have declined, very slightly, from 2007 to 2012.

Now, I know why my dad thought otherwise – over the past few years, dogs have become fantastically more visible in the environments he inhabits, mainly urban and near-suburban NYC. I am certain that, compared to 5-10 years ago, many more dogs can be seen in public, more dog parks have emerged, and there are many more stores offering pet-related goods and services. But these changes are intertwined with substantial cultural and demographic shifts, and are demonstrably not driven by a change in the absolute number of dogs or the dog-ownership rate.

It’s hard to prove things with data, even if you have a lot of really good data. There will always be multiple valid interpretations of the data, and even advanced statistical methods can be problematic and disputable, and hard to use to truly, conclusively prove a single interpretation. As Russ Roberts is fond of pointing out, it’s hard to name a single empirical econometric work that has conclusively resolved a dispute in the field of economics.

But what data can do is disprove things, often quite easily. Scott Winship will argue to the death that Piketty’s market-income data are not the best kind of data for understanding changes in income inequality, but what you can’t do is proclaim or expound a theory premised on a decrease in market income inequality. This goes for a whole host of things – now that data is plentiful, accessible, available, and manipulable to a degree exponentially vaster than ever before in human history, it’s become that much harder to promote ideas contrary to data. This is the big hidden benefit of bigger, freer, better data – it may not conclusively prove things, but it can most certainly disprove them, and thereby help hone and focus our understanding of the world.

Of course, I’m well over halfway into writing my Big Important Thinkpiece about Capital in the 21st Century and the FT decides to throw a grenade. Smarter and more knowledgeable people than I have gone back and forth on the specific issues, and my sense aligns with the general consensus: there are genuine problems with some of the data, but the FT’s criticisms were at least somewhat overblown, and there is not nearly enough there to overturn the central empirical conclusions of Piketty’s work.

What strikes me most about this episode is just how unbelievably hard true data and methodological transparency is. The spreadsheet-vs.-statistical-programming-platform debate seems to me to be a red herring – as the paradigm stands, each has its uses, limitations, and common pitfalls, and for the kind of work Piketty was doing, which relied not on complex statistical methods but mostly on careful data aggregation and cleaning, a spreadsheet is probably as fine a tool as any.

The bigger issue is that current standards for data transparency, while certainly well-advanced by the power of the internet to make raw data freely available, are still sorely lacking. The real problem is that published data and code, while useful, are still the tip of a much larger methodological iceberg whose base, like a pyramid (because I mix metaphors like The Avalanches mix phat beats), extends much deeper and wider than the final work. If a published paper is the apex, the final dataset is still just a relatively thin layer, when what we care about is the base.

To operationalize this a little, let me pick an example that’s both a very good one and also one I happen to be quite familiar with, as I had to replicate and extend the paper for my Econometrics course. In 2008, Daron Acemoglu, Simon Johnson, James A. Robinson, and Pierre Yared wrote a paper entitled “Income and Democracy” for American Economic Review in which they claimed to have demonstrated empirically that there is no detectable causal relationship between levels of national income and democratic political development.

The paper is linked; the data, which are available at the AER’s website, are also attached to this post. I encourage you to download them and take a look for yourself, even if you’re far from an expert or even afraid of numbers altogether. You’ll notice, first and foremost, that it’s a spreadsheet. An Excel spreadsheet. It’s full of numbers. Additionally, the sheets have some text boxes. Those text boxes have Stata code. If you copy and paste all the numbers into Stata, then copy and paste the corresponding code into Stata, then run the code, it will produce a bunch of results. Those results match the results published in the corresponding table in the paper. Congratulations! You, like me, have replicated a published work of complex empirical macroeconomics!

Except, of course, you haven’t done very much at all. You just replicated a series of purely algorithmic functions – you’re a Chinese room of sorts (as much as I loathe that metaphor). Most importantly, you didn’t replicate the process that led to the production of this spreadsheet full of numbers. In this instance, there are 16 different variables, each of which is drawn from a different source. To truly “replicate” the work done by AJR&Y, you would have to go to each of those sources and cross-check each of the datapoints – of which there are many, because the unit of analysis is the country-year; their central panel alone, the 5-Year Panel, has 36,603 datapoints over 2,321 different country-years. Many of these datapoints come from other papers – do you replicate those? And many of them required some kind of transformation between their source and their final form in the paper – that also has to be replicated. Additionally, two of those variables are wholly novel – the trade-weighted GDP index, as well as its secondary sibling, the trade-weighted democracy index. To produce those datapoints requires not merely transcription but computation. If, in the end, you were to superhumanly do all this, what would you do if you found some discrepancies? Is it author error? Author manipulation? Or your error? How would you know?

And none of this speaks to differences of methodological opinion – in assembling even seemingly simple data, judgment calls must be made about how it will be computed and represented. There are also higher-level judgment calls – what is a country? Which should be included and excluded? In my own extension of their work, I added a new variable to their dataset, and much the same questions apply – were I to simply hand you my augmented data, you would have no way of knowing precisely how or why I computed that variable. And we haven’t even reached the most meaningful questions – most centrally, are these data and these statistical methods the right tools to answer the questions the authors raise? In this particular case, while there is much to admire about their work, I have my doubts – but even moving on to address those doubts involves some throwing up of hands in the face of the enormity of their dataset. We are essentially forced to say “assume the data methodology is correct.”

Piketty’s data, in their own way, go well beyond simply a spreadsheet full of numbers – there were nested workbooks, with the final data actually being formulae that referred to preceding sources of raw-er data transformed into the variables of Piketty’s interest. Piketty also included other raw data sources in his repository even when they were not programmatically linked to the spreadsheets. This is extremely transparent, but it still leaves key questions unanswered – some “what” and “how” questions, but also “why” questions – why did you do this this way rather than that way? Why did you use this expression to transform this data into that variable? Why did you make this exception to that rule? Why did you prioritize different data points in different years? A dataset as large and complex as Piketty’s is going to have hundreds, even thousands, of individual instances where these questions can be raised, with no automatic system of providing answers other than having the author manually address them as they are raised.

This is, of course, woefully inefficient, and to some degree it provides perverse incentives. If Piketty had provided no transparency at all – well, that is what every author of every book did, going back centuries until very recently. In today’s context it may have seemed odd, but it is what it is. If he had been less transparent, say by releasing simpler spreadsheets with inert results rather than transparent formulae calling on a broader set of data, it would have made it harder, not easier, for the FT to interrogate his methods and choices – that “why did he add 2 to that variable” thing, for example, would have been invisible. The FT had the privilege of being able to do at least some deconstruction of Piketty’s data, as opposed to reconstruction, which can leave the reasons for discrepancies substantially more ambiguous. As it currently stands, high levels of attention to your research have the nasty side-effect of drawing scrutiny to transparent data but opaque methods – methods that, while in all likelihood at least as defensible as any other choice, are extremely hard under the status quo to present and defend systematically against aggressive inquisition.

The kicker, of course, is that Piketty’s data is coming under exceptional, extraordinary, above-and-beyond scrutiny – how many works that are merely “important” but not “seminal” never undergo even the most basic attempts at replication? How many papers are published in which nobody even plugs in the data and the code and cross-checks the tables – forget about checking the methodology undergirding the underlying data! And these are problems that relate, at least somewhat, to publicly available and verifiable datasets, like national accounts and demographics. What about data on more obscure subjects with only a single, difficult-to-verify source? Or data produced directly by the researchers?

On Twitter, in discussing this, I advocated for the creation of a unified data platform that would not only allow users to merge the functions of, and toggle between, spreadsheet and statistical-programming GUIs, but also create a running, annotatable log of a user’s choices, not merely static input and output. Such a platform could produce a user-friendly log that could either be read in a common format (html, pdf, doc, epub, mobi) or uploaded by a user in a packaged file with the data and code, to actually replicate, from the very beginning, how a researcher took raw input and created a dataset, as well as how they analyzed that dataset to draw conclusions. I’m afraid that without such a system, or some other way of making not only data but start-to-finish methodologies transparent, accessible, and replicable, increased transparency may end up paradoxically eroding trust in social science (not to mention the hard sciences) rather than buttressing it.
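As a sketch of what the core of such a platform could look like (everything here – names, API, and data – is hypothetical), each transformation of the data would be applied through a wrapper that records an annotated, replayable log alongside the result:

```python
class ProvenanceLog:
    """Hypothetical sketch: apply every transformation through `step`,
    recording an annotation so the full path from raw input to final
    dataset can be read back, exported, and replayed."""
    def __init__(self, data):
        self.data = data
        self.log = []

    def step(self, note, fn):
        # Apply the transformation and record the researcher's annotation.
        self.data = fn(self.data)
        self.log.append(note)
        return self

    def report(self):
        return "\n".join(f"{i + 1}. {note}" for i, note in enumerate(self.log))

# Toy usage: (country, GDP, population) rows -> cleaned -> per-capita
raw = [("A", 100, 10), ("B", None, 5), ("C", 60, 4)]
result = (ProvenanceLog(raw)
          .step("dropped rows with missing GDP (source table had gaps)",
                lambda rows: [r for r in rows if r[1] is not None])
          .step("computed GDP per capita as GDP / population",
                lambda rows: [(c, gdp / pop) for c, gdp, pop in rows]))
print(result.data)     # [('A', 10.0), ('C', 15.0)]
print(result.report())
```

The point is that the “why did he drop that row” questions get answered in the same artifact that produces the final dataset, rather than in after-the-fact correspondence with the author.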

Income and Democracy Data AER adjustment_DATA_SET_AER.v98.n3June2008.p808 (1) AER Readme File

To start, I’m just going to put this right here:

Slade’s piece is, in essence, a defense of conservative anti-poverty policy as expressed through a critique of progressive/social democratic anti-poverty policy and its critique of conservative anti-poverty policy. I made it sound confusing, when actually it wasn’t – Slade focuses on defending conservative anti-poverty policy by explaining that the conservative counterfactual to the status quo is not the current pre-transfer distribution of income but instead a larger pie and fewer barriers to work and entrepreneurship.

As Slade suspects, I disagree, though obviously there is some overlap between Slade’s conservatism, which is definitely libertarian-flavored, and my own preferences – more immigration, less war-on-drugs, less occupational licensing, etc. What I want to dig into, though, is more Counterfactuals 202, and for that I want to home in on this part of Slade’s piece:

It’s important to realize here that standing against a certain policy proposal is not the same as standing with the status quo. When right-of-center reformers say Obamacare is a bad law, they’re not endorsing the health care system that was in place immediately before its passage. Similarly, when conservatives and libertarians question the wisdom of the “war on poverty,” we are not putting a stamp of approval on the levels of poverty that existed 50 years ago, or on the ones that remain today. Our position isn’t that poverty does not matter. We just recognize the chosen prescription has turned out to be a poor one.

The Obamacare example in particular is a productive one to discuss, since it’s much more narrow in scope – a single, well-defined, recent reform package as opposed to half-a-century of a broad philosophy of governance – and because the issue of Obamacare and counterfactuals cuts both ways.

You may recall that, in addition to the larger and more-vocal right-wing opposition movement to Obamacare, there was a smaller but no less vocal or strident progressive opposition movement, perhaps best epitomized by Marcy Wheeler dubbing the Senate bill “neo-feudalism.” While I don’t agree with that perspective, I am not wholly sanguine about Obamacare – firstly because of specific problems and drawbacks in the act as written and enacted, but also because I would vastly prefer a single-payer or even nationalized system – universal coverage, better outcomes, and a trillion dollars a year? Yes, please.

But I supported and advocated for the passage of Obamacare. Why? Well, I could have, like Slade, simply stated “I prefer my counterfactual to Obamacare; ergo, I oppose it” and moved on; but instead, I looked at my preferred counterfactual probabilistically – what were the likely actual counterfactuals to Obamacare? Did I prefer those to Obamacare? And the answer to that question was “absolutely not” – I would much rather, even via a kludge too friendly to industry, expand coverage to the uninsured and experiment with serious cost-control reforms than leave the status quo in place indefinitely, which was the overwhelmingly likely actual counterfactual in the case of Obamacare.

I’m not certain how much this applies to the conservative counterfactual case; it’s entirely possible that many conservatives genuinely believe that Obamacare is a net negative development. I would argue, though, that conservatives passed up a tremendous amount of leverage in shaping Obamacare, which Democratic leaders from the President on down would have gladly exchanged for political buy-in. So the conservative counterfactual in the case of Obamacare should be something more like “knowing that Obamacare would be enacted and Obama would be reelected, should we have played ball with the inevitable and shaped it more to our liking rather than digging in for indefinite total opposition?” An interesting question.

And while this logic is, as I said above, much harder to apply to the overall war on poverty, it’s not impossible. A point I always try to stress to conservatives is that the opposite of welfare-state social democracy is not conservatism; it’s Communism. The modern welfare states of Western Europe and the United States fundamentally emerged as the capitalist response to the then-seemingly-inexorable growth of Communist power. “We can have our cake of economic growth and individual freedom and eat social justice too,” was the message, to totally dismember the metaphor.

I’m not certain, from reading Slade’s piece, exactly how the contours of her conservative counterfactual to welfare-state democracy differ from the policy status quo of the Eisenhower Administration. But I will ask her to think a little harder on Michael Lind’s question of “why are there no libertarian countries?” and to consider not just the idea of a preferred counterfactual, but the odds of that counterfactual coming to pass, and coming to pass in the way you imagine it, and working out the way you think it might. Which is not to say that principle should always be sacrificed on the altar of hyper-realistic incrementalism; just that a realism in the realm of political economy has as much to say to ideological priors as vice-versa.

I’ve been working on collecting some longer thoughts on Piketty’s book now that I’ve finished it (so yes, keep your eyes open for that) and in the meantime I’ve been having fun/getting distracted by playing around with his data, and especially the data from his paper with Gabriel Zucman, which, you know, read, then play too.

One thing I realized as I was going through is that Piketty and Zucman may have incidentally provided a new route to answer an old question – were America to at last make reparations for the vast and terrible evil of slavery, how much would or should the total be?

What is that route? Well, they provide estimates, for certain benchmark years, of the aggregate market value of all slaves in the United States from 1770 through abolition:


As you can see, the amount was persistently and stunningly high right through abolition.

Now, without wading too much into – heck, who am I kidding – diving headfirst into the endlessly-resurrected Cambridge Capital Controversy: the price of capital is determined in large part by the income it generates, so the market value of an enslaved person was an implicit statement about the income slaveholders expected to receive from the forced labor of their prisoners. So we can (by imputing the intervening annual values in their time-series, which I did linearly – not necessarily a great assumption, but it’s what I did, so there it is) compute the real aggregate dollar market value of slaves from 1776-1860, then impute the annual income produced by, and stolen from, America’s slaves. For that, I used 4%, being conservative with Piketty’s 4-5% range.
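That imputation step can be sketched in a few lines. The benchmark values below are hypothetical placeholders, not Piketty and Zucman’s actual figures, but the method – linear interpolation between benchmark years, then a flat 4% return to impute income – is the one described above:

```python
def interpolate_series(benchmarks):
    """Linearly interpolate annual values between benchmark years.

    benchmarks: dict mapping benchmark year -> aggregate market value.
    Returns a dict mapping every year in the range to an interpolated value.
    """
    years = sorted(benchmarks)
    out = {}
    for y0, y1 in zip(years, years[1:]):
        v0, v1 = benchmarks[y0], benchmarks[y1]
        for y in range(y0, y1):
            frac = (y - y0) / (y1 - y0)
            out[y] = v0 + frac * (v1 - v0)
    out[years[-1]] = benchmarks[years[-1]]
    return out

# Hypothetical benchmark values (billions of 2010 dollars) -- placeholders
# for illustration only, not the Piketty-Zucman series.
benchmarks = {1776: 100.0, 1800: 150.0, 1830: 250.0, 1860: 400.0}
values = interpolate_series(benchmarks)

# Impute the income stolen each year as 4% of that year's market value.
stolen_income = {y: 0.04 * v for y, v in values.items()}
```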

Then you have two more steps: firstly, you have to select a discount rate in order to compute the present value of the total of that income on the eve of the Civil War in 1860; then you have to select a discount rate to compound that through 2014.

Well, that’s where things get interesting. For now, let’s pick 1% for both of those discount rates (which I am doing for a reason, as you will see). That makes the value in 1860 of all the income stolen by the Slave Power since the Declaration of Independence said liberty was inalienable roughly $378 billion*. That $378 billion, compounded at 1% annually for 154 years, is worth just about $1.75 trillion.

But those discount rates are both low – really, really low, in fact. Lower than the rate of economic growth, the rate of return on capital, and lower than the discount rate used by the government. When you increase those discount rates, though, you start to get some very, very, very large numbers. If you increase just the pre-1860 discount rate to 4%, for example, the 1860 figure leaps to over a trillion dollars, which even at a discount rate of 1% thereafter still comes to well over four-and-a-half trillion dollars today. Even vaster is the increase that comes from increasing the post-1860 rate, even if you leave the pre-1860 rate at 1%. At 2%, today’s bill comes due at just under $8 trillion; at 3%, $35 trillion; at the government’s rate of 7%, it comes to over $12.5 quadrillion. That’s over six times the entire income of the planet since 1950, a number that even if we concluded it was just – and given the incalculable and incomparable horror of slavery as practiced in the antebellum United States, it’s difficult to say any amount of material reparation is adequately just – is in practice impossible to pay.
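For the spreadsheet-averse, here is a minimal sketch of the two compounding steps, taking the $378 billion 1860 figure as given (function names are just illustrative). The loop shows how violently the final bill swings with the post-1860 rate:

```python
def pv_1860(stolen_income, r):
    """Step one: value in 1860 of the 1776-1860 stolen-income stream,
    with each year's income compounded forward to 1860 at rate r.
    stolen_income: dict mapping year -> that year's stolen income."""
    return sum(v * (1 + r) ** (1860 - y) for y, v in stolen_income.items())

def compound_to(pv, r, years):
    """Step two: compound a present value forward at rate r for `years` years."""
    return pv * (1 + r) ** years

# The 1%/1% baseline: $378B in 1860, compounded at 1% for 154 years
# comes to roughly $1.75 trillion.
bill_2014 = compound_to(378e9, 0.01, 2014 - 1860)

# Sensitivity to the post-1860 rate, holding the 1860 figure fixed.
for r in (0.01, 0.02, 0.03, 0.07):
    print(f"{r:.0%}: ${compound_to(378e9, r, 154) / 1e12:,.1f} trillion")
```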

There are three conclusions I think are worth considering from the above analysis:

1) First and foremost, slavery was a crime beyond comparison or comprehension, compounded since by our collective failure not only to make right the crime as best we are able but even to make the attempt (not to mention Jim Crow and all the related evils it encompasses).

2) Compound interest is a powerful force. Mathematically, obviously; but also morally. These large numbers my spreadsheet is producing are not neutral exercises – they are telling us something not only about the magnitude of the grave injustice of slavery but also the injustice of failing, year after year, to begin to pay down our massive debt to those whose exploitation and suffering was our economic backbone. And that only refers to our material debt; our moral debt, although never fully repayable, grows in the absence of substantive recognition (or the presence of regressive anti-recognition).

3) Discount rates tell us a lot about how we see our relation to our past and our future. The Stern Review, the widely-discussed report that recommended relatively large and rapid reductions in carbon emissions, became notable in good part because it triggered a debate about the proper discount rate we should use in assessing the costs and benefits of climate change policy. Bill Nordhaus, hardly a squish on the issue, notably took the report to task for using a very low discount rate – effectively, just over 1% on average.

It is hard to crystallize precisely the panoply of philosophical implications of how discount rates interact differently with different kinds of problems. In the case of climate change, a low discount rate implies that we today should place a relatively higher value on the costs future generations will suffer as a consequence of our activity, sufficiently high that we should be willing to bear large costs to forestall them. Commensurately, however, a low discount rate also implies a lower sensitivity to the costs borne by past generations, relative to the benefits received today. High discount rates, of course, imply the inverse in both situations – a greater sensitivity to the burden of present costs on future generations and the burden of past costs on present generations.

There is no consensus – and that is putting it lightly – over what discount rates are appropriate for what situations and analyses, and whether discount rates are even appropriate at all. And when we decide on how to approach policies whose hands stretch deeply into our past or future, it is worth considering what these choices, superficially dry and mathematical, say not just about inputs and outputs, but also the nature of our relationship to the generations that preceded us and those that will follow.

Data attached:

piketty slave reparations

*2010 dollars throughout.

So late last year Matt Yglesias found a simple and concise way to create a good-enough estimate of the value of all privately-held American land, using the Fed’s Z1. He did not, however, go on to take the most-obvious next step, which was to use FRED to compile all the relevant series to calculate the entire time-series.
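I won’t reproduce Yglesias’s exact series choices here, but the back-of-the-envelope arithmetic is roughly this – the market value of real estate minus the replacement cost of the structures on it leaves the implied value of the underlying land. The figures below are illustrative placeholders, not actual Z.1 or FRED values:

```python
def implied_land_value(real_estate_value, structures_cost):
    """Land value = what the property would sell for, minus what it
    would cost to rebuild the structures sitting on it."""
    return real_estate_value - structures_cost

def land_share_of_income(land_value, national_income):
    """Express land value as a multiple of national income."""
    return land_value / national_income

# Toy figures, trillions of dollars, for illustration only.
land = implied_land_value(real_estate_value=25.0, structures_cost=15.0)
share = land_share_of_income(land, national_income=14.0)
```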

I have taken that bold step. Behold – the real value in present dollars of all privately held American land since FY 1951:

it's good to have land

Oh, look – a housing bubble!

But because this is the Age of Piketty, why stop there? Thanks to the magic of the internet and spreadsheets, all of the data Piketty relied on in his book is freely available – and perhaps even more importantly, so is all the data Piketty and Zucman compiled in writing “Capital is Back,” which may be even more comprehensive and interesting. So using that data, I was able to calculate land as a share of national income from 1950-2012. Check it out*:

 this land is my land; it isn't your land


Oh look – a housing bubble!

And why stop there? We know from reading our Piketty that the capital-to-income ratio increased substantially during that time, so let’s calculate the land share of national capital:



Oh look – a…two housing bubbles?

It’s hard to know what to make of this at first glance, but after two decades steadily comprising a quarter of national capital, land grew over another two decades to nearly a third of it; and after a steep drop to under a fifth of national capital in less than a decade, it rebounded just about as quickly, then plummeted even faster to under a fifth again.

So the question must be asked – why didn’t we notice the first real estate bubble, just as large (though not as rapidly inflated) as the second? There are two answers.

The first answer is – we did! Read this piece from 1990 – 1990! – about the “emotional toll” of the collapse in housing prices. Or the many other pieces from the amazing New York Times archive documenting the ’80s housing bubble and the collapse in prices at the turn of the ’90s.

The second answer is – to the extent we didn’t, or didn’t really remember it, it’s because it didn’t torch the global financial system. Which clarifies a very important fact about what happened to the American economy in the late aughties – what happened involved a housing bubble, but wasn’t fundamentally about or caused by a housing bubble.

For context, here’s the homeownership rate for the United States:

get off my property


The ’00s housing bubble clearly involved bringing a lot of people into homeownership in a way the ’80s bubble did not; that earlier bubble, in fact, peaked even as homeownership rates declined.

There are a lot of lessons to learn about the ’00s bubble, about debt and leverage and fraud and inequality, but the lesson not to learn – or, perhaps, to unlearn – is that a bubble and its eventual popping, regardless of the asset under consideration, is a sufficient condition of a broader economic calamity. Now, it does seem clear that the ’80s housing bubble was in key ways simply smaller in magnitude than the later one; it represented a 50% increase as a ratio to national income rather than the doubling experienced in the aughties, even though both saw land increase similarly relative to capital. But there have been – and, no matter the stance of regulatory or, shudder, monetary policy, will continue to be – bubbles in capitalist economies. The policy goal we should be interested in is not preventing bubbles but building economic structures and institutions that are resilient to that fact of life in financialized post-industrial capitalism.

*Piketty and Zucman only provide national income up through 2010, so I had to impute 2011-2012 from other data with a few relatively banal assumptions.

The “sharing economy” is the new big idea.


And like all new big ideas, it has generated its share of consternation, some justified, much not, and all of it a little confused. For me the crux came together a little better after reading Emily Badger’s piece from a few weeks ago discussing some of the challenges in integrating “sharing economy” services into the existing regulatory framework and Daniel Rothschild’s piece on how sharing economy firms empower individuals to operationalize their erstwhile “dead capital.”

The thing I realized, really, is that we already know exactly what these firms do. They leverage economies of scale to provide a regulated and standardized forum to connect buyers and sellers. They are, for all intents and purposes, exchanges.

Financial exchanges have existed for a long time. In some form they may have existed in Rome; they have definitely existed since the late Middle Ages. Financial products are uniquely well-suited to exchanges: economies of scale are high, since trading pure institutional claims is low-cost and exchange volumes and aggregate values are high; regulation and standardization are both fairly simple, since intra-product shares are almost always equivalent. The thing is that, until recently, creating large, standardized, regulated exchanges for more heterogeneous asset classes and relatively small consumers and sellers was a logistically monumental project with highly uncertain returns.

The internet, obviously, changed that – and in the long-run, we may see the rise of internet commercial fora as more important than information fora. eBay has been around for a while, as has Amazon (which hosts other sellers as well as sells directly); and what is Craigslist if not an exchange? The internet also, of course, allowed smaller investors to engage in traditional exchanges.

The sharing economy, then, is basically just the rise of service and rental exchanges. You have a car, I want a ride. You have a house, I need a room. I have an idea, you want to take a flyer on buying it before it’s complete. I need a task done, and you are a rabbit. In the end, it’s just about new kinds of exchanges arising, from a combination of the accelerating technology and penetration of the internet as well as just plain old creativity. Nobody is sharing anything. People are exchanging goods and services. They’re just doing more exchange, and more different kinds of exchange, than they could previously.

And while the article I linked to above highlights some of the potential knock-on effects of this change, the fundamental arc is towards democratization and empowerment. Students need income between classes and studying, so they drive Ubers; homeowners have a room to spare, so they rent out their room to tourists and students. The losers are the people who “made” markets before exchanges could make them, who provided the centralizing, organizing force and reaped the benefits of owning capital – taxi medallion owners, hoteliers. If people stay in Airbnb rooms when they otherwise would have stayed in hotels, that shifts income from billionaires to middle class people and on the margins unlocks valuable land for more productive uses; if it incentivizes new trips, well, it increased human happiness!

But more to the point, when we think about how to regulate these things (and regulate we can and should), we need to consider the costs, benefits, and distributional impacts of the regulatory scheme; but even before that, we need to conceptualize what we are regulating and why. We need to think about what kind of regulatory burden needs to fall on the exchange vs. the participants. Perhaps the kind of certification a hotel or even a traditional B&B receives is not what an Airbnb rentier should require; but if cities streamline the process of certifying Airbnb rooms, should Airbnb then accept more responsibility for enforcing those rules? Is Uber liable when a driver’s error causes harm, even when there is no fare in the back? What should Kickstarter be doing to limit potential catastrophic project failure – if anything? Thinking about these firms as regulated exchanges, as spaces for others to buy and rent the goods and services of others, will give us a clearer idea of how to conceive of them, as well as how to approach the question of “normalizing” them. Especially as international exchanges grow in scope, it’s something we need to think more about. And perhaps it will lead us to think about how we should be regulating the exchanges that already exist.


On a personal note, those out there reading this thing may have noticed that, over the last month, there hasn’t been much to read. There was a good reason for that – life has been totally bonkers. In a good way, but bonkers nonetheless. However, the causes of that bonkerdom are rapidly drawing to a close, first and foremost my education, which is, for all intents and purposes, complete – after two very meaningful but not functionally high-stakes presentations, one this week and one the next, I will officially be a Master of Public Policy; my 22-month roller-coaster of being a full-time worker and a full-time student (not to mention a full-time spouse, full-time homeowner, and full-time hound-parent) is, basically, at an end. Amen.

Therefore, all the things to which I have been unable to devote a sufficient portion of my CPU and RAM for want of time are suddenly, amazingly possible – and while, sorry readers, my wife is more important than you, and I do have many other wonderful friends and delightful hobbies, it really does mean this blog can take a good deal more of my efforts than it has in a while. Commensurately, expect more posts – and more, as I have plans not only to kick this blog up a notch but to expand both its depth and its scope as well. Big promises from a guy who still hasn’t put out his Best Albums of 2013 list (it’s May, I am ashamed), but promises I plan to live up to. Stay tuned.
