You are currently browsing the tag archive for the ‘Thomas Piketty’ tag.

It seems like discussion of Piketty’s Capital has run its course and much of the commentary has moved on (though not necessarily from the broader topic), so now is as good a time as any to peer back and reflect on how the debate around the book ended (if such a thing can be summarized). From my own vantage point, the debate about the book (not necessarily the discussion) stalled out around a single question, so I will do my best to restate and clarify that question so as to focus where more evidence and argument is needed, should this be a conversation anyone wishes to resume. None of this is new, exactly, but it’s worth recounting given the importance of the question and the stakes surrounding it.

Around 1800 AD, living standards in some countries began to rise substantially, and over the past 200 years, that rise (as measured in GDP per capita) has been on the order of a factor of 50. It generally correlates with other indicators of rising living standards to a degree that, with some exceptions (such as thinly-populated resource-rich countries), it is generally, though not universally, accepted practice to use GDP per capita as a good-enough shorthand for broad living standards. Whatever the case, exactly how and why this increase transpired is still a matter of debate, in no small measure because most people would find it desirable to replicate the phenomenon in those areas that have not yet experienced it. Indeed, some countries that missed the phenomenon’s initial emergence have experienced it since, leaving, essentially, three groups of countries – those that have experienced it, those that have not, and those in transition.

Piketty’s book, while not exclusively so, is overwhelmingly focused on the first kind of country. A compelling portion of his narrative documents that transformation, yet the broader focus of the book is on what has transpired since the transformation was consolidated in the era following the Second World War. There are two key factors to be documented. The first is that the countries that have fully experienced this transformation are themselves not ‘complete’ in this regard – average living standards (recent economic troubles excepted) continue to rise and are generally, though not universally, expected to keep rising absent extreme calamity on the scale of global catastrophic climate change. The second is the change in the distribution of income – since a moment of ‘peak equality’ around 1970, most of the countries Piketty analyzes have seen a sharp increase in inequality, the specific degree of which depends on the method of measurement but whose general contours are not really disputed. This, Piketty and many others believe, poses a problem for these countries that cannot be alleviated solely by continuing increases in average living standards or aggregate wealth and income growth.

Piketty devotes a lot of space to developing a simple model of how the aggregate quantity and distribution of capital can drive income inequality. This remarkably simple model requires only three input variables – the growth rate of the economy, the average return to capital, and the savings rate (perhaps better phrased as the rate of capital formation relative to national income) – to generate a long term prediction of two key ratios: the ratio of capital to income, and the capital share of national income. From there, wealth inequality can be used directly to compute a floor on income inequality – for example, if 1% of the population owns 50% of the national wealth and the capital share of income is 30%, then that 1% captures, at a minimum, 15% of national income.
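In code, the model’s arithmetic looks like this (a minimal sketch; the function names are mine, and the input values are illustrative, chosen to reproduce the 30%-capital-share example above):

```python
# Piketty's two accounting identities, plus the income-inequality floor.
# beta  = s / g     : long-run capital/income ratio ("second fundamental law")
# alpha = r * beta  : capital share of national income ("first fundamental law")

def piketty_model(g, r, s):
    """Return (capital/income ratio, capital share of income)."""
    beta = s / g          # capital/income ratio converges to savings rate / growth rate
    alpha = r * beta      # capital share = return to capital * capital/income ratio
    return beta, alpha

def income_share_floor(top_wealth_share, alpha):
    """A group owning a share of wealth captures at least that share of capital income."""
    return top_wealth_share * alpha

# Illustrative inputs: 1.5% growth, 5% return to capital, 9% savings rate.
beta, alpha = piketty_model(g=0.015, r=0.05, s=0.09)
print(beta, alpha)                      # beta = 6.0, alpha = 0.30
print(income_share_floor(0.50, alpha))  # 1% owning 50% of wealth -> at least 15% of income
```

The last line is the worked example from the paragraph above: a 30% capital share and a 50% wealth share imply a 15% floor on that group’s income share.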

And here we arrive at the crux of the debate. Piketty’s model implicitly assumes a certain exogeneity between those three input variables and the two ratios they converge towards – i.e., that they are not inherently correlated with each other. This exogeneity is both a fragility in Piketty’s model and a challenge to mainstream economic theory. The fragility is that, if the variables are strongly correlated (in the direction such correlation is expected), and especially if there is iterative feedback between them over time, then Piketty’s model no longer produces outcomes in which wealth inequality drives income inequality. The key example here is the average return to capital; were it to fall in proportion to the rise of total capital accumulation, the capital share of national income would be invariant to the quantity of capital, largely undermining the mechanism by which present wealth inequality drives future income inequality. Furthermore, were this anticipatable decline in the return to capital to drive a decline in savings, the capital/national income ratio would converge at a substantially smaller value than that projected by extrapolating from the initial period, further depressing the likelihood of ever-increasing wealth-driven income inequality.
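To make the fragility concrete: in the unit-elasticity (Cobb-Douglas) case, the return falls in exact inverse proportion to the capital/income ratio, leaving the capital share pinned no matter how much capital accumulates. A toy comparison (parameter values mine):

```python
# Toy comparison: a fixed return to capital vs. a return that declines in
# inverse proportion to the capital/income ratio (the Cobb-Douglas case).

def capital_share_fixed_r(beta, r=0.05):
    return r * beta              # share rises one-for-one with accumulation

def capital_share_cobb_douglas(beta, a=0.30):
    r = a / beta                 # return declines as capital piles up...
    return r * beta              # ...so the share stays constant at a

for beta in (3, 6, 9):
    print(beta, capital_share_fixed_r(beta), capital_share_cobb_douglas(beta))
# With a fixed 5% return, the capital share climbs from 15% to 45% as beta
# rises; in the Cobb-Douglas case it stays at 30% regardless of beta.
```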

This is also precisely the challenge to mainstream economic theory. These correlations and feedbacks are exactly what fundamental, strongly-held economic ideas predict – most centrally, that investment behavior is driven by that most central economic force, supply and demand. Piketty, however, is not simply laying down an alternative model, but an empirical challenge to that orthodoxy. The most crucial assertion made by his model – that the return to capital fails to decline in proportion to the supply of capital – is not simply a theoretical alternative but one derived from the meticulously researched and calculated estimates in his unprecedented data. As I myself pointed out in my write-up of Piketty’s book, the data show that the return to capital is sufficiently resilient to its accumulation to justify Piketty’s model. At least, that is, without controlling for any additional factors.

And here is where debate stalled, with one side asserting that theory demands these variables be tightly correlated, and the other side responding that the empirics demonstrate they are not. The problem, of course, is that macroeconometric panel empirics are extremely sensitive to model specification – to the point of being perhaps the perfect example of how any decent statistically-versed researcher with strong priors can generate the outcomes from the data they wish to receive. Certainly it is more than possible to generate a superfluity of complex models demonstrating the theoretically-predicted correlations, and these models will collectively have zero persuasive power, because it is trivially easy to create as many or more equally-plausible, equally-complex models that demonstrate the opposite.

Why does this all matter, to the degree it’s worth recounting in such detail to the tune of a thousand words? Because it strikes directly at the heart of the most important argument for tolerating high income inequality.

There are basically three arguments in favor of tolerating high income inequality, which I will attempt to summarize as fairly as I can.

  • The ‘Just Deserts’ Position: incomes reflect the inherently just outcomes of markets. Beyond a certain threshold to prevent the worst forms of misery, it is therefore a violation of justice to take from the deserving and distribute to the undeserving.
  • The ‘Pink Salt’ Position: income inequality is irrelevant except to the irremediably envious, resentful, or spiteful. What matters is preserving and increasing human happiness, which is largely driven by civil liberties, non-market institutions such as family and community, and the secondary impacts of economic progress.
  • The ‘Golden Egg’ Position: income inequality may be ceteris paribus bad, but aggregate economic growth is extremely good, to a degree that in most plausible scenarios swamps income inequality. Furthermore, income inequality and economic growth may be conjoined outcomes of our economic system that cannot be modified independently. Therefore, we should be extremely cautious about attempting to alleviate income inequality through policies that slow the rate of economic growth, as this may reduce not just aggregate utility but the utility of those benefiting directly from redistribution.

It will shock nobody to hear that I reject the first argument outright in the strongest possible terms, and the second in quite strong terms as well. Indeed, I believe that the majority of Americans, and certainly the majority of voters in developed countries, disagree with those arguments too. It is the third argument that gives pause to many – including, to a degree, me (though that pause is still far from convincing in my own case). The average person living in a developed country today is vastly better off than a person living in that same country in 1800, and it is not impossible to imagine that the average person living in a developed country in 2100 will be vastly better off than that average person today. Impeding our shared progress in that regard could defer developments that improve the quality of most lives while simultaneously deferring developments (like innovation in renewable energy sources and storage) that could mitigate or reverse the worst consequences of economic growth to date.

This all converges on something of an ironic surprise. In this debate, it has been the left that has been advocating, implicitly or explicitly, on behalf of the resilience of capitalism (broadly defined) and its ability to deliver human prosperity, whereas it has been the right that has claimed, implicitly or explicitly, that capitalism and the prosperity it delivers is fragile, so much so that even increasing post-market redistribution (as opposed to pre-market regulatory redistribution through minimum wages, stronger protections for unions, and abridging the current rights and privileges of lenders and shareholders) could, to use a tired aphorism, kill the goose that lays the golden eggs. This ideological positioning isn’t wholly novel, and whether it is instrumental and ephemeral or representative of something larger remains to be seen; but it is notable, and worth pondering for what it says about the state of both the contemporary mainstream left and right movements in the United States (if not beyond).

This is the dumbest post I have ever written. You have been warned.

 

I found that last bit…intriguing. Backing our currency with cat videos would, of course, be very difficult to make work (backing a currency with something whose marginal cost of replication is zero is probably not a recipe for stability)…but what if we backed our currency with actual cats?

i can haz currency stability

The biggest question to answer is ‘how many cats would the government need to hold in reserve to make the standard work?’ So I went back to look at how much gold the government had when it had a gold standard, and then, in need of a denominator, indexed it as a ratio to national income (using Piketty & Zucman’s data).

look this all made sense at the time


Rather than over-analyze the data, I just took the average value of all the individual year values, came up with 1.98%, and multiplied that by national income today (just over $14.5 trillion) to estimate that the government would need to hold in reserve $288.7 billion in cats to maintain a cat standard.

This means we have a problem. The Humane Society estimates that there are 95.6 million owned cats in America, and that there are another 30-40 million stray or feral cats. That means an outside estimate of ~135 million cats in the United States. Which means even if the government eminently domained every living cat in America, that would still imply a valuation of over $2,000 per cat, which is an order of magnitude more than the current market price. This would, among other things, be highly disruptive to the cat market. It would also be hard to sustain, since rescue cats are largely sold by non-profits at the marginal cost of vaccinations, microchipping, etc.
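Spelling out the arithmetic (the 1.98% gold/income average and the cat counts are the post’s figures; I use $14.58 trillion for national income so the product matches the $288.7 billion quoted):

```python
# Back-of-the-envelope cat-standard arithmetic, using the post's figures.
gold_to_income_ratio = 0.0198     # average gold reserve / national income ratio
national_income = 14.58e12        # "just over $14.5 trillion"

reserve_needed = gold_to_income_ratio * national_income
print(reserve_needed / 1e9)       # ~288.7 billion dollars of cats

owned_cats = 95.6e6               # Humane Society estimate of owned cats
stray_cats = 40e6                 # upper end of the 30-40 million stray/feral range
total_cats = owned_cats + stray_cats   # ~135 million cats, an outside estimate

print(reserve_needed / total_cats)     # over $2,000 per cat: an order of magnitude above market
```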

So what the government needs to do is breed cats. Lots of cats.

MOAR MOAR MOAR

Assuming we’re not talking about a purebred standard, the kind of cats the government might be keeping in reserve would probably have a market value of around $100/each, which means we would need the government to hold, in reserve, twenty times as many cats as exist in the United States today – 2.7 billion cats. Firstly, that could take a little time – depending on how large a cat base the government started with (presumably they wouldn’t catnap every cat in America), as long as a decade. This is not the insurmountable obstacle, though.

Land is.

Cats, by nature, are kind of territorial.

all mine

One study, in fact, shows a leading cause of death for outdoor cats is…other cats. Meouch.

That same study showed that outdoor cats have quite a substantial home range – as large as 1,351 acres, though the average is just 4.9 acres. Even applying that average across the board to 2.7 billion cats gets you to 20.7 million square miles – over a third of all the land area on Earth.

So let’s assume substantial overlap – even if you assume 100 cats per home range, that still gets you to roughly 207,000 square miles, more than twice the area of Wyoming. To get all those cats into, say, Wyoming, you’d have a density of 27,602 cats/square mile – which is shockingly close to the human density, 27,779 people/square mile, of New York City.
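Checking the territorial arithmetic (640 acres per square mile; Wyoming’s land area of roughly 97,800 square miles is the standard figure):

```python
# Cat territory arithmetic: acres -> square miles, then density in Wyoming.
ACRES_PER_SQ_MILE = 640

cats = 2.7e9
home_range_acres = 4.9                 # average outdoor-cat home range from the study

solo_range_sq_mi = cats * home_range_acres / ACRES_PER_SQ_MILE
print(solo_range_sq_mi / 1e6)          # ~20.7 million sq mi: over a third of Earth's ~57M of land

shared_sq_mi = solo_range_sq_mi / 100  # assume 100 cats share each home range
print(shared_sq_mi)                    # ~207,000 square miles

wyoming_sq_mi = 97_813                 # Wyoming's land area
print(cats / wyoming_sq_mi)            # ~27,600 cats per square mile
```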

Wyoming, in other words, would look like this:

everybody! everybody! everybody wants to be a cat!

And it turns out Wyoming land isn’t cheap – if you apply the $450/acre for ranch land quoted in this article, buying up the state’s roughly 62 million acres would cost over $28 billion.

Of course, total land value in the United States is probably over $15 trillion at this point so we could just have a land standard. That would be a lot easier. A whole lot easier…

 

 

 

Than herding cats.

damn right i went there

 

stewie "david ricardo" griffin

Linking to Dylan Matthews’s generally-excellent piece on the correct way to manage one’s personal finances, Matt Yglesias says “stocks are on average a much better long-term investment than houses.”

This, of course, is an increasingly common view in the “internet wonk community” (one I consider myself a member of), distinct from the related and equally-prevalent view that ‘homeownership should be much less subsidized than it is now.’

This is also a view I take issue with, which you’d already know if you read my big Piketty #thinkpiece – you read my big Piketty #thinkpiece, right? right? – and one that I think needs a little elucidating and defending in detail.

There are three basic reasons that buying a house is a vastly better investment than current wonkpinion suggests. The first is that making large leveraged investments can pay off hugely even if the underlying growth rate of the purchased asset is slow. Let’s demonstrate.* Let’s take an average American buying an average house in an average way – $200,000 purchase price, 20% down, 4% closing costs, 5% interest rate. Now let’s say the value of that house grows reeeeeeeally slowly – just 0.3%/year, which just so happens to be the compounded annual growth rate of the Case-Shiller index since 1947.

If our average American sells their house after 10 years, their initial $48,000 equity investment will become $67,691.08 – which means their equity grew at a CAGR of 3.5%! If they sell after 15 years, they’ll net $92,209.57, which is a CAGR of 4.45%. Hey, that’s a lot higher than the 0.3% growth rate of the house’s price itself, isn’t it?

It sure is! The amazing power of leveraged investments is that you can turn a little bit of equity into a large return, as Matt Yglesias notes concisely here. Here, in fact, is a nice little graph demonstrating the implied return rate of selling your house after making regular mortgage payments for a given number of years, given the interest rate paid, assuming that meager 0.3% growth rate:

n33ds m0Ar l3vr1j
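The 10-year scenario above can be reconstructed in a few lines (my own sketch: I assume roughly 4% selling costs at exit, which lands the figures near, though not exactly on, the ones quoted; and, as noted, the return ignores the monthly payments themselves):

```python
# Leveraged-house return sketch: $200k house, 20% down, 4% closing costs,
# 30-year mortgage at 5%, house price growing at 0.3%/year.

def mortgage_balance(principal, annual_rate, years_elapsed, term_years=30):
    """Remaining balance on a fixed-payment mortgage after a number of years."""
    i = annual_rate / 12
    n = term_years * 12
    m = years_elapsed * 12
    payment = principal * i / (1 - (1 + i) ** -n)
    return principal * (1 + i) ** m - payment * ((1 + i) ** m - 1) / i

price, down, closing = 200_000, 0.20, 0.04
initial_equity = price * (down + closing)     # $48,000 out of pocket
loan = price * (1 - down)

years = 10
value = price * 1.003 ** years                # 0.3%/yr appreciation
proceeds = value * (1 - 0.04) - mortgage_balance(loan, 0.05, years)  # 4% selling costs assumed
cagr = (proceeds / initial_equity) ** (1 / years) - 1
print(round(proceeds), round(cagr, 4))        # roughly $67-68k of equity, ~3.5% CAGR
```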

After 13 years, you’ll get a 3% return even at a very high interest rate; at 19 years, you’ll get a 4% return. In fact, you can assume zero growth and still get a substantial return on your initial investment – as long as you don’t count the regular payments on the debt.

Hey, what about the regular payments on the debt?

Good question! This brings us to my next two points. Because if leveraged investments are so awesome, why don’t we empower (and perhaps subsidize) average people to make large leveraged investments in stocks, which have a much larger underlying growth rate? Beyond all the other problems with that idea (not that nobody has pitched it), the thing about a house is that it has an unusual counterfactual. If you buy stocks with leverage, in theory the payments on the debt should come out of your savings, creating a counterfactual of simply saving and investing that money. But the counterfactual to owning is renting. This creates some curious conditions that lead to my next two points in favor of buying a house – inflation protection and subsidies.

There is obviously some connection between the purchase price of a house (and therefore the amortized monthly payment on the mortgage) and the rent it could fetch – regardless of where you fall on the capital controversy that dare not speak its name, there must be some fundamental link between the price of an asset and its expected returns. However, a mortgage is detached from the imputed rent (the flow of sheltering services) a house delivers, and is therefore nominally frozen in a way that rents are not. So even if a mortgage today is substantially more expensive than the rent payment on an equivalent housing unit, in thirty years even very low inflation will change that drastically. Just 2% average annual inflation entails an 80% increase in the price level over three decades, meaning the real annual mortgage payment declines by nearly half over that time. Rent, in the meantime, keeps going up (except in rare cases, which can entail their own problems), at least as fast as inflation. Therefore even a mortgage whose monthly payment is more expensive than a rent payment today will be much cheaper than renting in a few years.
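The arithmetic behind that claim, in two lines:

```python
# A nominally frozen mortgage payment vs. rent that rises with inflation.
inflation = 0.02
years = 30

price_level = (1 + inflation) ** years
print(round(price_level, 2))            # ~1.81: an ~80% rise in the price level

real_mortgage_payment = 1 / price_level  # the fixed nominal payment, in today's dollars
print(round(real_mortgage_payment, 2))   # ~0.55: the real payment falls by nearly half

real_rent = 1.0   # rent that keeps pace with inflation is flat in real terms
```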

Aha, you might say, but there is a problem with this – the magic of compound interest means that the difference-in-monthly-payment savings accrued today by the renter will be much more valuable in retirement than the parallel savings accrued years from now by the owner. This is true! But that’s where the subsidies kick in. Our primary national subsidy for homeownership is to allow mortgage-payers to deduct the interest portion of their payments from their income – and the amortization structure of mortgages means the share of payments comprised of interest is highest right when the mortgage begins, and declines until the loan expires:

interest on your interest
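That declining interest share can be sketched with standard fixed-payment amortization (the 5%, 30-year loan here is illustrative, mirroring the mortgage example earlier in the post):

```python
# Interest share of each year's mortgage payments over a 30-year, 5% loan.
def interest_share(annual_rate, term_years, year):
    """Fraction of the given year's payments that is interest rather than principal."""
    i = annual_rate / 12
    n = term_years * 12
    payment = i / (1 - (1 + i) ** -n)    # monthly payment per dollar of principal
    balance = 1.0
    interest_paid = principal_paid = 0.0
    for month in range(year * 12):
        interest = balance * i
        if month >= (year - 1) * 12:     # accumulate only the target year's payments
            interest_paid += interest
            principal_paid += payment - interest
        balance -= payment - interest
    return interest_paid / (interest_paid + principal_paid)

for yr in (1, 10, 20, 30):
    print(yr, round(interest_share(0.05, 30, yr), 2))
# The interest share starts high (~0.77 in year one) and falls steadily,
# approaching zero in the loan's final year.
```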

This benefit comes when “housing” costs – really, housing-purchase-debt costs – are at their highest, which is also when incomes are at their lowest, earlier in life. It is difficult – very difficult – to defend the home mortgage interest deduction as currently structured, as such a large portion of the benefit goes to such a small and disproportionately well-off group. It is worth considering, though, whether the idea at the core of the program is sound. And either way, whether or not you think we should have these subsidies, you should still factor them in when considering what constitutes a good investment under the status quo.

Of course, I haven’t even touched on imputed rents once a house is fully-owned (or, conversely, actual rents), which are of course the most important return to a house, well beyond the capital gains discussed heretofore. But this leads to the most important conclusion: not that houses are such a great investment per se; just that they’re a great investment for people without a lot of capital because of their unique pathway to leverage. If you had half-a-million dollars, should you buy a house (or apartment) to rent or a portfolio of financial products? Almost always the latter. But if you only have an order of magnitude less than that to your name, it may make sense to buy something with a lower return (not to mention wholly undiversified) because you can lever up. Just another way that large capital concentrations can secure higher returns – or at least exercise more freedom of action.

Spreadsheet, as always, attached – calculate your own expected returns on your housing investment!

House Investment

*All of these numbers are real and net-of-depreciation unless otherwise noted.

A weekend thought: my father is the kind of guy who likes to come up with big monocausal theories to explain every little thing; he missed his calling as a columnist for a major newspaper. Anyway, last week we were chatting and he expounded on one of these theories, in this case a coherent and compelling narrative for the dramatic increase in dog ownership in recent years. The theory is unimportant (it had to do with a decline in aggregate nachas) but afterwards I decided for the heck of it to fact-check his theory. And what do you know? According to the AVMA’s pet census, dog ownership rates have declined, very slightly, from 2007 to 2012.

Now, I know why my dad thought otherwise – over the past few years, dogs have become fantastically more visible in the environments he inhabits, mainly urban and near-suburban NYC. I am certain that, compared to 5-10 years ago, many more dogs can be seen in public, more dog parks have emerged, and there are many more stores offering pet-related goods and services. But these changes are intertwined with substantial cultural and demographic shifts, and are demonstrably not driven by a change in the absolute number of dogs or the dog-ownership rate.

It’s hard to prove things with data, even if you have a lot of really good data. There will always be multiple valid interpretations of the data, and even advanced statistical methods can be problematic and disputable, and hard to use to truly, conclusively prove a single interpretation. As Russ Roberts is fond of pointing out, it’s hard to name a single empirical econometric work that has conclusively resolved a dispute in the field of economics.

But what data can do is disprove things, often quite easily. Scott Winship will argue to death that Piketty’s market-income data is not the best kind of data for understanding changes in income inequality, but what you can’t do is proclaim or expound a theory explaining a decrease in market income inequality. This goes for a whole host of things – now that data is plentiful, accessible, available, and manipulable to a degree exponentially vaster than ever before in human history, it’s become that much harder to promote ideas contrary to the data. This is the big hidden benefit of bigger, freer, better data – it may not conclusively prove things, but it can most certainly disprove them, and thereby help hone and focus our understanding of the world.

Of course, I’m well over halfway into writing my Big Important Thinkpiece about Capital in the 21st Century and the FT decides to throw a grenade. Smarter and more knowledgeable people than I have gone back and forth on the specific issues, and my sense aligns with the general consensus: there are real issues with some of the data, but the FT’s criticisms were at least somewhat overblown, and there is not nearly enough to overturn the central empirical conclusions of Piketty’s work.

What strikes me most about this episode is just how unbelievably hard true data and methodological transparency is. The spreadsheet vs. statistical-programming-platform debate seems to me to be a red herring – at least as the paradigm stands, each has its uses, limitations, and common pitfalls, and for the kind of work Piketty was doing, which relied less on complex statistical methods than on careful data aggregation and cleaning, a spreadsheet is probably as fine a tool as any.

The bigger issue is that current standards for data transparency, while certainly well-advanced by the power of the internet to make raw data freely available, are still sorely lacking. The real problem is that published data and code, while useful, is still the tip of a much larger methodological iceberg whose base, like a pyramid (because I mix metaphors like The Avalanches mix phat beats), extends much deeper and wider than the final work. If a published paper is the apex, the final dataset is still just a relatively thin layer, when what we care about is the base.

To operationalize this a little, let me pick an example that’s both a very good one and also one I happen to be quite familiar with, as I had to replicate and extend the paper for my Econometrics course. In 2008, Daron Acemoglu, Simon Johnson, James A. Robinson, and Pierre Yared wrote a paper entitled “Income and Democracy” for American Economic Review in which they claimed to have demonstrated empirically that there is no detectable causal relationship between levels of national income and democratic political development.

The paper is linked; the data, which is available at AER’s website, are also attached to this post. I encourage you to download it and take a look for yourself, even if you’re far from an expert or even afraid of numbers altogether. You’ll notice, first and foremost, that it’s a spreadsheet. An Excel spreadsheet. It’s full of numbers. Additionally, the sheets have some text boxes. Those textboxes have Stata code. If you copy and paste all the numbers into Stata, then copy and paste the corresponding code into Stata, then run the code, it will produce a bunch of results. Those results match the results published in the corresponding table in the paper. Congratulations! You, like me, have replicated a published work of complex empirical macroeconomics!

Except, of course, you haven’t done very much at all. You just replicated a series of purely algorithmic functions – you’re a Chinese room of sorts (as much as I loathe that metaphor). Most importantly, you didn’t replicate the process that led to the production of this spreadsheet full of numbers. In this instance, there are 16 different variables, each of which is drawn from a different source. To truly “replicate” the work done by AJR&Y you would have to go to each of those sources and cross-check each of the datapoints – of which there are many, because the unit of analysis is the country-year; their central panel alone, the 5-Year Panel, has 36,603 datapoints over 2321 different country-years. Many of these datapoints come from other papers – do you replicate those? And many of them required some kind of transformation between their source and their final form in the paper – that also has to be replicated. Additionally, two of those variables are wholly novel – the trade-weighted GDP index, as well as its secondary sibling, the trade-weighted democracy index. To produce those datapoints requires not merely transcription but computation. If, in the end, you were to superhumanly do all this, what would you do if you found some discrepancies? Is it author error? Author manipulation? Or your error? How would you know?

And none of this speaks to differences of methodological opinion – in assembling even seemingly-simple data, judgment calls about how it will be computed and represented must be made. There are also higher-level judgment calls – what is a country? Which should be included and excluded? In my own extension of their work, I added a new variable to their dataset, and much the same questions apply – were I to simply hand you my augmented data, you would have no way of knowing precisely how or why I computed that variable. And we haven’t even reached the most meaningful questions – most centrally, are these data or these statistical methods the right tools to answer the questions the authors raise? In this particular case, while there is much to admire about their work, I have my doubts – but even moving on to address those doubts involves some throwing up of hands in the face of the enormity of their dataset. We are essentially forced to say “assume data methodology correct.”

Piketty’s data, in their own way, go well beyond simply a spreadsheet full of numbers – there were nested workbooks, with the final data actually being formulae that referred to preceding sources of raw-er data that were transformed into the variables of Piketty’s interest. Piketty also included other raw data sources in his repository even if they were not linked via computer programming to the spreadsheets. This is extremely transparent, but still leaves key questions unanswered – some “what” and “how” questions, but also “why” questions – why did you do this this way vs. that way? Why did you use this expression to transform this data into that variable? Why did you make this exception to that rule? Why did you prioritize different data points in different years? A dataset as large and complex as Piketty’s is going to have hundreds, even thousands of individual instances where these questions can be raised with no automatic system of providing answers other than having the author manually address them as they are raised.

This is, of course, woefully inefficient, as well as to some degree providing perverse incentives. If Piketty had provided no transparency at all, well, that would have been what every author of every book did going back centuries until very, very recently. In today’s context it may have seemed odd, but it is what it is. If he had been less transparent, say by releasing simpler spreadsheets with inert results rather than transparent formulae calling on a broader set of data, it would have made it harder, not easier, for the FT to interrogate his methods and choices – that “why did he add 2 to that variable” thing, for example, would have been invisible. The FT had the privilege of being able to do at least some deconstruction of Piketty’s data, as opposed to reconstruction, the latter of which can leave the reasons for discrepancies substantially more ambiguous than the former. As it currently stands, high levels of attention on your research has the nasty side-effect of drawing attention to transparent data but opaque methods, methods that, while in all likelihood at least as defensible as any other choice, are extremely hard under the status quo to present and defend systematically against aggressive inquisition.

The kicker, of course, is that Piketty’s data is coming under exceptional, extraordinary, above-and-beyond scrutiny – how many works that are merely “important” but not “seminal” never undergo even the most basic attempts at replication? How many papers are published in which nobody even plugs in the data and the code and cross-checks the tables – forget about checking the methodology undergirding the underlying data! And these are problems that relate, at least somewhat, to publicly available and verifiable datasets, like national accounts and demographics. What about data on more obscure subjects with only a single, difficult-to-verify source? Or data produced directly by the researchers?

In discussing this on Twitter, I advocated for the creation of a unified data platform that would not only allow users to merge the functions of (and/or toggle between) spreadsheet and statistical-programming GUIs, but also create a running, annotatable log of a user’s choices, not merely static input and output. Such a platform could produce a user-friendly log that could either be read in a common format (html, pdf, doc, epub, mobi) or uploaded by a user in a packaged file with the data and code to actually replicate, from the very beginning, how a researcher took raw input and created a dataset, as well as how they analyzed that dataset to draw conclusions. I’m afraid that without such a system, or some other way of making not only data but start-to-finish methodologies transparent, accessible, and replicable, increased transparency may end up paradoxically eroding trust in social science (not to mention the hard sciences) rather than buttressing it.

Data attached: Income and Democracy Data (AER adjustment), DATA_SET_AER.v98.n3June2008.p808, AER Readme File

I’ve been working on collecting some longer thoughts on Piketty’s book now that I’ve finished it (so yes, keep your eyes open for that) and in the meantime I’ve been having fun/getting distracted by playing around with his data, and especially the data from his paper with Gabriel Zucman, which, you know, read, then play too.

One thing I realized as I was going through is that Piketty and Zucman may have incidentally provided a new route to answer an old question – were America to at last make reparations for the vast and terrible evil of slavery, how much would or should the total be?

What is that route? Well, they provide certain annual estimates of the aggregate market value of all slaves in the United States from 1770 through abolition:

[Chart: the aggregate market value of slaves in the United States, 1770 through abolition, from Piketty and Zucman's data]

As you can see, the amount was persistently and stunningly high right through abolition.

Now, without wading too much into (heck, who am I kidding, diving headfirst into) the endlessly-resurrected Cambridge Capital Controversy: the price of capital is determined in large part by the income it generates, so the market value of an enslaved person was an implicit statement about the income slaveholders expected to receive from the forced labor of their prisoners. We can therefore compute the real aggregate dollar market value of slaves from 1776-1860 (imputing the intervening annual values in their time-series, which I did linearly; not necessarily a great assumption, but it's what I did, so there it is), then impute the annual income produced by, and stolen from, America's slaves. For that, I used 4%, the conservative end of Piketty's 4-5% range.

Then you have two more steps: firstly, you have to select a discount rate in order to compute the present value of the total of that income on the eve of the Civil War in 1860; then you have to select a discount rate to compound that through 2014.

Well, that’s where things get interesting. For now, let’s pick 1% for both of those discount rates (which I am doing for a reason, as you will see). That makes the value in 1860 of all the income stolen by the Slave Power since the Declaration of Independence said liberty was inalienable roughly $378 billion*. That $378 billion, compounded at 1% annually for 154 years, is worth just about $1.75 trillion.

But those discount rates are both low – really, really low, in fact. Lower than the rate of economic growth, the rate of return on capital, and lower than the discount rate used by the government. When you increase those discount rates, though, you start to get some very, very, very large numbers. If you increase just the pre-1860 discount rate to 4%, for example, the 1860 figure leaps to over a trillion dollars, which even at a discount rate of 1% thereafter still comes to well over four-and-a-half trillion dollars today. Even vaster is the increase that comes from increasing the post-1860 rate, even if you leave the pre-1860 rate at 1%. At 2%, today’s bill comes due at just under $8 trillion; at 3%, $35 trillion; at the government’s rate of 7%, it comes to over $12.5 quadrillion. That’s over six times the entire income of the planet since 1950, a number that even if we concluded it was just – and given the incalculable and incomparable horror of slavery as practiced in the antebellum United States, it’s difficult to say any amount of material reparation is adequately just – is in practice impossible to pay.

There are three conclusions I think are worth considering from the above analysis:

1) First and foremost, slavery was a crime beyond comparison or comprehension, compounded since by our collective failure not only to make right the crime as best we are able, but even to make the attempt (not to mention Jim Crow and all the related evils it encompasses).

2) Compound interest is a powerful force. Mathematically, obviously; but also morally. These large numbers my spreadsheet is producing are not neutral exercises – they are telling us something not only about the magnitude of the grave injustice of slavery but also the injustice of failing, year after year, to begin to pay down our massive debt to those whose exploitation and suffering was our economic backbone. And that only refers to our material debt; our moral debt, although never fully repayable, grows in the absence of substantive recognition (or the presence of regressive anti-recognition).

3) Discount rates tell us a lot about how we see our relation to our past and our future. The Stern Review, the widely-discussed report that recommended relatively large and rapid reductions in carbon emissions, became notable in good part because it triggered a debate about the proper discount rate we should use in assessing the costs and benefits of climate change policy. Bill Nordhaus, hardly a squish on the issue, notably took the report to task for using a very low discount rate – effectively, just over 1% on average.

It is hard to crystallize precisely the panoply of philosophical implications of how discount rates interact differently with different kinds of problems. In the case of climate change, a low discount rate implies that we today should place a relatively higher value on the costs future generations will suffer as a consequence of our activity, sufficiently high that we should be willing to bear large costs to forestall them. Commensurately, however, a low discount rate also implies a lower sensitivity to the costs borne by past generations, relative to the benefits received today. High discount rates, of course, imply the inverse in both situations – a greater sensitivity to the burden of present costs on future generations and the burden of past costs on present generations.
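A two-line example makes that asymmetry concrete (the costs, rates, and horizon here are picked arbitrarily for illustration):

```python
# A cost of 100 borne 100 years from now, and a cost of 100 borne 100 years
# ago, each valued today at a low (1%) and a high (7%) discount rate.

def present_value(cost, rate, years_from_now):
    """years_from_now > 0 discounts a future cost; < 0 compounds a past one."""
    return cost / (1 + rate) ** years_from_now

future_low  = present_value(100, 0.01, 100)   # future cost, low rate
future_high = present_value(100, 0.07, 100)   # future cost, high rate
past_low    = present_value(100, 0.01, -100)  # past cost, low rate
past_high   = present_value(100, 0.07, -100)  # past cost, high rate
# Low rates keep the future cost looming large but shrink the past one;
# high rates do the reverse.
```

The same 100-unit cost is worth about 37 today at 1% if it lies a century ahead, but tens of thousands if it lies a century behind at 7%.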

There is no consensus – and that is putting it lightly – over what discount rates are appropriate for what situations and analysis, and whether discount rates are even appropriate at all. And when we decide on how to approach policies whose hands stretch deeply into our past or future, it is worth considering what these choices, superficially dry and mathematical, say not just about inputs and outputs, but also the nature of our relationship to the generations that preceded us and those that will follow.

Data attached:

piketty slave reparations

*2010 dollars throughout.

So late last year Matt Yglesias found a simple and concise way to create a good-enough estimate of the value of all privately-held American land, using the Fed’s Z1. He did not, however, go on to take the most-obvious next step, which was to use FRED to compile all the relevant series to calculate the entire time-series.

I have taken that bold step. Behold – the real value in present dollars of all privately held American land since FY 1951:

[Chart: "it's good to have land" – the real value of all privately-held American land since FY 1951]

Oh, look – a housing bubble!
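The back-of-envelope method, as I understand it (so treat the exact balance-sheet lines as my paraphrase rather than gospel), is a residual: take the market value of real estate from the Z.1, subtract the replacement cost of the structures sitting on it, and deflate into constant dollars. A toy sketch with invented numbers:

```python
# Toy sketch of the Z.1-based land estimate: land value ~= market value of
# real estate minus replacement cost of structures, in constant dollars.
# The three-year series below are invented placeholders, not Z.1 figures.

def land_value(real_estate_mv, structures_cost, deflator, base=1.0):
    """Nominal land residual, restated in base-period dollars."""
    return [(mv - sc) * base / d
            for mv, sc, d in zip(real_estate_mv, structures_cost, deflator)]

mv   = [20.0, 25.0, 22.0]   # hypothetical market value of real estate (trillions)
sc   = [12.0, 13.0, 13.5]   # hypothetical replacement cost of structures
defl = [1.00, 1.05, 1.08]   # hypothetical price deflator, base year = first year
land = land_value(mv, sc, defl)
```

Run the same residual across every year of the relevant FRED series and you get the chart above.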

But because this is the Age of Piketty, why stop there? Thanks to the magic of the internet and spreadsheets, all of the data Piketty relied on in his book is freely available – and perhaps even more importantly, so is all the data Piketty and Zucman compiled in writing “Capital is Back,” which may be even more comprehensive and interesting. So using that data, I was able to calculate land as a share of national income from 1950-2012. Check it out*:

[Chart: "this land is my land; it isn't your land" – land as a share of national income, 1950-2012]

 

Oh look – a housing bubble!

And why stop there? We know from reading our Piketty that the capital-to-income ratio increased substantially during that time, so let’s calculate the land share of national capital:

 

[Chart: land as a share of national capital, 1950-2012]

Oh look – a…two housing bubbles?

It’s hard to know what to make of this at first glance, but after two decades steadily comprising a quarter of national capital, land grew over another two decades to nearly a third of it; and after a steep drop to under a fifth of national capital in less than a decade, just about as quickly rebounded, then plummeted even faster to under a fifth again.

So the question must be asked – why didn't we notice the first real estate bubble, which was just as large (though not as rapidly inflated) as the second? There are two answers.

The first answer is – we did! Read this piece from 1990 – 1990! – about the “emotional toll” of the collapse in housing prices. Or all these other amazing pieces from the amazing New York Times archive documenting the ’80s housing bubble and the collapse in prices at the turn of the ’90s.

The second answer is – to the extent we didn’t, or didn’t really remember it, it’s because it didn’t torch the global financial system. Which clarifies a very important fact about what happened to the American economy in the late aughties – what happened involved a housing bubble, but wasn’t fundamentally about or caused by a housing bubble.

For context, here’s the homeownership rate for the United States:

[Chart: "get off my property" – the U.S. homeownership rate]

 

The 00’s housing bubble clearly involved bringing a lot of people into homeownership in a way the 80’s bubble did not; that bubble, in fact, peaked even as homeownership rates had declined.

There are a lot of lessons to learn about the 00s bubble, about debt and leverage and fraud and inequality, but the lesson not to learn – or, perhaps, to unlearn – is that a bubble and its eventual popping, regardless of the asset under consideration, is a sufficient condition for a broader economic calamity. Now, it does seem clear that the 80s housing bubble was in key ways simply smaller in magnitude than the later one; it represented a 50% increase in land as a ratio to national income rather than the doubling experienced in the aughties, even though both saw land increase similarly relative to capital. But there have been – and, no matter the stance of regulatory or, shudder, monetary policy, will continue to be – bubbles in capitalist economies. The policy goal we should be interested in is not preventing bubbles but building economic structures and institutions that are resilient to that fact of life in financialized post-industrial capitalism.

*Piketty and Zucman only provide national income up through 2010, so I had to impute 2011-2012 from other data with a few relatively banal assumptions.

This happened yesterday:

[tweet exchange with Hendrickson; the embedded tweets are no longer displayed]

And you can read the rest of the conversation from there (it was actually quite civil), but for the purposes of this post, it brought me back to the Piketty Simulator I ginned up a little while back to test Piketty’s second law, and I expanded it. And what do you know – Hendrickson is roughly 50% right. And figuring out exactly why gets at the heart of Piketty’s project. Check it out:

Piketty Simulator

So if you open up the spreadsheet and play with it yourself – and you should! spreadsheets are fun! – you should know a few things. Firstly, continuing my stated opposition to grecoscriptocracy, I have changed Piketty’s alpha and beta, the capital share of national income and the long-run equilibrium capital/income ratio, to the Hebrew aleph (א) and bet (ב). I have also created a new variable of interest, which I assigned the Hebrew gimel (ג), which we’ll get to a bit later.

In the spreadsheet, you can set initial conditions of the following five variables – the initial levels of capital and national income, and Piketty’s r, s, and g – the return on capital, the savings rate, and the growth rate. The spreadsheet then tells you a few things, both over the course of three centuries (!) and the long-term equilibrium.

Firstly, it tells you א and ב. Secondly, assuming invariant wealth shares, it tells you the share of national income that goes to the "rentier class" for any given wealth share.

The other thing it tells you, which is key to the first part of this discussion, is ג, which can be best defined as the capital perpetuation rate; it is the percentage of the "r" produced by capital that needs to be saved in order to maintain the existing ב. It can be defined, and derived, in two ways. The first is g/r, which is intuitive; it can also be derived as s/א, which may be less intuitive but is also really important, because it shows both why Hendrickson is wrong and why he was right.

The key to Hendrickson’s point is that is really important to the inequality path. Which is correct! But the other point is that inequality can, and will, rise regardless of s so long as r>g and Piketty’s big assumption is true. More on the latter later, but play with both math the simulator first.

The math first – s/א is a clear way to derive ג: it's the ratio of the share of national income devoted to capital formation divided by the share of national income produced by existing capital. But if you decompose it (fun with algebra and spreadsheets in one post – I'm really hitting a home run here) you'll see that since א=r*ב and ב=s/g, you've got s in both the numerator and the denominator, and it cancels out. That's why I put both derivations of ג – ג and ג prime – in the spreadsheet; even though one is directly derived from the savings rate, you can change s all you want and ג remains stubbornly in place. Other things change, but not ג.
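The spreadsheet's core loop is easy to sanity-check in code. Here's a minimal re-implementation (my sketch of the mechanics, not Piketty's own formulas) that confirms both the long-run values and the invariance of ג to s:

```python
def simulate(r, s, g, K0=3.0, Y0=1.0, years=300):
    """Minimal capital-accumulation loop: each year, s of national income
    is saved into the capital stock while income grows at rate g."""
    K, Y = K0, Y0
    path = []                   # capital/income ratio (bet) year by year
    for _ in range(years + 1):
        path.append(K / Y)
        K += s * Y
        Y *= 1 + g
    beta_lr = s / g             # long-run capital/income ratio (second law)
    aleph_lr = r * beta_lr      # long-run capital share of income
    gimel = s / aleph_lr        # capital perpetuation rate, s/aleph = g/r
    return path, beta_lr, aleph_lr, gimel

# Change s all you want: gimel stays pinned at g/r while everything else moves.
path, beta_lr, aleph_lr, gimel = simulate(r=0.05, s=0.12, g=0.01)
```

With the post's default society (r=5%, s=12%, g=1%, starting capital 3 and income 1), a class owning 90% of wealth captures about 13.5% of income in year zero and still under 30% after fifty years, matching the glacial convergence discussed in the footnote below.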

This is important because it decomposes exactly what Piketty is getting at with his r>g inequality. Essentially, there are two different things going on. One is the perpetuability of capital; the other is the constraint on capital-driven inequality. As you change s in the spreadsheet, you'll see that the rentier share of national income changes accordingly as the long-run ב increases; you'll also notice that "rentier disposable income" changes accordingly. Hey, what's that? It's the amount of income left over to rentiers after they've not only not touched the principal but also reinvested enough to keep pace with growth.

And indeed, you’ll see if you change and g that as they get closer and closer, regardless of  how large the capital to income ratio is the rentiers need to plow more and more of their returns from capital into new investment to ensure their fortunes keep pace with the economy. Indeed, if r=g, then rentiers must reinvest 100% of their capital income or else inexorably fall behind the growth of the economy as a whole.

In summary, Piketty’s r>g is telling us whether the owners of substantial fortunes – think of them as “houses,” not individual people – can maintain or improve their privileged position relative to society as a whole ad infinitum. Given and gs tells us how privileged that position really is. Even with a 50% savings rate (!), if g = 4% while r = 5% then even though a rentier class that owns 90% (!) of national capital captures 56% of national income, they can only dispose of just over 11% of national income or else they will be slowly but surely swallowed into the masses. On the other hand, if s = 6%, fairly paltry, but g is only 1% relative to r‘s 5%, then rentiers only capture, initially, 22.5% of national income; but they can spend 18% and still maintain their position; if they spend just the 11% above, they can start increasing their already very privileged position (though this model doesn’t account for that).*

So Hendrickson is both right – you need to incorporate s to compute the long-run inequality equilibrium – and also wrong, in that, so long as we're not yet at that equilibrium, r>g can, and at the very least likely if not necessarily inevitably will, produce rising inequality. So while the share of national income that goes to creating new capital limits the ability of capitalists to increase their capital income to the point where it truly dominates society, so long as r > g, they not only need never fear losing their position, but can also, through careful wealth management and (defined very relatively) frugality, expand it over time, at least until they hit the limit defined by s.

But therein lies the rub. All these simulations, which echo Piketty's work**, operate from a central fundamental assumption that, if altered, can topple the entire model (both Piketty's and mine) – that r, s, and g are exogenous and independent. Now, Piketty himself doesn't exactly claim that, but he does claim (both in Capital and in some of his previous, more technical economic work) that, theoretically, there are many compelling models in which they largely move independently, especially within "reasonable" ranges; and that, in practice, these values have been fairly steady over time, with changes in their medium-to-long-term averages, to the extent they are interconnected, exhibiting sufficiently low elasticities that, for example, r (and therefore א) declines more slowly than ב increases, and therefore the dominance of capital increases. He derives this a little more technically in his appendix on pgs. 37-39 and discusses it in his book around pages 200-220; you can also check out this working paper showing how a production function with a constant elasticity of substitution > 1 can not only theoretically produce a model consonant with his projections but also match the trend in Western countries over the past few decades.

These assumptions in many ways cut deeply and sharply against a lot of different assumptions, theories, and models about the economy that many people hold to, advocate for, and that have a great deal of influence. And demonstrating conclusively or empirically how related they are can be maddeningly circular, as well as ripe territory for statistical arcana that most people don't understand and that, as Russ Roberts has pointedly noted, even those who do don't really find convincing. But fundamentally, if you believe that r, g, and s are sufficiently independent and exogenous, you can view income distribution as a largely zero-sum game set by systems that states can to a substantial degree alter without changing those values; but if you view them as connected in vital feedback loops, you may be loath to tax r for fear of depressing s and thereby depressing g; your game is negative-sum, not zero-sum. How you view this bedrock question, one hard to resolve conclusively through either theory or empirics, is going to determine a lot of what you take away from Capital.

*I’d love to create a model that shows variant rentier shares of national wealth and national income over time, but that’s not for this post, at least.

**One thing Piketty doesn't stress but this spreadsheet makes clear is just how long the processes Piketty describes take to play out. Given the default society I plugged into the spreadsheet – r=5%, s=12%, g=1%, C=3, NI=1 – a rentier class that owns 90% of total wealth, while projected to capture over half of national income in the long run, only captures ~14% initially; after 50 years, it is still capturing less than 30% of national income; and even after two centuries, it is still 6% of national income short of its long-run equilibrium, which is quite a bit. Obviously expecting fundamental aspects of society to be invariant for that long in our post-industrial world is probably very unrealistic, but it gives you a sense of the scale of the dynamics this book is grappling with.

Piketty cites two fundamental laws of capitalism. The first one is, truly, a law, and indeed, as he says, an accounting identity; but the second "law" is really more of a tendency. It suggests that the long-term path of the capital/income ratio (which he calls β but I'm going to call ב because we need notation diversity) is equivalent to the ratio of the trend savings rate and the trend growth rate (absolute, not per-capita, importantly). This is nothing Piketty doesn't mention, but it's worth stressing that this tendency can take a looooong time to manifest. A society beginning with a ב of 1 and a trend savings/growth ratio – and thus a long-term predicted ב – of 4 will take over a generation to build a ב of 3; the same society with a trend savings/growth ratio of 6 will take over 80 years to surpass a ב of 5.
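To put numbers on "looooong," here's a quick sketch that counts the years; the specific s and g values are my own picks for concreteness, since only the s/g ratios are fixed in the examples above:

```python
def years_until(beta0, s, g, target):
    """Years before the capital/income ratio first exceeds target,
    iterating K(t+1) = K(t) + s*Y(t) while income Y grows at rate g."""
    K, Y, t = beta0, 1.0, 0
    while K / Y <= target:
        K += s * Y
        Y *= 1 + g
        t += 1
    return t

# s/g = 4 (say s = 8%, g = 2%): starting from a ratio of 1, reaching 3.
slow_build = years_until(1.0, 0.08, 0.02, 3.0)
# s/g = 6 (say s = 12%, g = 2%): starting from a ratio of 1, surpassing 5.
slower_build = years_until(1.0, 0.12, 0.02, 5.0)
```

With g at 2%, the first society needs over half a century and the second more than eighty years, which is the scale of the convergence in the attached spreadsheet.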

Anyway, if you’re the kind of nerd I am, you’ll want to play around with the formulae too. Spreadsheet attached; enjoy.

capital 2nd law convergence

Piketty says something in a way that sounds like he takes it, as so many do, as axiomatic:

“…growth always includes a purely demographic component and a purely economic component, and only the latter allows for an improvement in the standard of living.”

The nit I’d like to pick with this received trusim is marginal relative to its broad accuracy, but is still worth noting – there are economies of scale to absolute population. These manifest in two interrelated ways – consumption and investment choices that are only “unlocked,” if it were, when total population crosses certain thresholds, and future per-capita-growth that results from past choices that were contingent upon absolute population.

I can illustrate these by giving three examples – one purely the former, one purely the latter, one a mix.

A purely “unlocked” choice would be a more specialized service that could not achieve scale relative to fixed costs without a large enough absolute population given a fixed share of population interested in the service. Think “shop that only sells customized meeples” or more conventionally “Latverian restaurant.” This doesn’t affect to the level of per-capita income or output, now or in the future, but improves living standards by providing a greater diversity of quality consumption options.

A purely future-oriented choice might be an aircraft carrier. Today, nobody benefits. But in the long term, if an aircraft carrier in the most optimistic framing maintains peace, security, and a stable order, this allows for greater per-capita growth (and fewer destabilizing interruptions) in the future, though in the present it registers as output that brings little utility to the public at large. Obviously two things must be noted – military investment does not always increase peace, security, and stability; and even assuming it does, there are many, much more cynical, interpretations of how military power projection leads to future per-capita growth for the projectors.

A mixed choice would be that noisy object of desire, a large subway system. It increases the consumption options and quality available to present individuals – lots of people prefer riding trains! – while also being an investment that raises long-term per-capita growth rates.

This is not the most important point in the world, but since Piketty made it I found it a good time to quibble with it.

 
