A great deal of the vital information that forms the backbone of the social sciences is collected through surveys. The problem with this is that most of the surveyors are academics, and therefore the surveyees they have ready access to are unrepresentative of the population at large. They are, to use a popular acronym, WEIRD – Western, Educated, Industrialized, Rich, and Democratic. Beyond that, college students tend to be unrepresentative of even the WEIRDos; they are the weirdest of all. Even if it is very hard to imagine surveying many non-WEIRDos at less-than-prohibitive cost, we should at the very least strive to find a way to make a broader cross-section of Americans available to social scientists, somehow.

What if I told you, then, that there is a place where millions of Americans from almost every stratum of America’s diverse socioeconomic fabric spend a tremendous amount of time just…waiting? Doing nothing? Simply sitting? That almost any engaging activity proposed to them would sound amazingly appealing right about now?

Well, there is such a place – the Department of Motor Vehicles. Americans rich and poor, old and young, of all colors and faiths spend hours just waiting to renew licenses or take exams. And it is during those long and painful waits that America’s social scientists should shake down that captive audience for all the input they can muster.

So my proposal is – DMVs should generally open their doors, free of charge, to any researcher from any accredited university who would like to conduct a survey of folks waiting at the DMV, obviously still contingent on the individual consent of each subject. I imagine plenty of people, otherwise bored out of their wits, would love to spend that time conversing with a human being, taking a test, whatever. There you go. Free idea, America.


A weekend thought: my father is the kind of guy who likes to come up with big monocausal theories to explain every little thing; he missed his calling as a columnist for a major newspaper. Anyway, last week we were chatting and he expounded on one of these theories, in this case a coherent and compelling narrative for the dramatic increase in dog ownership in recent years. The theory is unimportant (it had to do with a decline in aggregate nachas) but afterwards I decided for the heck of it to fact-check his theory. And what do you know? According to the AVMA’s pet census, dog ownership rates have declined, very slightly, from 2007 to 2012.

Now, I know why my dad thought otherwise – over the past few years, dogs have become fantastically more visible in the environments he inhabits, mainly urban and near-suburban NYC. I am certain that, compared to 5-10 years ago, many more dogs can be seen in public, more dog parks have emerged, and there are many more stores offering pet-related goods and services. But these changes are intertwined with substantial cultural and demographic shifts, and are decidedly not driven by a change in the absolute number of dogs or the dog-ownership rate.

It’s hard to prove things with data, even if you have a lot of really good data. There will always be multiple valid interpretations of the data, and even advanced statistical methods can be problematic and disputable, and hard to use to truly, conclusively prove a single interpretation. As Russ Roberts is fond of pointing out, it’s hard to name a single empirical econometric work that has conclusively resolved a dispute in the field of economics.

But what data can do is disprove things, often quite easily. Scott Winship will argue to the death that Piketty’s market-income data is not the best kind of data for understanding changes in income inequality, but what you can’t do is proclaim or expound a theory explaining a decrease in market-income inequality. This goes for a whole host of things – now that data is plentiful, accessible, and manipulable to a degree exponentially vaster than ever before in human history, it’s become that much harder to promote ideas contrary to data. This is the big hidden benefit of bigger, freer, better data – it may not conclusively prove things, but it can most certainly disprove them, and thereby help hone and focus our understanding of the world.

Of course, I’m well over halfway into writing my Big Important Thinkpiece about Capital in the 21st Century and the FT decides to throw a grenade. Smarter and more knowledgeable people than I have gone back and forth on the specific issues, and my sense aligns with the general consensus: there are genuine issues with some of the data, but the FT’s criticisms were at least somewhat overblown, and there is not nearly enough there to overturn the central empirical conclusions of Piketty’s work.

What strikes me most about this episode is just how unbelievably hard true data and methodological transparency is. The spreadsheet vs. statistical programming platform debate seems to me to be a red herring – at least as the paradigm stands, each has its uses, limitations, and common pitfalls, and for the kind of work Piketty was doing, which relied not on complex statistical methods but mostly on careful data aggregation and cleaning, a spreadsheet is probably as fine a tool as any.

The bigger issue is that current standards for data transparency, while certainly well-advanced by the power of the internet to make raw data freely available, are still sorely lacking. The real problem is that published data and code, while useful, are still the tip of a much larger methodological iceberg whose base, like a pyramid (because I mix metaphors like The Avalanches mix phat beats), extends much deeper and wider than the final work. If the published paper is the apex, the final dataset is still just a relatively thin layer beneath it, when what we care about is the base.

To operationalize this a little, let me pick an example that’s both a very good one and also one I happen to be quite familiar with, as I had to replicate and extend the paper for my Econometrics course. In 2008, Daron Acemoglu, Simon Johnson, James A. Robinson, and Pierre Yared wrote a paper entitled “Income and Democracy” for American Economic Review in which they claimed to have demonstrated empirically that there is no detectable causal relationship between levels of national income and democratic political development.

The paper is linked; the data, which are available at the AER’s website, are also attached to this post. I encourage you to download it and take a look for yourself, even if you’re far from an expert or even afraid of numbers altogether. You’ll notice, first and foremost, that it’s a spreadsheet. An Excel spreadsheet. It’s full of numbers. Additionally, the sheets have some text boxes. Those text boxes contain Stata code. If you copy and paste all the numbers into Stata, then paste in the corresponding code and run it, it will produce a bunch of results. Those results match the results published in the corresponding table in the paper. Congratulations! You, like me, have replicated a published work of complex empirical macroeconomics!

Except, of course, you haven’t done very much at all. You just replicated a series of purely algorithmic functions – you’re a Chinese room of sorts (as much as I loathe that metaphor). Most importantly, you didn’t replicate the process that led to the production of this spreadsheet full of numbers. In this instance, there are 16 different variables, each of which is drawn from a different source. To truly “replicate” the work done by AJR&Y you would have to go to each of those sources and cross-check each of the datapoints – of which there are many, because the unit of analysis is the country-year; their central panel alone, the 5-Year Panel, has 36,603 datapoints over 2321 different country-years. Many of these datapoints come from other papers – do you replicate those? And many of them required some kind of transformation between their source and their final form in the paper – that also has to be replicated. Additionally, two of those variables are wholly novel – the trade-weighted GDP index, as well as its secondary sibling, the trade-weighted democracy index. Producing those datapoints requires not merely transcription but computation. If, in the end, you were to superhumanly do all this, what would you do if you found some discrepancies? Is it author error? Author manipulation? Or your error? How would you know?

And none of this speaks to differences of methodological opinion – in assembling even seemingly simple data, judgment calls must be made about how the data will be computed and represented. There are also higher-level judgment calls – what is a country? Which should be included and excluded? In my own extension of their work, I added a new variable to their dataset, and much the same questions apply – were I to simply hand you my augmented data, you would have no way of knowing precisely how or why I computed that variable. And we haven’t even reached the most meaningful questions – most centrally, are these data or these statistical methods the right tools to answer the questions the authors raise? In this particular case, while there is much to admire about their work, I have my doubts – but even moving on to address those doubts involves some throwing up of hands in the face of the enormity of their dataset. We are essentially forced to say “assume data methodology correct.”

Piketty’s data, in their own way, go well beyond a spreadsheet full of numbers – there were nested workbooks, with the final data actually being formulae that referred back to preceding sources of rawer data that were transformed into the variables of Piketty’s interest. Piketty also included other raw data sources in his repository even when they were not linked programmatically to the spreadsheets. This is extremely transparent, but it still leaves key questions unanswered – some “what” and “how” questions, but also “why” questions – why did you do it this way vs. that way? Why did you use this expression to transform this data into that variable? Why did you make this exception to that rule? Why did you prioritize different data points in different years? A dataset as large and complex as Piketty’s is going to have hundreds, even thousands, of individual instances where these questions can be raised, with no systematic way of providing answers other than having the author manually address them as they arise.

This is, of course, woefully inefficient, and it also creates some perverse incentives. If Piketty had provided no transparency at all, well, that would have been what every author of every book did going back centuries until very, very recently. In today’s context it may have seemed odd, but it is what it is. If he had been less transparent – say, by releasing simpler spreadsheets with inert results rather than transparent formulae calling on a broader set of data – it would have been harder, not easier, for the FT to interrogate his methods and choices; that “why did he add 2 to that variable” question, for example, would have been invisible. The FT had the privilege of being able to do at least some deconstruction of Piketty’s data, as opposed to reconstruction, the latter of which can leave the reasons for discrepancies substantially more ambiguous than the former. As it currently stands, a high level of attention on your research has the nasty side effect of drawing attention to transparent data but opaque methods – methods that, while in all likelihood at least as defensible as any other choice, are extremely hard under the status quo to present and defend systematically against aggressive inquisition.

The kicker, of course, is that Piketty’s data is coming under exceptional, extraordinary, above-and-beyond scrutiny – how many works that are merely “important” but not “seminal” never undergo even the most basic attempts at replication? How many papers are published in which nobody even plugs in the data and the code and cross-checks the tables – forget about checking the methodology undergirding the underlying data! And these are problems that relate, at least somewhat, to publicly available and verifiable datasets, like national accounts and demographics. What about data on more obscure subjects with only a single, difficult-to-verify source? Or data produced directly by the researchers?

In discussing this on Twitter, I advocated for the creation of a unified data platform which would not only allow users to merge the functions of, and toggle between, spreadsheet and statistical programming GUIs and capabilities, but also create a running, annotatable log of a user’s choices, not merely static input and output. Such a platform could produce a user-friendly log that could either be read in a common format (html, pdf, doc, epub, mobi) or uploaded by a user in a packaged file with the data and code to actually replicate, from the very beginning, how a researcher took raw input and created a dataset, as well as how they analyzed that dataset to draw conclusions. I’m afraid that without such a system, or some other way of making not only data, but start-to-finish methodologies, transparent, accessible, and replicable, increased transparency may end up paradoxically eroding trust in social science (not to mention the hard sciences) rather than buttressing it.
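To make that less abstract, here is a minimal sketch of what the logging half of such a platform might look like – the names and structure are entirely hypothetical, with Python standing in for whatever the real thing would use:

```python
import json
from datetime import datetime, timezone

class ProvenanceLog:
    """Records every transformation applied to a dataset, with the author's rationale."""

    def __init__(self, path="provenance_log.json"):
        self.path = path
        self.entries = []

    def step(self, what, why, **params):
        # Each judgment call is logged as a structured record: what was done,
        # why it was done, and any parameters involved.
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "what": what,
            "why": why,
            "params": params,
        })

    def export(self):
        # The exported log travels with the data and code, so a replicator sees
        # not just inputs and outputs but the reasoning between them.
        with open(self.path, "w") as f:
            json.dump(self.entries, f, indent=2)

log = ProvenanceLog()
log.step("Interpolated missing 1955 observation linearly",
         why="No source data for 1955; adjacent years are stable.",
         method="linear")
log.step("Added 2 to an 1908 wealth estimate",
         why="Reconciles one source's exclusion of foreign holdings.")
log.export()
```

Every “why did you do it this way” question would then have a queryable answer attached to the exact step where the choice was made.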

Attachments: Income and Democracy Data AER; adjustment_DATA_SET_AER.v98.n3June2008.p808 (1); AER Readme File

To start, I’m just going to put this right here:

Slade’s piece is, in essence, a defense of conservative anti-poverty policy as expressed through a critique of progressive/social democratic anti-poverty policy and its critique of conservative anti-poverty policy. I made it sound confusing, when actually it wasn’t – Slade focuses on defending conservative anti-poverty policy by explaining that the conservative counterfactual to the status quo is not the current pre-transfer distribution of income but instead a larger pie and fewer barriers to work and entrepreneurship.

As Slade suspects, I disagree, though obviously there is some overlap between Slade’s conservatism, which is definitely libertarian-flavored, and my own preferences – more immigration, less war-on-drugs, less occupational licensing, etc. What I want to dig into, though, is more Counterfactuals 202, and for that I want to home in on this part of Slade’s piece:

It’s important to realize here that standing against a certain policy proposal is not the same as standing with the status quo. When right-of-center reformers say Obamacare is a bad law, they’re not endorsing the health care system that was in place immediately before its passage. Similarly, when conservatives and libertarians question the wisdom of the “war on poverty,” we are not putting a stamp of approval on the levels of poverty that existed 50 years ago, or on the ones that remain today. Our position isn’t that poverty does not matter. We just recognize the chosen prescription has turned out to be a poor one.

The Obamacare example in particular is a productive one to discuss, since it’s much narrower in scope – a single, well-defined, recent reform package as opposed to half a century of a broad philosophy of governance – and because the issue of Obamacare and counterfactuals cuts both ways.

You may recall that, in addition to the larger and more-vocal right-wing opposition movement to Obamacare, there was a smaller but no less vocal or strident progressive opposition movement, perhaps best epitomized by Marcy Wheeler dubbing the Senate bill “neo-feudalism.” While I don’t agree with that perspective, I am not wholly sanguine about Obamacare – firstly because of specific problems and drawbacks in the act as written and enacted, but also because I would vastly prefer adopting a single-payer or even nationalized system – universal coverage, better outcomes, and a trillion dollars a year? Yes, please.

But I supported and advocated for the passage of Obamacare. Why? Well, I could have, like Slade, simply stated “I prefer my counterfactual to Obamacare; ergo, oppose” and moved on; but instead, I looked at my preferred counterfactual probabilistically – what are the likely actual counterfactuals to Obamacare? Do I support those more than Obamacare? And the answer to that question was “absolutely not” – I would much rather, even through a kludge too friendly to industry, expand coverage to the uninsured and experiment with serious cost-control reforms than leave the status quo in place indefinitely, which was the overwhelmingly likely actual counterfactual in the case of Obamacare.

I’m not certain how much this applies to the conservative counterfactual case; it’s entirely possible that many conservatives genuinely believe that Obamacare is a net negative development. I would argue, though, that conservatives passed up a tremendous amount of leverage in shaping Obamacare, leverage which Democratic leaders from the President on down would have gladly exchanged for political buy-in. So the conservative counterfactual in the case of Obamacare should be something more like “knowing that Obamacare would be enacted and Obama would be reelected, should we have played ball with the inevitable and shaped it more to our liking rather than digging in to indefinite total opposition?” An interesting question.

And while this logic is, as I said above, much harder to apply to the overall war on poverty, it’s not impossible. A point I always try to stress to conservatives is that the opposite of welfare-state social democracy is not conservatism; it’s Communism. The modern welfare states of Western Europe and the United States fundamentally emerged as the capitalist response to the then-seemingly-inexorable growth of Communist power. “We can have our cake of economic growth and individual freedom and eat social justice too,” was the message, to totally dismember the metaphor.

I’m not certain, from reading Slade’s piece, exactly how the contours of her conservative counterfactual to welfare-state social democracy differ from the policy status quo of the Eisenhower Administration. But I will ask her to think a little harder on Michael Lind’s question of “why are there no libertarian countries?” and to consider not just the idea of a preferred counterfactual, but the odds of that counterfactual coming to pass, and coming to pass in the way you imagine it, and working out the way you think it might. Which is not to say that principle should always be sacrificed on the altar of hyper-realistic incrementalism; just that realism in the realm of political economy has as much to say to ideological priors as vice versa.

I’ve been working on collecting some longer thoughts on Piketty’s book now that I’ve finished it (so yes, keep your eyes open for that) and in the meantime I’ve been having fun/getting distracted by playing around with his data, and especially the data from his paper with Gabriel Zucman, which, you know, read, then play too.

One thing I realized as I was going through is that Piketty and Zucman may have incidentally provided a new route to answer an old question – were America to at last make reparations for the vast and terrible evil of slavery, how much would or should the total be?

What is that route? Well, they provide certain annual estimates of the aggregate market value of all slaves in the United States from 1770 through abolition:

[Chart: Piketty and Zucman’s annual estimates of the aggregate market value of slaves in the United States, 1770 through abolition]

As you can see, the amount was persistently and stunningly high right through abolition.

Now, without wading too much into – heck, who am I kidding – diving headfirst into the endlessly-resurrected Cambridge Capital Controversy, the price of capital is determined in large part by the income it generates; so the market value of an enslaved person was an implicit statement about the expected income that slaveholders would receive from the forced labor of their prisoners. So we can (by imputing the intervening annual values in their time-series, which I did linearly, which may not be a great assumption, but it’s what I did, so there it is) compute the real aggregate dollar market value of slaves from 1776-1860, then impute the annual income produced by, and stolen from, America’s slaves. For that, I used 4%, the conservative end of Piketty’s 4-5% range.
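To make the mechanics concrete, here is a minimal sketch of that imputation – the anchor values below are made-up placeholders, not Piketty and Zucman’s actual estimates:

```python
import numpy as np

# Hypothetical anchor estimates: (year, aggregate market value of slaves,
# in billions of 2010 dollars). Substitute Piketty and Zucman's actual series.
anchors = [(1776, 1.5), (1800, 3.0), (1830, 6.0), (1860, 13.0)]

years = np.arange(1776, 1861)
anchor_years, anchor_values = zip(*anchors)

# Linear interpolation between the anchor years, as described above.
market_value = np.interp(years, anchor_years, anchor_values)

# Impute the annual income stolen from slaves at a 4% return on "capital,"
# the conservative end of Piketty's 4-5% range.
stolen_income = 0.04 * market_value
```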

Then you have two more steps: firstly, you have to select a discount rate in order to compute the present value of the total of that income on the eve of the Civil War in 1860; then you have to select a discount rate to compound that through 2014.

Well, that’s where things get interesting. For now, let’s pick 1% for both of those discount rates (which I am doing for a reason, as you will see). That makes the value in 1860 of all the income stolen by the Slave Power since the Declaration of Independence said liberty was inalienable roughly $378 billion*. That $378 billion, compounded at 1% annually for 154 years, is worth just about $1.75 trillion.

But those discount rates are both low – really, really low, in fact: lower than the rate of economic growth, lower than the rate of return on capital, and lower than the discount rate used by the government. When you increase those discount rates, though, you start to get some very, very, very large numbers. If you increase just the pre-1860 discount rate to 4%, for example, the 1860 figure leaps to over a trillion dollars, which even at a discount rate of 1% thereafter still comes to well over four-and-a-half trillion dollars today. Even vaster is the increase that comes from raising the post-1860 rate, even if you leave the pre-1860 rate at 1%. At 2%, today’s bill comes due at just under $8 trillion; at 3%, $35 trillion; at the government’s rate of 7%, it comes to over $12.5 quadrillion. That’s over six times the entire income of the planet since 1950 – a number that, even if we concluded it was just (and given the incalculable and incomparable horror of slavery as practiced in the antebellum United States, it’s difficult to say any amount of material reparation is adequately just), is in practice impossible to pay.
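The compounding arithmetic itself is easy to check; a sketch, taking the $378 billion 1860 figure from above as given:

```python
def compound_to_2014(value_1860, rate, years=154):
    """Compound an 1860 present value forward to 2014 at a constant annual rate."""
    return value_1860 * (1 + rate) ** years

V_1860 = 378e9  # 1860 present value of stolen income, at a 1% pre-1860 discount rate

for rate in (0.01, 0.02, 0.03, 0.07):
    print(f"{rate:.0%}: ${compound_to_2014(V_1860, rate) / 1e12:,.1f} trillion")
# Roughly $1.75 trillion, $8 trillion, $35 trillion, and over $12,000 trillion
# (i.e., $12+ quadrillion) -- matching the figures in the text.
```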

There are three conclusions I think are worth considering from the above analysis:

1) First and foremost, slavery was a crime beyond comparison or comprehension, compounded since by our collective failure not only to make right the crime as best we are able, but even to make the attempt (not to mention Jim Crow and all the related evils it encompasses).

2) Compound interest is a powerful force. Mathematically, obviously; but also morally. These large numbers my spreadsheet is producing are not neutral exercises – they are telling us something not only about the magnitude of the grave injustice of slavery but also the injustice of failing, year after year, to begin to pay down our massive debt to those whose exploitation and suffering was our economic backbone. And that only refers to our material debt; our moral debt, although never fully repayable, grows in the absence of substantive recognition (or the presence of regressive anti-recognition).

3) Discount rates tell us a lot about how we see our relation to our past and our future. The Stern Review, the widely-discussed report that recommended relatively large and rapid reductions in carbon emissions, became notable in good part because it triggered a debate about the proper discount rate we should use in assessing the costs and benefits of climate change policy. Bill Nordhaus, hardly a squish on the issue, notably took the report to task for using a very low discount rate – effectively, just over 1% on average.

It is hard to crystallize precisely the panoply of philosophical implications of how discount rates interact differently with different kinds of problems. In the case of climate change, a low discount rate implies that we today should place a relatively higher value on the costs future generations will suffer as a consequence of our activity, sufficiently high that we should be willing to bear large costs to forestall them. Commensurately, however, a low discount rate also implies a lower sensitivity to the costs borne by past generations, relative to the benefits received today. High discount rates, of course, imply the inverse in both situations – a greater sensitivity to the burden of present costs on future generations and the burden of past costs on present generations.

There is no consensus – and that is putting it lightly – over what discount rates are appropriate for what situations and analyses, or whether discount rates are even appropriate at all. And when we decide how to approach policies whose hands stretch deeply into our past or future, it is worth considering what these choices, superficially dry and mathematical, say not just about inputs and outputs, but also about the nature of our relationship to the generations that preceded us and those that will follow.

Data attached:

piketty slave reparations

*2010 dollars throughout.

So late last year Matt Yglesias found a simple and concise way to create a good-enough estimate of the value of all privately-held American land, using the Fed’s Z1. He did not, however, go on to take the most obvious next step, which was to use FRED to compile all the relevant series and calculate the entire time-series.
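The recipe, as a sketch – pandas_datareader pulling from FRED, with placeholder series IDs standing in for the actual Z.1 series, which you’d have to look up yourself:

```python
import pandas_datareader.data as web

# Placeholder FRED series IDs -- substitute the actual Z.1 series for the
# market value of real estate and the replacement cost of structures.
REAL_ESTATE_ID = "HOUSEHOLD_REAL_ESTATE_VALUE"   # hypothetical
STRUCTURES_ID = "STRUCTURES_REPLACEMENT_COST"    # hypothetical
DEFLATOR_ID = "GDPDEF"                           # GDP deflator (a real FRED series)

start = "1951-01-01"
real_estate = web.DataReader(REAL_ESTATE_ID, "fred", start).iloc[:, 0]
structures = web.DataReader(STRUCTURES_ID, "fred", start).iloc[:, 0]
deflator = web.DataReader(DEFLATOR_ID, "fred", start).iloc[:, 0]

# The trick: land value ~= market value of real estate minus the
# replacement cost of the structures sitting on it.
land_nominal = real_estate - structures
land_real = land_nominal * (deflator.iloc[-1] / deflator)  # in present dollars
```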

I have taken that bold step. Behold – the real value in present dollars of all privately held American land since FY 1951:

[Chart: real value, in present dollars, of all privately held American land since FY 1951]

Oh, look – a housing bubble!

But because this is the Age of Piketty, why stop there? Thanks to the magic of the internet and spreadsheets, all of the data Piketty relied on in his book is freely available – and perhaps even more importantly, so is all the data Piketty and Zucman compiled in writing “Capital is Back,” which may be even more comprehensive and interesting. So using that data, I was able to calculate land as a share of national income from 1950-2012. Check it out*:

[Chart: land as a share of national income, 1950-2012]

Oh look – a housing bubble!

And why stop there? We know from reading our Piketty that the capital-to-income ratio increased substantially during that time, so let’s calculate the land share of national capital:

[Chart: land as a share of national capital, 1950-2012]

Oh look – a…two housing bubbles?

It’s hard to know what to make of this at first glance. After two decades of steadily comprising a quarter of national capital, land grew over the following two decades to nearly a third of it; then, after a steep drop to under a fifth of national capital in less than a decade, it rebounded just about as quickly, then plummeted even faster to under a fifth again.

So the question must be asked – why didn’t we notice the first real estate bubble, just as large (though not as rapidly inflated) as the second? There are two answers.

The first answer is – we did! Read this piece from 1990 – 1990! – about the “emotional toll” of the collapse in housing prices. Or all the other amazing pieces from the New York Times archive documenting the ’80s housing bubble and the collapse in prices at the turn of the ’90s.

The second answer is – to the extent we didn’t, or didn’t really remember it, it’s because it didn’t torch the global financial system. Which clarifies a very important fact about what happened to the American economy in the late aughties – what happened involved a housing bubble, but wasn’t fundamentally about or caused by a housing bubble.

For context, here’s the homeownership rate for the United States:

[Chart: U.S. homeownership rate]

The ’00s housing bubble clearly involved bringing a lot of people into homeownership in a way the ’80s bubble did not; that earlier bubble, in fact, peaked even as homeownership rates declined.

There are a lot of lessons to learn about the ’00s bubble – about debt and leverage and fraud and inequality – but the lesson not to learn, or perhaps to unlearn, is that a bubble and its eventual popping, regardless of the asset under consideration, is a sufficient condition for a broader economic calamity. Now, it does seem clear that the ’80s housing bubble was in key ways simply smaller in magnitude than the later one; it represented a 50% increase in land as a ratio to national income rather than the doubling experienced in the aughties, even though both saw land increase similarly relative to capital. But there have been – and, no matter the stance of regulatory or, shudder, monetary policy, will continue to be – bubbles in capitalist economies. The policy goal we should be interested in is not preventing bubbles but building economic structures and institutions that are resilient to that fact of life in financialized post-industrial capitalism.

*Piketty and Zucman only provide national income up through 2010, so I had to impute 2011-2012 from other data with a few relatively banal assumptions.

The “sharing economy” is the new big idea:


And like all new big ideas, it has generated its share of consternation, some justified, much not, and all of it a little confused. For me the crux came together a little better after reading Emily Badger’s piece from a few weeks ago discussing some of the challenges in integrating “sharing economy” services into the existing regulatory framework and Daniel Rothschild’s piece on how sharing economy firms empower individuals to operationalize their erstwhile “dead capital.”

The thing I realized, really, is that we already know exactly what these firms do. They leverage economies of scale to provide a regulated and standardized forum to connect buyers and sellers. They are, for all intents and purposes, exchanges.

Financial exchanges have existed for a long time. In some form they may have existed in Rome; they have definitely existed since the late Middle Ages. Financial products are uniquely well-suited to exchanges: economies of scale are high, since trading pure institutional claims is low-cost and exchange volumes and aggregate values are high; regulation and standardization are both fairly simple, since intra-product shares are almost always equivalent. The thing about exchanges, though, is that until recently, creating large, standardized, regulated exchanges for more heterogeneous asset classes and relatively small consumers and sellers was a logistically monumental project with highly uncertain returns.

The internet, obviously, changed that – and in the long-run, we may see the rise of Internet commercial fora as more important than information fora. eBay has been around for a while, as has Amazon (which hosts other sellers as well as sells directly); and what is Craigslist if not an exchange? The internet also, of course, allowed smaller investors to engage in traditional exchanges.

The sharing economy, then, is basically just the rise of service and rental exchanges. You have a car, I want a ride. You have a house, I need a room. I have an idea, you want to take a flyer on buying it before it’s complete. I need a task done, and you are a rabbit. In the end, it’s just about new kinds of exchanges arising, from a combination of the accelerating technology and penetration of the internet as well as just plain old creativity. Nobody is sharing anything. People are exchanging goods and services. They’re just doing more exchange, and more different kinds of exchange, than they could previously.

And while the article I linked to above highlights some of the potential knock-on effects of this change, the fundamental arc is towards democratization and empowerment. Students need income between classes and studying, so they drive Ubers; homeowners have a room to spare, so they rent out their room to tourists and students. The losers are the people who “made” markets before exchanges could make them, who provided the centralizing, organizing force and reaped the benefits of owning capital – taxi medallion owners, hoteliers. If people stay in Airbnb rooms when they otherwise would have stayed in hotels, that shifts income from billionaires to middle class people and on the margins unlocks valuable land for more productive uses; if it incentivizes new trips, well, it increased human happiness!

But more to the point, when we think about how to regulate these things (and regulate we can and should), we need to consider the costs, benefits, and distributional impacts of the regulatory scheme; but even before that, we need to conceptualize what we are regulating and why. We need to think about what kind of regulatory burden should fall on the exchange vs. the participants. Perhaps the kind of certification a hotel or even a traditional B&B receives is not what an Airbnb rentier should require; but if cities streamline the process of certifying Airbnb rooms, should Airbnb then accept more responsibility for enforcing those rules? Is Uber liable when a driver’s error causes harm, even when there is no fare in the back? What should Kickstarter be doing to limit potential catastrophic project failure – if anything? Thinking about these firms as regulated exchanges, as spaces for others to buy and rent the goods and services of others, will give us a clearer idea of how to conceive of them, as well as how to approach the question of “normalizing” them. Especially as international exchanges grow in scope, it’s something we need to think more about. And perhaps it will lead us to think about how we should be regulating the exchanges that already exist.

***

On a personal note, those out there reading this thing may have noticed that, over the last month, there hasn’t been much to read. There was a good reason for that – life has been totally bonkers. In a good way, but bonkers nonetheless. However, the causes of that bonkerdom are rapidly drawing to a close, first and foremost my education, which is, for all intents and purposes, complete – after two very meaningful but not functionally high-stakes presentations, one this week and one the next, I will officially be a Master of Public Policy; my 22-month roller-coaster of being a full-time worker and a full-time student (not to mention a full-time spouse, full-time homeowner, and full-time hound-parent) is, basically, at an end. Amen.

Therefore, all the things to which I have been unable to devote a sufficient portion of my CPU and RAM for want of time are suddenly, amazingly possible – and while, sorry readers, my wife is more important than you, and I do have many other wonderful friends and delightful hobbies, it really does mean this blog can take a good deal more of my efforts than it has in a while. Commensurately, expect more posts – and more than that, as I have plans not only to kick this blog up a notch but to expand both its depth and its scope. Big promises from a guy who still hasn’t put out his Best Albums of 2013 list (it’s May, I am ashamed), but promises I plan to live up to. Stay tuned.

This happened yesterday:

Then this:

And you can read the rest of the conversation from there (it was actually quite civil), but for the purposes of this post, it brought me back to the Piketty Simulator I ginned up a little while back to test Piketty’s second law, and I expanded it. And what do you know – Hendrickson is roughly 50% right. And figuring out exactly why gets at the heart of Piketty’s project. Check it out:

Piketty Simulator

So if you open up the spreadsheet and play with it yourself – and you should! spreadsheets are fun! – you should know a few things. Firstly, continuing my stated opposition to grecoscriptocracy, I have changed Piketty’s alpha and beta, the capital share of national income and the long-run equilibrium capital/income ratio, to the Hebrew aleph (א) and bet (ב). I have also created a new variable of interest, which I assigned the Hebrew gimel (ג), which we’ll get to a bit later.

In the spreadsheet, you can set initial conditions of the following five variables – the initial levels of capital and national income, and Piketty’s r, s, and g – the return on capital, the savings rate, and the growth rate. The spreadsheet then tells you a few things, both over the course of three centuries (!) and the long-term equilibrium.

Firstly, it tells you א and ב. Secondly, assuming invariant wealth shares, it tells you the share of national income that goes to the “rentier class” for any given wealth share.

The other thing it tells you, which is key to the first part of this discussion, is ג, which can best be defined as the capital perpetuation rate; it is the percentage of the “r” produced by capital that needs to be saved in order to maintain the existing ב. It can be defined, and derived, in two ways. The first is g/r, which is intuitive; it can also be derived as s/א, which may be less intuitive but is also really important. Because it shows both why Hendrickson is wrong and why he was right.

The key to Hendrickson’s point is that s is really important to the inequality path. Which is correct! But the other point is that inequality can, and will, rise regardless of s so long as r>g and Piketty’s big assumption is true. More on the latter later, but play with both the math and the simulator first.

The math first – s/א is a clear way to derive ג: it’s the ratio of the share of national income devoted to capital formation to the share of national income produced by existing capital. But if you decompose it (fun with algebra and spreadsheets in one post – I’m really hitting a home run here) you’ll see that since א=r*ב and ב=s/g, you’ve got s in both the numerator and the denominator, and it cancels out. That’s why I put both derivations of ג – ג and ג prime – in the spreadsheet; even though one is directly derived from the savings rate, you can change s all you want and ג remains stubbornly in place. Other things change, but not ג.
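Spelled out in the spreadsheet’s own notation, the cancellation works like this:

ג = s/א = s/(r·ב) = s/(r·(s/g)) = g/r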

This is important because it decomposes exactly what Piketty is getting at with his r>g inequality. Essentially, there are two different things going on. One is the perpetuability of capital; the other is the constraint on capital-driven inequality. As you change s in the spreadsheet, you’ll see that the rentier share of national income changes accordingly as the long-run ב increases; you’ll also notice that “rentier disposable income” changes accordingly. Hey, what’s that? It’s the amount of income left over to rentiers after they’ve not only not touched the principal but also reinvested enough to keep pace with growth.

And indeed, you’ll see, if you change r and g, that as they get closer and closer, then regardless of how large the capital-to-income ratio is, the rentiers need to plow more and more of their returns from capital into new investment to ensure their fortunes keep pace with the economy. Indeed, if r=g, then rentiers must reinvest 100% of their capital income or else inexorably fall behind the growth of the economy as a whole.

In summary, Piketty’s r>g is telling us whether the owners of substantial fortunes – think of them as “houses,” not individual people – can maintain or improve their privileged position relative to society as a whole ad infinitum. Given r and g, s tells us how privileged that position really is. Even with a 50% savings rate (!), if g = 4% while r = 5%, then even though a rentier class that owns 90% (!) of national capital captures 56% of national income, they can only dispose of just over 11% of national income or else be slowly but surely swallowed into the masses. On the other hand, if s = 6%, fairly paltry, but g is only 1% relative to r‘s 5%, then rentiers only capture, initially, 22.5% of national income; but they can spend 18% and still maintain their position; and if they spend just the 11% above, they can start increasing their already very privileged position (though this model doesn’t account for that).*
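For those who would rather read code than a spreadsheet, here’s a minimal sketch of the same simulator in Python – my function names, but the spreadsheet’s formulas:

```python
def simulate(r, s, g, K0, Y0, wealth_share, years=300):
    """Iterate the accumulation dynamics: income grows at g, capital grows by s*Y."""
    K, Y = K0, Y0
    for _ in range(years):
        K += s * Y
        Y *= 1 + g
    bet = K / Y                  # capital/income ratio; converges to s/g
    aleph = r * bet              # capital share of national income
    gimel = g / r                # share of capital income that must be reinvested
    rentier_share = wealth_share * aleph
    rentier_disposable = rentier_share * (1 - gimel)
    return bet, aleph, gimel, rentier_share, rentier_disposable

# First scenario above: r=5%, s=50%, g=4%, rentiers own 90% of capital.
# Long run: bet=12.5, aleph=62.5%, rentiers capture ~56% of national income
# but can dispose of only ~11% of it without losing ground (gimel = 0.8).
print(simulate(r=0.05, s=0.50, g=0.04, K0=3, Y0=1, wealth_share=0.9))

# Second scenario: r=5%, s=6%, g=1%. Rentiers capture a smaller share in the
# long run, but need reinvest only 20% of it (gimel = g/r = 0.2).
print(simulate(r=0.05, s=0.06, g=0.01, K0=3, Y0=1, wealth_share=0.9))
```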

So Hendrickson is both right – you need to incorporate s to compute the long-run inequality equilibrium – and wrong, in that, so long as we’re not yet at that equilibrium, r>g can, and at the very least likely if not necessarily inevitably will, produce rising inequality. So while the share of national income that goes to creating new capital limits the ability of capitalists to increase their capital income to the point where it truly dominates society, so long as r > g they need never fear losing their position, and can, through careful wealth management and (very relatively defined) frugality, expand it over time, at least until they hit the limit defined by s.

But therein lies the rub. All these simulations, which echo Piketty’s work**, operate from a central, fundamental assumption that, if altered, can topple the entire model (both Piketty’s and mine) – that r, s, and g are exogenous and independent. Now, Piketty himself doesn’t exactly claim that, but he does claim (both in Capital and in some of his previous, more technical economic work) that theoretically there are many compelling models in which they largely move independently, especially within “reasonable” ranges; that in practice these values have been fairly steady over time; and that changes in their medium-to-long-term averages, to the extent they are interconnected, have sufficiently low elasticities that, for example, r declines more slowly than ב increases, and therefore א – and the dominance of capital – increases. He derives this a little more technically in his appendix on pgs. 37-39, and discusses it in his book around pages 200-220; you can also check out this working paper to see how a production function with a constant elasticity of substitution > 1 can not only theoretically produce a model consonant with his projections but also match the trend in Western countries over the past few decades.
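For reference, the production function at issue – my transcription of the standard CES form, not Piketty’s exact notation – is:

Y = [a·K^((σ−1)/σ) + (1−a)·L^((σ−1)/σ)]^(σ/(σ−1))

With elasticity of substitution σ > 1, the return on capital falls more slowly than ב rises as capital accumulates, so the capital share א = r·ב rises rather than falls.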

These assumptions in many ways cut deeply and sharply against a lot of different assumptions, theories, and models about the economy that many people hold to, advocate for, and derive a great deal of influence from. And demonstrating conclusively or empirically how related they are can be maddeningly circular, and also ripe territory for statistical arcana that most people don’t understand and that, as Russ Roberts has pointedly noted, even those who do don’t really find convincing. But fundamentally, if you believe that r, g, and s are sufficiently independent and exogenous, you can view income distribution as a largely zero-sum game set by systems that states can to a substantial degree alter without changing those values; but if you view them as connected in vital feedback loops, you may be loath to tax r for fear of depressing s and thereby depressing g; your game is negative-sum, not zero. How you view this bedrock question, a question hard to resolve conclusively through either theory or empirics, is going to determine a lot of what you take away from Capital.

*I’d love to create a model that shows variant rentier shares of national wealth and national income over time, but that’s not for this post, at least.

**One thing Piketty doesn’t stress but this spreadsheet makes clear is just how long the processes Piketty describes take to play out. Given the default society I plugged into the spreadsheet – r=5%, s=12%, g=1%, C=3, NI=1 – a rentier class that owns 90% of total wealth, while projected to capture over half of national income in the long run, only captures ~14% initially; after 50 years, it is still capturing less than 30% of national income; and even after two centuries, it is still 6% of national income short of its long-run equilibrium, which is quite a bit. Obviously expecting fundamental aspects of society to be invariant for that long in our post-industrial world is probably very unrealistic, but it gives you a sense of the scale of the dynamics this book is grappling with.

So Tom Kludt, totally inadvertently, laid down a challenge yesterday afternoon:


Faced with a challenge, I responded the only way I knew how – comprehensively and a little cantankerously:

[Chart: percentage of NFL wide receivers wearing jersey numbers 10-19, by season]

As you can see, the chart in many ways speaks for itself – after sitting at 2-6% of wideouts rocking a 1X on their back from ’99-’03, the percentage has skyrocketed by nearly 20%, or 5pp, each year since, hitting nearly 60% in the most recent season. And if you weight it by games played (I’m counting the playoffs), the picture is similar if not even starker:

[Chart: games-played-weighted percentage of NFL wide receivers wearing jersey numbers 10-19, by season]

Fewer than 3% of WR-games are played by guys sporting teen numbers until ’04, when the figure leaps to 9% and climbs at the same ~20%/5pp rate.

And, of course, Kludt is right: Keyshawn is Patient Zero, being the only wideout to play a full season before 2004 with a 1X jersey.

What’s a little curious is that the quality of receivers wearing a 1X jersey is lagging behind the quantity. Below is a graph (since ’04 so we’re not just looking at KJ) of the percentage of total games played, catches made, yards acquired, and TDs caught by guys wearing a teen jersey:

[Chart: share of games played, catches, yards, and TDs by receivers in 1X jerseys, 2004 on]

So without getting too much into measuring football player quality, one would imagine that, if quality were equal, the share of games played would be exactly equal to the share of each of the other major stats – but that’s not happening. With the exception of ’09, and then only for TDs, in every year WRs with 1X jerseys rack up slightly but consistently fewer catches, yards, and scores than their 8X counterparts relative to the number of games they’re playing. Why this would be, I couldn’t say, but there you have it. So when you’re drafting your fantasy team this year and you have no other way of deciding between two equal wideouts in the late rounds, go with the traditionalist.
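For the curious, a sketch of the underlying computation – assuming a hypothetical tidy table of WR seasons with jersey numbers and stats, not the format of any particular public dataset:

```python
import pandas as pd

# Hypothetical input: one row per WR per season, with columns
# season, player, jersey, games, catches, yards, tds.
df = pd.read_csv("wr_seasons.csv")

df["teens"] = df["jersey"].between(10, 19)

# For each season, the 1X group's share of games vs. its share of production.
shares = (
    df.groupby(["season", "teens"])[["games", "catches", "yards", "tds"]]
      .sum()
      .groupby(level="season")
      .transform(lambda col: col / col.sum())  # within-season shares
      .xs(True, level="teens")
)

# If 1X receivers pulled their weight, the catches/yards/tds columns would
# match the games column each season; per the chart, they run slightly behind.
print(shares)
```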


Among the various fallout of Facebook’s odd consumption of Oculus is the growing notion that the various individuals who provided Oculus with ~$2.4mm in funding via Kickstarter – in exchange for, well, stuff, but importantly, not equity, equity that presumably would have returned perhaps an order of magnitude more than its cost following Facebook’s purchase – were somehow wronged. The Times has the summary; Gamespot has the longer and more thoughtful musing; and Barry Ritholtz has the primal scream.

I’m going to go ahead and acknowledge #slatepitch here, but everyone complaining about this is, for the most part, wrong. Nobody was lied to or deceived. Everybody who pledged to the Kickstarter campaign signed a perfectly transparent contract exchanging their money for a concrete, well-defined deliverable, knowing full-well that “hey, maybe somebody who succeeds in designing a revolutionary improvement over existing VR helmets maybe has a big-dollar idea on their hands.” If the Ouya wasn’t a stupid idea and instead had been purchased by Apple or Roku or Amazon for billions (or even hundreds of millions) I’m sure we’d be hearing the same complaints, and they’d still be wrong. Even in the realm of “nominally transparent but fishy exchanges,” this falls well short of “old people paying subscription fees to AOL” or “Herbalife.”

Part of what is incensing people about this, crucially, is the scale – crucially because it shows where the real injustice lies. There are, on Kickstarter, many projects to help “kickstart” people’s board game designs, music albums, and short films – should that board game then get picked up by Rio Grande, or that album picked up by XL, or that short film get a contract for expansion into a feature by The Weinsteins, well, isn’t that the point of Kickstarter? You help someone “kickstart” their project, and their dreams, to help them succeed at bringing some cool new creation into the world and hopefully leverage that success into a more-fulfilling career. And maybe that project, and their subsequent career, will be more lucrative than the more mundane pursuit they were engaged in before Kickstarter helped them find their break. And that’s OK! You didn’t ask for equity, you didn’t get it, you dig the board game or the T-shirt, and life goes on.

But the investors in Oculus, collectively, just made two billion dollars.

And therein lies the real injustice. Kickstarter funders of Oculus may be thinking “hey, I pitched in to help you make a great idea happen, maybe even to make you personally financially successful, but I did not sign up to make you a billionaire.” But that gets us back to the real injustice – there shouldn’t be billionaires! The fact that a twenty-something dude who figured out how to strap your parents’ basement to your face is now going to live a life of immense luxury, free of all wants and able to pursue any material dream, largely because another billionaire twenty-something thought “hey, I really want to wear my parents’ basement on my face,” is totally outrageous. But it only highlights the vastly deeper flaws in our current socioeconomic system, which allows and indeed catalyzes the accumulation of vast wealth by a tiny minority on relatively arbitrary bases. Nobody who gave to Oculus on Kickstarter is, or almost certainly will ever be, a billionaire. And many of them may and likely will in their lives face substantial economic hardship, hardship that would have been largely avoidable if we had a society committed to supporting the broad majority of people at the expense of the 0.1%. But right now we don’t have that society, and that’s the real injustice.
