
[Image caption: and on top of everything else you get a sticker, everyone loves stickers]

The other day I tweeted that “‘voting is irrational’ is the worst argument smart and reasonable people routinely make” after seeing smart and reasonable Matt Dickinson reference it as an aside while making what I think is a different-but-also-bad argument about why people in certain positions should abstain from voting, and got at least one request to flesh out why I think the argument is in fact so bad. I won’t cite all the people who make the argument (nor single out Matt per se; his was just the reference that led to the tweet that led to this post), since I think it’s fairly well-established both in its contours (that the odds of any individual vote affecting the outcome of an election are tiny) and in how widely it’s made. So here, in no particular order, is a laundry list of all the reasons this argument is bad and I hate it.

Derek Parfit’s “Harmless Torturers” argument – In “Reasons and Persons” Parfit creates a thought experiment, summarized as succinctly as possible as follows: if 1,000 people each control a single machine that tortures a single person (say, with electric shocks), it is clear that electing to activate the machine is wrong. But if each of those people instead controls 1/1,000th of a single machine that distributes 1/1,000th of that torturous shock to each of 1,000 people connected to it, would we still consider each choice to flip the switch wrong, even if the marginal torture being distributed is at most barely perceptible? The intuitive, and also correct, answer is “yes,” and this is a very potent argument in the context of many cultural problems as well as climate change. It is similarly potent here: so long as we accept that collectively high participation in voting is good, it follows that each individual decision to vote is good. I leave it to the reader to note that, in the absence of substantial counter-forces, doing good is rational.
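
To make the aggregation point concrete, here’s the arithmetic as a toy script (harm “units” are obviously made up by me, not Parfit’s):

```python
# A toy rendering of Parfit's harmless torturers (made-up harm units).
N = 1000          # torturers, and also victims
SHOCK = 1.0       # harm of one full shock

# Scenario A: each torturer gives one victim a full shock.
total_harm_a = N * SHOCK

# Scenario B: each torturer gives each of the N victims 1/N of a shock.
per_torturer_harm_b = (SHOCK / N) * N        # imperceptible per victim...
total_harm_b = N * per_torturer_harm_b       # ...but identical per torturer

assert total_harm_a == total_harm_b == 1000.0
# Each switch-flip in B adds harm no one can feel, yet all the flips together
# reproduce scenario A exactly -- which is why each flip is still wrong.
```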

Anthropology and sociology hugely militate against the narrow economic view of adjudicating individual actions on the narrow benefit-cost of the marginal action – Human societies are vastly complex networks bound together as much or more by norms and custom as by formal rules, and rather than seeing collective action as the sum of individual actions it often makes more sense to see individual action as a node in a multidimensional matrix of complex social, economic, familial, and communal networks. This, BTW, is why the whole quest to “microfound” macroeconomics is fundamentally dumb, but that’s another blog post.

There’s no reason not to vote – The costs of voting are extremely small, and declining, since time in transit or in queue can be spent communicating with others or playing Hoplite, which I just discovered and which is super fun. It can obviously be irrational to do the ethical thing in a context where doing so leaves one likely to be harmed or exploited; this is related, in some ways, to the theoretical finding that won George Akerlof a Nobel Prize, as well as just being obvious. If nobody’s paying taxes, don’t pay taxes, etc. But in a general equilibrium that is either positive or near a tipping point, especially given the prior point, if the costs of doing the socially beneficial and ethically sound thing are low or negligible, it is absolutely rational to do it. Plus, the time-money equivalence isn’t purely scalar on the margins; most people distribute their time in lumpy ways that don’t make marginal time-use decisions, especially on the scale of “an hour every two years,” costly in a way that can be easily quantified.

Voting is fun – I like voting! It is rational to do things one likes to do!

Voting is empowering on an individual and communal level – making one’s voice heard in the formal political process has a two-way legitimation effect, legitimizing one’s own equal right to be a part of the civic process as well as legitimating that civic process as the correct channel for making one’s voice heard. It is rational to pursue this, which also leads into the next argument…

This argument militates against all public and civic participation – If voting is irrational, so is signing a petition, joining a protest, donating to a candidate, or even voicing one’s opinion. Unless one takes actions so drastic that they affect political outcomes purely in isolation – and, without getting too much into it, one can clearly extrapolate that most such actions are violent or otherwise bad – this argument militates in favor of total non-participation in anything civic or even communal.

This argument is particular to first-past-the-post elections on a very large scale – In a proportional voting system, or in elections for mayor, city council, or even Congress, much smaller numbers of votes can clearly affect substantial political outcomes. A ~36,000–28,000 vote in suburban Virginia deposed the second-most-powerful House Republican. And once you’re voting at all, the marginal cost of voting for everything on the ballot is so vanishingly small that even the narrow economic argument against voting is thin as straw.
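
To put numbers on “very large scale”: under the textbook coin-flip model of a pivotal vote (a deliberate oversimplification of mine – Gelman’s estimates, referenced below, are far more careful), the odds of breaking an exact tie fall off only like one over the square root of the electorate, so scale matters enormously:

```python
# Back-of-envelope pivotal-vote odds: n other voters each pick candidate A
# with probability 0.5, and your vote matters only on an exact tie.
from math import exp, lgamma, log, pi, sqrt

def p_tie(n):
    """P(exact tie among n other voters), n even; log-space avoids underflow."""
    return exp(lgamma(n + 1) - 2 * lgamma(n // 2 + 1) + n * log(0.5))

for n in (10_000, 100_000, 10_000_000):  # town race vs. House seat vs. big state
    print(f"{n:>10,} voters: P(pivotal) ~ {p_tie(n):.1e}"
          f" (1/sqrt(n) scaling: {sqrt(2 / (pi * n)):.1e})")
```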

Making this argument is immoral from a consequentialist standpoint – Even if you think individual voting decisions are irrational, so long as you think high participation in voting is generally good, then by making this argument you are helping to damage that participation. Maybe you think that making the argument damages it so slightly it barely matters, but then why bother making the argument at all? By your own logic it is clearly irrational to do so, since it’s not having any impact.

Making this argument is immoral from an anthropological standpoint – Of course, I do think it has an impact, especially as more people make it, and I think it corrodes the necessary normative construct of individual obligation to the collective and civic well-being that makes our society and similar societies function well. Promoting cynicism and non-participation is bad.

Making this argument makes you look like a smug, dislikeable cynic – This is self-explanatory. Seriously, doing this just makes you look like a narrow-minded pedant who wants to prove their intellectual superiority by making an obnoxious debater’s point at the expense of, like, you know, democracy, and people will dislike you for doing it.

And all that without referencing Florida c. 2000, and without referencing the many counter-arguments for voting that play somewhat more on the turf of the original irrationality argument; for those, see Andrew Gelman, who is good on this issue (paper here, posts here, here, and here).

All that being said we should vote less, for less, and on the weekend, and maybe it should even be mandatory, but that’s a different story.

Of course, I’m well over halfway into writing my Big Important Thinkpiece about Capital in the 21st Century when the FT decides to throw a grenade. Smarter and more knowledgeable people than I have gone back and forth on the specific issues, and my sense aligns with the general consensus: there are genuine problems with some of the data, but the FT criticisms were at least somewhat overblown, and there is not nearly enough there to overturn the central empirical conclusions of Piketty’s work.

What strikes me most about this episode is just how unbelievably hard true data and methodological transparency are. The spreadsheet-versus-statistical-programming-platform debate seems to me to be a red herring – at least as the paradigm stands, each has its uses, limitations, and common pitfalls, and for the kind of work Piketty was doing, which relied not on complex statistical methods but mostly on careful data aggregation and cleaning, a spreadsheet is probably as fine a tool as any.

The bigger issue is that current standards for data transparency, while certainly well advanced by the power of the internet to make raw data freely available, are still sorely lacking. The real problem is that published data and code, while useful, are still the tip of a much larger methodological iceberg whose base, like a pyramid’s (because I mix metaphors like The Avalanches mix phat beats), extends much deeper and wider than the final work. If a published paper is the apex, the final dataset is still just a relatively thin layer beneath it, when what we care about is the base.

To operationalize this a little, let me pick an example that’s both very good and one I happen to be quite familiar with, as I had to replicate and extend the paper for my Econometrics course. In 2008, Daron Acemoglu, Simon Johnson, James A. Robinson, and Pierre Yared wrote a paper entitled “Income and Democracy” for the American Economic Review in which they claimed to have demonstrated empirically that there is no detectable causal relationship between levels of national income and democratic political development.

The paper is linked; the data, which are available at the AER’s website, are also attached to this post. I encourage you to download them and take a look for yourself, even if you’re far from an expert or even afraid of numbers altogether. You’ll notice, first and foremost, that it’s a spreadsheet. An Excel spreadsheet. It’s full of numbers. Additionally, the sheets have some text boxes. Those text boxes contain Stata code. If you paste all the numbers into Stata, then paste in the corresponding code and run it, it will produce a bunch of results. Those results match the results published in the corresponding table in the paper. Congratulations! You, like me, have replicated a published work of complex empirical macroeconomics!
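
For the curious, the mechanical step really is that mechanical. Translated loosely into Python rather than Stata (the file, sheet, and variable names here are my hypothetical stand-ins, not the actual contents of the AER archive), the entire “replication” is something like:

```python
# A loose Python analogue of the paste-and-run replication described above.
# File, sheet, and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_excel("income_democracy.xls", sheet_name="5 Year Panel")

# Baseline shape of the paper's regressions: democracy on lagged log income,
# with country and year fixed effects entered as dummies.
model = smf.ols("democracy ~ lag_log_gdp + C(country) + C(year)", data=df).fit()
print(model.params["lag_log_gdp"])   # cross-check against the published table
```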

Except, of course, you haven’t done very much at all. You’ve just replicated a series of purely algorithmic functions – you’re a Chinese room of sorts (as much as I loathe that metaphor). Most importantly, you didn’t replicate the process that led to the production of this spreadsheet full of numbers. In this instance, there are 16 different variables, each drawn from a different source. To truly “replicate” the work done by AJR&Y you would have to go to each of those sources and cross-check each of the datapoints – of which there are many, because the unit of analysis is the country-year; their central panel alone, the 5-Year Panel, has 36,603 datapoints over 2,321 different country-years. Many of these datapoints come from other papers – do you replicate those? And many of them required some kind of transformation between their source and their final form in the paper – that also has to be replicated. Additionally, two of those variables are wholly novel – the trade-weighted GDP index, as well as its secondary sibling, the trade-weighted democracy index. To produce those datapoints requires not merely transcription but computation. If, in the end, you were to superhumanly do all this, what would you do if you found some discrepancies? Is it author error? Author manipulation? Or your error? How would you know?
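
To see why reconstruction is so much harder than re-running, take just one of those novel variables. Schematically, a trade-weighted democracy index is each country’s average of its trading partners’ democracy scores, weighted by trade shares – the sketch below is my schematic reading, with hypothetical data structures, not the authors’ actual construction:

```python
import pandas as pd

def trade_weighted_democracy(trade, democracy):
    """Schematic trade-weighted democracy index (hypothetical structures,
    not AJR&Y's actual construction). `trade` is a DataFrame of bilateral
    trade volumes (rows: country, columns: partner); `democracy` is a
    Series of partner democracy scores indexed by country name."""
    shares = trade.div(trade.sum(axis=1), axis=0)    # row-normalize to shares
    return shares.mul(democracy, axis=1).sum(axis=1) # weighted partner average
```

Every cell feeding that function embeds a sourcing decision you cannot see from the output alone, which is the whole problem.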

And none of this speaks to differences of methodological opinion – in assembling even seemingly simple data, judgment calls must be made about how the data will be computed and represented. There are also higher-level judgment calls – what is a country? Which should be included and excluded? In my own extension of their work, I added a new variable to their dataset, and much the same questions apply – were I to simply hand you my augmented data, you would have no way of knowing precisely how or why I computed that variable. And we haven’t even reached the most meaningful questions – most centrally, are these data and these statistical methods the right tools to answer the questions the authors raise? In this particular case, while there is much to admire about their work, I have my doubts – but even moving on to address those doubts involves, in this case, some throwing up of hands in the face of the sheer size of their dataset. We are essentially forced to say “assume data methodology correct.”

Piketty’s data, in their own way, go well beyond a simple spreadsheet full of numbers – there were nested workbooks, with the final data actually being formulae that referred back to preceding sources of rawer data, which were transformed into the variables of Piketty’s interest. Piketty also included other raw data sources in his repository even when they were not linked by formula to the spreadsheets. This is extremely transparent, but it still leaves key questions unanswered – some “what” and “how” questions, but also “why” questions. Why did you do this this way versus that way? Why did you use this expression to transform this data into that variable? Why did you make this exception to that rule? Why did you prioritize different data points in different years? A dataset as large and complex as Piketty’s is going to have hundreds, even thousands, of individual instances where these questions can be raised, with no automatic system for providing answers other than having the author manually address them as they are raised.

This is, of course, woefully inefficient, and it also creates some perverse incentives. If Piketty had provided no transparency at all – well, that would have been what every author of every book did, going back centuries until very, very recently. In today’s context it might have seemed odd, but it is what it is. If he had been less transparent, say by releasing simpler spreadsheets with inert results rather than live formulae calling on a broader set of data, it would have made it harder, not easier, for the FT to interrogate his methods and choices – that “why did he add 2 to that variable” question, for example, would have been invisible. The FT had the privilege of being able to do at least some deconstruction of Piketty’s data, as opposed to reconstruction, and the latter can leave the reasons for discrepancies substantially more ambiguous than the former. As it currently stands, a high level of attention on your research has the nasty side effect of drawing attention to transparent data but opaque methods – methods that, while in all likelihood at least as defensible as any other choice, are extremely hard under the status quo to present and defend systematically against aggressive inquisition.

The kicker, of course, is that Piketty’s data are coming under exceptional, extraordinary, above-and-beyond scrutiny – how many works that are merely “important” but not “seminal” never undergo even the most basic attempts at replication? How many papers are published in which nobody even plugs in the data and the code and cross-checks the tables – forget about checking the methodology undergirding the underlying data! And these are problems that relate, at least somewhat, to publicly available and verifiable datasets, like national accounts and demographics. What about data on more obscure subjects with only a single, difficult-to-verify source? Or data produced directly by the researchers?

In discussing this on Twitter, I advocated for the creation of a unified data platform that would not only allow users to merge the functions of, and/or toggle between, spreadsheet and statistical-programming GUIs and capabilities, but would also create a running, annotatable log of a user’s choices, not merely static input and output. Such a platform could produce a user-friendly log that could either be read in a common format (html, pdf, doc, epub, mobi) or uploaded in a packaged file with the data and code, to actually replicate, from the very beginning, how a researcher took raw input and created a dataset, as well as how they analyzed that dataset to draw conclusions. I’m afraid that without such a system, or some other way of making not only data but start-to-finish methodologies transparent, accessible, and replicable, increased transparency may end up paradoxically eroding trust in social science (not to mention the hard sciences) rather than buttressing it.
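
To make that concrete, here is a minimal sketch of the logging half of the idea, in Python with entirely hypothetical names: a thin wrapper that records every transformation applied to a dataset together with the researcher’s stated rationale, producing the kind of start-to-finish “why” trail discussed above.

```python
# Minimal sketch of an annotatable transformation log (hypothetical design).
import json
from datetime import datetime, timezone

class ProvenanceLog:
    """Records each step applied to a dataset, with the researcher's stated
    rationale, so the 'why' ships alongside the data and code."""
    def __init__(self):
        self.entries = []

    def apply(self, data, func, rationale):
        self.entries.append({
            "step": func.__name__,
            "rationale": rationale,                    # the "why" question
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return func(data)

    def export(self, path):
        with open(path, "w") as f:
            json.dump(self.entries, f, indent=2)       # packaged with the data

# Usage (hypothetical helper): log.apply(raw_gdp, interpolate_missing_years,
#     rationale="linear interpolation; census gaps shorter than 3 years")
```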

Attachments: Income and Democracy Data AER adjustment_DATA_SET_AER.v98.n3June2008.p808 (1) · AER Readme File

So Radley Balko brought up on Twitter a “Challenge To Lefty Bloggers” that he published back in 2009. Some of his questions are perfectly reasonable: I think we should target NGDP in a way commensurate with 4% inflation, for example; I’m also in favor of marginal (not total or average) tax rates of ~90% under certain circumstances. Some of his questions are irrelevant (the “unfunded liability” of Social Security is a non sequitur), confused (I think he scrambles marginal and average tax burdens), or just silly (our average tax rate is close to the median – what greater suffering dare you inflict!?).

But some of them are conceptually flawed in a way I think is interesting. First, his “size-of-government” metric is hopelessly flawed. More interesting, though, are the questions about income inequality and progressive taxation – what are the optimal levels of each? The trick here is that claiming to have a theoretical or empirical basis for an exact number is a fool’s errand. The real answer to this question is “less and more than we currently have, respectively.” So let’s use a little more of the latter to alleviate some of the former, and see what happens! It doesn’t have to be radical – we could just nudge up top marginal tax rates, perhaps create a new millionaire’s bracket, and use the money to expand the EITC (which I know doesn’t directly affect pre-tax income inequality on either end, but just roll with me here). Will that devastate innovation? Will Atlas shrug? Meh – I doubt it. In fact, Galt’s Gulch was a rather lonely place even when top marginal tax rates in the United States were 90%+. So rather than demand anyone declare a single optimal point, let’s agree that “too few people claim too large a share of national income,” nudge it a bit, and see what happens. That’s what democracy is for!
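
Since the marginal-versus-average confusion does real work in these debates, here’s the distinction in runnable form (bracket thresholds and rates are purely illustrative, not any actual tax code):

```python
# Marginal vs. average tax rates under a toy progressive schedule.
# Brackets and rates are illustrative only -- not any real tax code.
BRACKETS = [(0, 0.10), (50_000, 0.25), (250_000, 0.40), (1_000_000, 0.90)]

def tax_owed(income):
    """Total tax: each rate applies only to the slice of income above
    its own threshold and below the next one."""
    owed = 0.0
    for (lo, rate), nxt in zip(BRACKETS, BRACKETS[1:] + [(float("inf"), None)]):
        if income > lo:
            owed += (min(income, nxt[0]) - lo) * rate
    return owed

income = 2_000_000
print(tax_owed(income) / income)   # average rate ~0.63, well below the 0.90
                                   # marginal rate on the last dollar earned
```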

In reading about the fascinating-and-inane-in-equal-measure experiment in private money called “Bitcoin,” I found this post by Timothy Lee, which contained a passage that I found very clarifying:

So one of Bitcoin’s key selling points—a permanently fixed supply—is basically illusory. The supply of Bitcoins, like the supply of every other currency, will be controlled by the fallible human beings who run the banking system. We already have an electronic currency whose quantity is controlled by a cartel of banks. If you’re a libertarian, you might think the lack of government regulation is an advantage for Bitcoin, but it strikes me as highly improbable that the world’s governments would leave the Bitcoin central bank unregulated. So I don’t see any major advantages over the fiat money we’ve already got.

In fact, from a small-d democratic standpoint, once you’ve reached this point of analysis the Federal Reserve System is actually superior because its leaders are appointed and confirmed by elected officials, thus implementing at least some democratic accountability.

But I’m not sure that matters to a lot of self-described libertarians. There is a view, famously summarized by a misquoted Margaret Thatcher, that says “there is no such thing as society. There are individual men and women, and there are families.” In this view, there are individuals and there are those things that intrude on the rights of individuals, and the latter are pretty much malicious in every instance.

But there is also a view that there is very much a thing called society, built of networks and relationships, fundamentally rooted in interdependence, and impossible to reduce to the sum of its parts. In this view, the entirety of the society produces a certain amount of wealth in goods and services, and how those goods and services are produced and allocated should be determined by institutions elected by society in an accountable, fair, and transparent way. This is not to say there is no such thing as individual rights, property rights, etc., but that those who happen, for one reason or another, to be among the lucky few who control the flow and distribution of capital shouldn’t be the only ones to determine its destination.

And these views are mostly incompatible, though in practice there are some areas of agreement (mostly around the necessity of public security and contract enforcement). But it does seem that libertarianism, to the extent that it denies the right of democratically elected institutions to acquire any meaningful power beyond policing and border defense, doesn’t really lead to any kind of meaningful democratic empowerment.

But if you don’t believe that those who possess capital have a sacred right to accumulate as much of it as they can, you will probably be more inclined to agree with Abraham Lincoln:

Labor is prior to, and independent of, capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration.
