My post last week on the case for homeownership as an investment has received some good feedback (the e-word is hereby banished from this blog), a good chunk of which has been constructively critical. While I responded to specifics in comments, I also wanted to supplement the post by fleshing out the remainder of the argument and adding a couple of points.
It has been pointed out to me that there are certain costs – mostly taxes, insurance, and maintenance – that weren’t included in my spreadsheet and only implicitly in my analysis. This is – for the most part – true! I did handwave away depreciation, as much for the sake of simplicity as anything, but I only touched on the other two to the extent that they’re wrapped up into the rent counterfactual. Let’s delve into that some more.
Rent – the price of shelter to non-owners – is in the simplest analysis driven by the same things that drive all market prices: supply and demand. That means rents aren’t directly responsive to the costs of housing, but those costs do impact the supply curve. If the costs of creating and renting new housing can’t be justified by rents, then supply will not rise even if demand does, driving up prices until they are so justified. Therefore, in general we should expect the costs of renting shelter to be similar (though not equivalent) to those incurred by the owner of the same. In fact, I bet if you play around with The Upshot’s ‘Buy vs. Rent’ calculator, you’ll find that owning and renting costs are very similar.
This brings me to my next point: while people have pointed out which costs I didn’t include, fewer have mentioned the benefit I didn’t include in my analysis, even though that benefit is far larger. I focused solely on the capital-gains returns of buying a house to demonstrate the power of leverage, but the lion’s share of the returns to a house are the rents you receive as an owner. This is central to any complete case in favor of homeownership. It is further worth noting that these imputed rents are, in fact, an enormous share of our economy.
Net imputed rents, as I pointed out in my Piketty thinkpiece (which, seriously, you must have read by now), also tend to be fairly stable, returning between 4% and 6% of the house’s price over time.
This chart actually understates the stability of imputed rents (as the former chart makes clear) since most of that volatility is driven by volatility in the denominator. For context, here’s the Case-Shiller index, since basically forever (with bonus real interest rate series):
While volatility has more recently increased (consider that my application for the Understatement of the Year Award), note that houses, at the very worst, tend to be inflation proof (the Case-Shiller is a real, not nominal, index) – an asset whose nominal price grows alongside inflation while consistently returning 4-6% annual net returns is, hey, not too bad, and if you can use tax-privileged leverage to buy it, not too bad at all. Especially since we’re going to pay a bundle for housing no matter what we do:
…using housing as a vehicle for savings makes additional sense.
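The arithmetic behind the leverage point above can be sketched in a few lines. This is a minimal, hypothetical illustration – the price, down payment, yields, and rates below are round numbers made up for the example, not figures from the post:

```python
# One-year total return on a leveraged house purchase, combining the two
# return streams discussed above: nominal price growth that keeps pace
# with inflation, plus net imputed rent. All inputs are hypothetical.

def leveraged_housing_return(price, down_payment, rent_yield, inflation, mortgage_rate):
    """Return on the owner's equity (the down payment) over one year.

    price         -- purchase price of the house
    down_payment  -- cash the buyer puts in
    rent_yield    -- net imputed rent as a fraction of price (e.g. 0.05)
    inflation     -- annual nominal price growth of the house
    mortgage_rate -- interest rate on the borrowed remainder
    """
    loan = price - down_payment
    capital_gain = price * inflation        # price tracks inflation
    imputed_rent = price * rent_yield       # rent the owner "pays" themselves
    interest_cost = loan * mortgage_rate    # cost of the leverage
    return (capital_gain + imputed_rent - interest_cost) / down_payment

# 20% down, 5% net rent yield, 2% inflation, 4% mortgage:
levered = leveraged_housing_return(100_000, 20_000, 0.05, 0.02, 0.04)
# Same house bought with all cash, for comparison:
unlevered = leveraged_housing_return(100_000, 100_000, 0.05, 0.02, 0.04)
```

Under these made-up inputs the all-cash buyer earns the underlying ~7% while the leveraged buyer earns 19% on equity – the amplification the post is describing (which, of course, cuts both ways when prices fall).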
That leads me to an additional point on volatility; here’s Shiller’s stock price index, also since basically forever:
That looks a lot more volatile than house prices, huh? Which brings us to a key point – as asset price volatility increases, so does the importance of investment timing. This, as Neil Irwin recently noted, can make long-term averages of returns misleading.
While his examples are obviously stylized, they make the point clearly enough: otherwise-identical savings behavior in a volatile market can achieve vastly different outcomes depending on the timing of returns, even holding long-term average returns constant. Therefore, the relative stability of housing returns – prices + rents – helps savers reduce long-term risks.
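Irwin’s point is easy to verify with a toy simulation. The two return sequences below are invented for illustration; they have identical averages but differ only in when the crash lands, and a saver making fixed contributions ends up in very different places:

```python
# Sequence-of-returns risk: same average return, different timing,
# different outcome for a saver contributing a fixed amount each period.
# The return sequences are made up for illustration.

def final_balance(returns, contribution=1.0):
    """Balance after contributing `contribution` at the start of each
    period and earning that period's return on the whole balance."""
    balance = 0.0
    for r in returns:
        balance = (balance + contribution) * (1 + r)
    return balance

early_crash = [-0.30, 0.10, 0.10, 0.10, 0.10]   # crash first, then steady gains
late_crash  = [0.10, 0.10, 0.10, 0.10, -0.30]   # identical returns, crash last
```

The steady contributor actually prefers the early crash (cheap purchases early, growth later); a crash just before retirement, on the identical average return, is far worse. That asymmetry is exactly why volatility makes long-term averages misleading.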
I want to conclude, though, by taking a major step back and examining the whole purpose of this exercise. When we’re talking about savings from a consumer perspective (not from an investment perspective), what we’re talking about is retirement; and when we’re talking about retirement, we’re always talking about the same somewhat-odd phenomenon. When a person retires, they cease all economic production through labor, yet continue to demand a share of the economic output of their society. We tend to view these claims as just and deserved because they are made by the elderly, whom we feel have earned it/are unable to work/are generally venerable (as opposed to similar claims from the non-elderly poor, which we treat very differently), but that doesn’t change the underlying structural nature of the phenomenon, in which we are trying to ensure that a substantial portion of the adult population consumes a broadly-equally-substantial portion of present economic output while providing no inputs.
Debates about savings and retirement, therefore, are all about how to structure this phenomenon – specifically, what network of programs, policies, mechanisms, incentives, and behaviors we want to establish to justify to workers and capitalists that a portion of their labor and capital outputs be directed to the non-working old, which we often do by creating mechanisms that somehow tether those portions of redistributed present income to guarantees of future income. All governments in wealthy nations do this, and the ways in which they vary are influenced heavily by politics, ideology, and other socioeconomic factors. In the United States, our prevalent ideology around a certain kind of economic freedom means we tend to be less generous in direct public redistribution and instead attempt to subsidize private savings through the tax code and public insurance – ergo, 401(k)s, the home mortgage interest deduction, and the Pension Benefit Guaranty Corporation. Indeed, the increasing prevalence of that ideological strain is driving defined benefit plans into extinction in favor of defined contribution plans.
This leads us to many debates about the best savings vehicles for middle-class Americans, yet those debates are to a decent extent a red herring – the vast majority of retirees receive the majority of their retirement income from Social Security, and for many, it’s all the income they have. To be consistent, I’m nearly certain the figures in the chart below don’t include imputed rents (though I could be wrong), and this is important because 80% of seniors are homeowners:
This is very good evidence for the proposition that a vastly disproportionate share of the private-savings-for-retirement subsidy network flows to those who need it least. And it suggests that questions like “houses v. stocks” are, for many Americans, mostly a red herring – if we want to put more money in the hands of retirees, we should simply make Social Security more generous – or, in a better world, maintain it at its current level of generosity while implementing a Universal Basic Income.
So late last year Matt Yglesias found a simple and concise way to create a good-enough estimate of the value of all privately-held American land, using the Fed’s Z1. He did not, however, go on to take the most-obvious next step, which was to use FRED to compile all the relevant series to calculate the entire time-series.
I have taken that bold step. Behold – the real value in present dollars of all privately held American land since FY 1951:
Oh, look – a housing bubble!
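The calculation itself is simple enough to sketch. As described above, the method backs land out of the Z.1 by subtracting the replacement cost of structures from the market value of real estate, then deflates into present dollars. The figures and names below are hypothetical placeholders, not actual Z.1 values:

```python
# Land value as a residual: what the real estate is worth, minus what it
# would cost to rebuild the structures standing on it. Inputs here are
# invented round numbers, not actual Fed Z.1 series values.

def private_land_value(real_estate_market_value, structures_replacement_cost):
    """Implied value of the land under privately held real estate."""
    return real_estate_market_value - structures_replacement_cost

def real_value(nominal, deflator, base_deflator):
    """Convert a nominal figure into base-period dollars via a price deflator."""
    return nominal * (base_deflator / deflator)

# Hypothetical year: $25T of real estate at market value, $15T structures
# replacement cost, price level 5% above the base period:
land_nominal = private_land_value(25e12, 15e12)
land_real = real_value(land_nominal, 105.0, 100.0)
```

Repeat that subtraction for every year the Z.1 covers, deflate each year to a common base, and you have the time series charted above.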
But because this is the Age of Piketty, why stop there? Thanks to the magic of the internet and spreadsheets, all of the data Piketty relied on in his book is freely available – and perhaps even more importantly, so is all the data Piketty and Zucman compiled in writing “Capital is Back,” which may be even more comprehensive and interesting. So using that data, I was able to calculate land as a share of national income from 1950-2012. Check it out*:
Oh look – a housing bubble!
And why stop there? We know from reading our Piketty that the capital-to-income ratio increased substantially during that time, so let’s calculate the land share of national capital:
Oh look – a…two housing bubbles?
It’s hard to know what to make of this at first glance, but after two decades of steadily comprising a quarter of national capital, land grew over another two decades to nearly a third of it; then, after a steep drop to under a fifth of national capital in less than a decade, it just about as quickly rebounded, then plummeted even faster to under a fifth again.
So the question must be asked – why didn’t we notice the first real estate bubble, just as large (though not as rapidly inflated) as the second? There are two answers.
The first answer is – we did! Read this piece from 1990 – 1990! – about the “emotional toll” of the collapse in housing prices. Or all these other amazing pieces from the amazing New York Times archive documenting the ’80s housing bubble and the collapse in prices at the turn of the ’90s.
The second answer is – to the extent we didn’t, or didn’t really remember it, it’s because it didn’t torch the global financial system. Which clarifies a very important fact about what happened to the American economy in the late aughties – what happened involved a housing bubble, but wasn’t fundamentally about or caused by a housing bubble.
For context, here’s the homeownership rate for the United States:
The 00s housing bubble clearly involved bringing a lot of people into homeownership in a way the 80s bubble did not; that bubble, in fact, peaked even as homeownership rates were declining.
There are a lot of lessons to learn about the 00s bubble, about debt and leverage and fraud and inequality, but the lesson not to learn – or, perhaps, to unlearn – is that a bubble and its eventual popping, regardless of the asset under consideration, is a sufficient condition for a broader economic calamity. Now, it does seem clear that the 80s housing bubble was in key ways simply smaller in magnitude than the later one; it represented a 50% increase as a ratio to national income rather than the doubling experienced in the aughties, even though both saw land increase similarly relative to capital. But there have been – and, no matter the stance of regulatory or, shudder, monetary policy, will continue to be – bubbles in capitalist economies. The policy goal we should be interested in is not preventing bubbles but building economic structures and institutions that are resilient to that fact of life in financialized post-industrial capitalism.
*Piketty and Zucman only provide national income up through 2010, so I had to impute 2011-2012 from other data with a few relatively banal assumptions.
When I read Erik Kain’s post yesterday about how the Ouya has essentially failed, my immediate response was “well, of course, the video game industry is drawing dead.” Let me explain what I mean.
When technological innovation leads to a new product class that catches on, there is an initial phase – the adoption phase – characterized by large and rapid growth, continued innovation, and something of a mania around the product and industry. We saw that with video games in the ’90s.
But when an industry becomes mature, sales plateau as the product becomes a more banal part of everyday life and the mania generally declines. This can sometimes be wrenching for an industry, especially since what worked before no longer works now. Firms may fail to realize that no amount of innovation can change the total magnitude of the industry relative to life in general – innovation can only compete within the bounds of that magnitude. Those whose models were built implicitly around a continuation of the adoption phase will fail.
To give you a concrete example, here’s car sales per thousand souls in the US since Ted Turner bought the Braves:
Beyond the extreme sensitivity to business cycles, what you see is that, despite the fact that the quality improvements in American cars since the days of Nader’s Raiders have been extraordinary – in safety, comfort, fuel efficiency, pollution mitigation, and other cool accessory features – they haven’t convinced Americans to buy more car overall. This pattern is clearly taking hold in the video game industry:
Now, let’s be a little clear here – mobile gaming has done a lot for the category of industry we’re calling “video games.” But that’s because it expanded the boundary of what video games were, not because it convinced people to spend more time playing what, up through 2007-08, we referred to as “video games.” My mother plays plenty of Candy Crush Amazing Epic Super Saga: Incredible Journey of Candy Gilgamesh or whatever but that hasn’t convinced her to take up Halo (though the image of my mother pwning and trash-talking on XBox Live is pretty hilarious). In a sense, this growth is illusory insofar as you are considering the growth of console and PC gaming. That’s really clear when you look at this:
Video games, methinks, have entered the phase in their life where they are no longer stealing American hours from other activities; indeed, if anything, the rise of Netflix, board games, and some backlash against video games probably means equilibrium per-capita video game hours will end up settling at a lower number than they currently are. The Ouya was implicitly premised on the idea that it wasn’t competing with existing consoles; it wasn’t doing what they were doing better or differently, it was trying to convince people to do something new. But for most people, if they weren’t playing video games, they were just doing something else. The Ouya was drawing dead because video games are drawing dead. That doesn’t mean the industry is going to implode; it just means it’s moved on to fierce competition for limited market share rather than explosive overall growth.
Maybe it’s my eagerness to read The Leading Indicators (right after Piketty – that’ll be a breezy one, right?); maybe it was listening to the Planet Money podcast on Kuznets; or maybe it’s just because it’s Friday, but the question popped into my head this morning – how much is the nonconomy worth?
The boundary between what is and is not “the economy” is both a very well-defined and a very fuzzy one; the NIPA Handbook does a very nice job explaining the division criteria but the more you muse over them the more they reveal themselves as pragmatically arbitrary with a dash of “I know it when I see it.” Which is all fine and useful but still means that all the stuff that’s not “the economy” is not only, you know, the stuff of life, but also just as measurable and worthwhile to measure. Because I am a giant nerd, I have decided to measure it. Because I am decidedly not a one-man BEA, this is going to be pretty back-of-the-napkin stuff. All data 2012:
Americans worked 230 billion hours in 2012. There were 314 million Americans in 2012, so they experienced roughly 2.75 trillion hours. Which means the economy took up ~8% of American time in 2012.
But that’s a little unsatisfying, for the following reasons:
1) It includes retirees and children.
2) It neglects the question of sleep.
3) It neglects the question of work-supporting activities, like commuting. I’m going to exclude them from the nonconomy for now.
So let’s pinpoint working-age Americans, of which there were ~201mm in 2012. Let’s assume that, of their total time spent existing, they spend 1/3 sleeping and another 10% in work-support (the average round-trip commute is roughly an hour, and I rounded well up to encompass all the other little things that shouldn’t be lumped in with the nonconomy under that definition). That leaves us with a pool of pretty much exactly ~1 trillion hours; subtracting the 230 billion hours spent working, Americans spent 770 billion hours in 2012 laboring in the nonconomy.
How to attach a number? Simplifying assumption: output-per-hour is the same in the nonconomy as in the economy, so you just divide GDP by hours worked. In 2012, that was just around $70/hr; multiplied by 770 billion, you discover the nonconomy was $53 trillion in 2012. Let’s use a sophisticated data visualization to compare:
GDP is Gross Domestic Product; GDA is Gross Domestic Awesome.
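The whole back-of-the-napkin calculation above fits in a few lines. The hour and population figures are the post’s own round numbers; the GDP figure is my assumption of roughly $16.2 trillion for 2012:

```python
# The nonconomy, back-of-the-napkin. Hour and population inputs are the
# post's round numbers for 2012; GDP of ~$16.2T is an assumed figure.

HOURS_WORKED = 230e9        # total US hours worked, 2012
WORKING_AGE_POP = 201e6     # working-age Americans, 2012
HOURS_PER_YEAR = 365 * 24   # 8,760
GDP = 16.2e12               # 2012 US GDP, roughly (assumption)

total_hours = WORKING_AGE_POP * HOURS_PER_YEAR
# Remove 1/3 for sleep and 10% for work-support (commuting etc.):
awake_unencumbered = total_hours * (1 - 1/3 - 0.10)
nonconomy_hours = awake_unencumbered - HOURS_WORKED   # ~770 billion

# Simplifying assumption: same output-per-hour as the measured economy.
output_per_hour = GDP / HOURS_WORKED                  # ~$70/hour
nonconomy_value = nonconomy_hours * output_per_hour   # ~$53-54 trillion
```

Swap in different sleep or commute assumptions and the headline number moves, but not by enough to change the punchline: the nonconomy dwarfs GDP.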
If anything, this is a substantial undermeasurement, because old people count too! And since by this calculation they don’t work at all, they would be contributing an additional 200 billion hours to the nonconomy, which is another $14 trillion of nonconomic activity:
Just a little perspective on life and the economy on this new-jobs-number Friday.
So these are the three largest components of GDP, all indexed to 1960:
Clearly one of these is not like the others, but the well-known fact that investment, not consumption or government spending, is mostly what fluctuates with the business cycle is very visible. I wanted to dig a little deeper, though, especially to compare the current recession to priors. So I made this graph:
Bars are unbroken periods of percent change in GPDI; their height is the total percent change in the period, their width is the length.
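A sketch of how those bars can be constructed: group the quarterly percent changes into unbroken runs of the same sign, then record each run’s total change (bar height, summing the period changes as a rough approximation of the cumulative change) and its length (bar width). The sample series is invented for illustration:

```python
# Group a series of percent changes into unbroken same-signed runs,
# yielding (total_change, length) pairs - the height and width of each
# bar in the graph described above. Sample data is invented.

from itertools import groupby

def runs(pct_changes):
    """Return (total_change, length) for each unbroken run of
    same-signed percent changes."""
    out = []
    for _sign, group in groupby(pct_changes, key=lambda x: x >= 0):
        chunk = list(group)
        out.append((sum(chunk), len(chunk)))
    return out

sample = [1.0, 0.5, -2.0, -3.5, -1.0, 0.5, 2.0]
bars = runs(sample)   # [(1.5, 2), (-6.5, 3), (2.5, 2)]
```

Run that over the GPDI percent-change series from FRED and the deep, long negative run of the current recession stands out immediately against the shorter, shallower prior ones.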
Here it is smoothed a bit using a highly-advanced method called “arbitrary eyeballing”:
And this time with feeling:
While none of these three graphs is perfect, looking across all of them, the various recessions we’ve experienced – and their depth and breadth – become quite clear. And it seems striking that our current mess represents a vastly larger and longer decline in private investment than any prior recession since WWII.
So let’s break down GPDI; the biggest component is the broad heading of “fixed non-residential investment:”
Looking at the log (which is quite often a good idea; see James Hamilton for more), you can see that this recession seems notably but not dramatically more severe than past downturns, and that we are on a decent track for recovery.
But here’s residential structures:
Wowzers. Two facts worth noting: residential investment has fallen off a cliff and is nowhere near recovering; the so-called “housing boom” is barely visible.
That becomes a little clearer, though, when you look at single-family construction vs. multifamily and “other” (dorms, trailers, etc):
Single-family construction clearly gets a little wacky during the mid-aughties, whereas multifamily spent that period catching up from below trend; since then, multifamily has been rebounding while “other” is wishy-washy and single-family is really terrible.
What’s remarkable about all this, though, is that you can say with some confidence – non-causally – that recessions are, for all intents and purposes, fluctuations in housing construction.
In the past, the pattern was: recession hits, interest rates are cut, recession over. Now interest rates can’t be cut, we’re not building enough housing, and therefore there’s too much unemployment (especially among the young, who are largely the building class):
In fact, relative to older folks, this is the worst the young have had it since the 70s:
Now, why does lowering interest rates reverse recessions? There are many good reasons, but to some extent they’re all about setting expectations. When the Fed “cuts rates,” what it’s doing is just buying lots of government securities – which is also what “quantitative easing” is; the difference between the former and the latter is the ends, not the means. The former is a kind of credible expectation-setting about broader outcomes – “we will buy bonds until interest rates are where we say they should be, dammit.” The latter sets a much narrower expectation that doesn’t necessarily imply broader changes in the economy.
Now, there is an idea out there that Paul Krugman calls “the confidence fairy,” which he belittles…and he’s right (at least in practice)! As formulated by conservative pols and pundits as a partisan cudgel, it basically amounts to a non sequitur: there’s a recession, ergo, implement the tangential policies we support regardless of economic conditions (derp).
But I’m not sure the confidence fairy is entirely a fiction. In what I think is a bit of a cousin to Steve Waldman’s story of finance as the world’s most important confidence game, it seems like in the past recessions have been alleviated because the Fed creates self-fulfilling prophecies – by buying bonds to depress interest rates, they incentivize individuals to invest based on an implicit assumption about future growth dependent on their investment. And it all worked rather nicely until we hit the ZLB:
The thing that the Fed has fundamentally failed to do is pull their usual trick; they haven’t convinced anyone that the economy will be better tomorrow, so they’re not doing the things today that will create that improvement.
This, in a roundabout way, is where I get to responding to Ryan Cooper’s terrific article making the case for helicopter money. Helicopter money is a good idea. I like it. I support it. It is a humane, fair, and efficient way to help everyone get through hard times. But my gut tells me it’s not, on its own, enough to kickstart us out of the funk our economy is in. While the biggest reason the 2008 tax rebate didn’t help the economy was its puniness relative to the impending crisis, it was doubly hobbled by the fact that it was a one-off with no guarantee of being repeated (which it wasn’t, though the payroll tax cut was its cousin). Ryan supports giving the Fed the power to mail checks unilaterally, rather than implicitly backing a fiscal-side program, which is a great idea – coordinating the king and the wizard can be a tricky game. But even then, a $2,000 check can be extraordinarily helpful in the medium term to people in need, yet it in-and-of-itself does not a housing construction recovery make. Helicopter money works best, and may work only, as the whip hand of a credible promise by the Fed to meet a broader economic target; it can, though, be a very persuasive whip.
Bitcoin, after all, is the ultimate fiat currency: just a bunch of ones and zeroes on a computer with no intrinsic value. But so are all currencies. The difference is that it’s more obvious with Bitcoin because the entire enterprise is actively marketed as nothing more than algorithmically-created data. It’s one of their big selling points.
So that forces you to think about what the ultimate value of a Bitcoin can be. And if there isn’t any, then why do dollars and yen have value? Why do IOUs passed around in prison camps have value? Or babysitting chits? Once you figure out what ultimately underlies the value of these various fiat currencies, you’ve taken a big step toward understanding why some currencies are better than others and why playing games with the debt ceiling is so stupid.
Which reminds me that I wanted to knock down this whole notion of “fiat currencies” in the first place.
Money is a technology devised as a solution to a bundle of collective action problems centered around network effects and the transaction costs of exchange above a certain threshold of scope and scale. It works really, really well, but it has a few problems. A key problem (though not the only one, nor perhaps even the most important) is storing value – money is only useful if its value doesn’t fluctuate too much, too unpredictably, too soon. However, there are incentives for whoever issues the money (as well as counterfeiters) to take actions that result in just those kinds of fluctuations, as well as outside pressures that make such fluctuations more likely. Therefore, almost every money issuer ever has taken some sort of steps to regulate the value of its currency and the rate at which that value changes.
One solution to this was to make the money out of rare, durable, verifiable elements. One solution was to have private actors issue money and caveat emptor. One solution was to have a bunch of state technocrats issue paper redeemable for said elements. One solution was to have a bunch of state technocrats issue paper redeemable in more paper and pinky-swear not to allow the value of the currency to fluctuate. One solution was to create a self-perpetuating algorithm that fixed the supply of currency units. There were also other solutions.
I think trying to sort those and the myriad other solutions to the money problem into “fiat” and “backed” is as irrelevant as it is obscurant. In each of those schemes there are two identifiable foci from which value regulation derives and which distinguish the various schemes from each other:
-The algorithm – the rule governing the value path of the currency.
-The credibility – the likelihood of the currency following the value path promised by the algorithm, and the accountable party for those outcomes.
This makes actual, categorizable sense of the differences between various monetary regimes. The algorithm of a gold-standard is “the value of this currency will always be equal to a certain quantity of gold, and you can always exchange your paper for that quantity” and the credibility is in the issuer, whether it’s a private bank or the central bank. Nothing stops me from issuing a gold-backed currency tomorrow, but nobody would use it because my promise to redeem all the SquarelyBucks I’m issuing for shiny gold coins is, sadly, totally lacking credibility. The algorithm of a “fiat” currency is “the value of this currency will never decline by more than ~2% annually” and the credibility is in the issuer, in this case Janet Yellen, the FOMC, and the institutional apparatus in which they operate. The algorithm of Bitcoin is “the money stock will never exceed 21mm BTC” and the credibility is in the nature of the currency’s code which until recently seemed very well-designed to prevent counterfeiting.
The genius of Bitcoin is that it takes the algorithm out of human hands; the tragedy of Bitcoin is that its algorithm is stupid, for two reasons. The first is that Bitcoin’s algorithm was born of an ideology which held that central banks inherently lack credibility and that therefore central-bank currency inflation, even hyperinflation, was not just possible but inevitable, especially in light of the various Federal Reserve responses to recent economic shocks. This ideology is wrong:
The second reason is that there are better algorithms even if you believe in that (wrong) ideology. A good example would be “one BTC will always equal one 2009 USD no matter what happens to the value of USD over time.” This, of course, is way more complicated than the BTC algorithm as a coding matter, because it would have to either trust CPI or another inflation measure or somehow routinely update an internal proprietary index based on accessible price data – a tricky thing to do into perpetuity. If someone wanted to program and release that cryptocurrency, BTW, it would be a fantastic economic experiment. But that’s not BTC, which instead fixed its money supply and, lacking any private or public chartalistic price anchor, allows for large, unpredictable, rapid fluctuations in its value, thus defeating the purpose of money itself.
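The core of that alternative algorithm is just index arithmetic. A sketch, with hypothetical index values – a real implementation would still face the hard problem noted above of trusting or constructing that price index into perpetuity:

```python
# The "one unit = one base-year dollar" algorithm described above:
# scale the unit's target nominal value by a price index. Index values
# here are hypothetical.

def target_nominal_value(base_value_usd, price_index_now, price_index_base):
    """Nominal USD value one currency unit should hold to stay worth
    exactly one base-year dollar."""
    return base_value_usd * (price_index_now / price_index_base)

# If the price level has risen 30% since the base year, one unit
# should be worth $1.30 in today's dollars:
target = target_nominal_value(1.0, 130.0, 100.0)
```

Note what this buys you relative to a fixed money stock: the value path is pinned to purchasing power rather than left to float against a hard supply cap, which is precisely the store-of-value property the post argues Bitcoin’s actual algorithm gives up.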
I hereby therefore petition that we suspend all discussion of “fiat” currencies and “backed” currencies and instead discuss rules and credibility.
(Thanks to Mike Sproul and Nick Rowe for kicking this around with me in the comments of this post)
Miles Kimball and Yichuan Wang find that high government debt doesn’t cause low GDP growth, and Kimball says he finds that surprising, as does Matt Yglesias. But as I suggested in a post last month, I’m not really surprised by this at all.
Governments tax or borrow. The former is withdrawing money from the economy in exchange for nothing (or perhaps a promise not to sanction the taxed) while the latter withdraws money from the economy in exchange for a piece of paper. That’s debt! Evil, evil, debt! Oh, no!
Wait, let’s start over.
The government decides it wants to do something it isn’t already doing, and therefore needs to command a higher share of total social production going forward than it has been. Developed-world governments don’t directly commandeer social resources; they claim them through the proxy of money, by spending it. Assuming an economy at full capacity (whatever that means), if the government commandeers resources by spending money without removing any money from the economy, then you’d have inflation – unless the central bank raises interest rates substantially, which would likely have undesirable negative effects. So the government attempts to roughly balance the resource claims it makes using money by withdrawing an equivalent amount of money from society. Sometimes it does this through taxes, which has some desirable properties (no future obligations on the state, can be used Pigovianly) and some undesirable ones (unintended consequences, involuntary, discourages desirable activity). Borrowing also has some desirable properties (voluntary, compensates those who part with their money) and some undesirable properties (obligates the state).
Therefore, there are two key intertwined questions to be asked about this new government activity – which, remember, is centrally about taking some resources deployed previously to some private purpose and redirecting them to some other, presumably public purpose: is the new activity more valuable than the activity (or activities) it is supplanting, and how is it being financed? They are intertwined because the latter question informs the former.
Let’s say we all agree that this new government project – let’s say it’s a SUPERTRAIN, for fun – is widely considered to be of higher value than the marginal private activity it supplants, regardless of how it is funded. The government could raise taxes to fund it, but unless it is taxing something undesirable (like carbon or booze or Kardashians) this would have the drawback of incurring some “deadweight loss,” not to mention other unintended consequences. It could also borrow the money, which would have two consequences. First, it would supplant something different – rather than raising the cost of work or carbon emissions, it would be more likely to supplant a capital investment of some kind somewhere in the economy. Second, it would obligate the government.
And to what would it obligate the government? Key to understanding this is that governments, unlike Lannisters, never pay their debts. They cleverly disguise this fact by paying their debts in full and on time. Huh? From the perspective of a lender, you get your interest payments, and then your principal in full. But from the perspective of a government, you don’t pay the principal back out of tax revenue; you pay it by rolling over the debt, issuing new debt in the amount of the principal. This works because of NGDP growth (both the RGDP-growth and inflationary components). In fact, we’re still likely rolling over all the debt we incurred from WWII, which back then was 110% of NGDP but today is less than 2% of NGDP.
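The rollover mechanics are easy to check numerically. A fixed nominal debt that is perpetually re-issued shrinks relative to the economy at the rate NGDP grows; here’s the WWII example under an assumed average NGDP growth rate of ~6% a year (a hypothetical round number, not a measured figure):

```python
# Rolled-over debt as a share of NGDP: the principal is never repaid out
# of taxes, only re-issued, so the nominal stock is fixed while NGDP
# compounds underneath it. The 6% growth rate is an assumption.

def debt_share_after(initial_share, ngdp_growth, years):
    """Debt/NGDP ratio after `years` of rolling over a fixed nominal
    debt while NGDP grows at `ngdp_growth` per year."""
    return initial_share / (1 + ngdp_growth) ** years

# ~110% of NGDP at the end of WWII, rolled over for ~67 years:
wwii_share_today = debt_share_after(1.10, 0.06, 67)
```

Under that assumed growth path the 110%-of-NGDP war debt erodes to on the order of 2% of NGDP today – the same order of magnitude as the post’s figure – without a dollar of principal ever being repaid out of tax revenue.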
So really what the government does when it issues a bond is issue itself a negative perpetuity. And the key to understanding the value of a perpetuity is knowing the interest rate, since PV = C/r. Therefore, the obligation on the government is much more dependent on the interest rate path than on the nominal coupon value.
But that interest rate path isn’t just some made-up thing – it’s fundamentally related to NGDP growth. Don’t believe me? Here’s the fed funds rate divided by the NGDP growth rate:
So when recessions happen, the ratio spikes (and whether it spikes up or down is very interesting), but otherwise it’s very steady; if you exclude just the 12 of 223 periods where the absolute value of the ratio is greater than 3, you get an average of 0.8 and a standard deviation of 0.6.
So what does that mean? As interest rates grow, so does the obligation on the government – but growing rates also imply that the government’s ability to meet that obligation is growing in tandem. Which suggests that, while governments cannot borrow limitlessly, the pain point at which government indebtedness begins to inflict structural economic harm is vastly higher than previously assumed.
Japan, for example, is often cited as an example of government debt creating a huge drag/time bomb/giant vengeful lizard that is harming Japan’s economy. But since 1990, Japan’s debt/GDP ratio increased from 67% to 211%, and GDP-per-capita…grew! Significantly! Not awesomely, not enough to catch up with the US (in fact, it fell behind), but grow it did. Certainly more than you might think it would if the 90% monster were real and had started smashing major cities or something.
Many people have begun to worry whether the seemingly-inevitable Japanese debt crisis is nearing as yields have crept up. But yields have crept up because NGDP growth expectations have crept up. As long as they increase in tandem, contra Noah Smith, Japan should always be able to pay its debts.
And I’d be willing to put money/my reputation on this point. While Noah Smith is 100% right that bets != beliefs, I am nonetheless willing to agree in principle to any reasonably-valued bet that neither Japan nor the United States will default over any arbitrary time period. Any takers?
Ashok Rao busts me for being lazy this morning, and he has me dead to rights. I blithely waved away a further discussion of what would actually happen if there were a large secular increase in aggregate saving on the part of the poor. I was lazy about this, and in my defense, I was using “economic disequilibrium” as a stand-in for saying “a) I’m at the office, b) I didn’t think it was entirely relevant to the final conclusion of my post, and c) I was feeling lazy.”
So now that, at the very least, condition a) has been relieved, let’s take a look at Ashok’s point. He notes – quite correctly – that 1) even under unrealistically egalitarian assumptions about the initial wealth distribution, increased saving by the poor doesn’t affect the ratio very much, and, more broadly, that 2) the wealth gap between rich and poor will continue to increase so long as the rich save at a higher rate than the poor, regardless of the initial distribution. Both of these are correct, but they end up being somewhat tangential to the real question re: the effect on the economy of a substantial secular increase in the social desire to save.
This is, in fact, a much-disputed question in economics, and what effects such a shift might have (and whether those effects depend on whether current conditions are recessionary and whether we’re at the ZLB) are not the subject of much consensus. As a general point, though, Americans used to save much more, so if we decided to save more now, it may not have much economic impact at all.
But just because things used to be one way doesn’t mean that, under current conditions, they could simply be that way again – a lot has changed since the 1970s, and it might not be so simple to revert to saving that much. Tangentially, I’m really not a huge fan of the distinction between “saving” and “consuming” anyway, so maybe I’m not the perfect person to be breaking this down, but what the heck, let’s give it a shot:
What happens to the economy when the poor start saving more depends on what “savings” is in this context. Let’s start with some real data – the lowest quintile of Americans takes home 3.4% of national income, and therefore a one-percentage-point increase in the savings rate of “the poor” (defining that as coterminous with the bottom quintile of earners, which is totally not actually correct) would increase the total national savings rate by 0.034 percentage points. So we’re talking pretty small taters, frankly, which is really the key issue.
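The arithmetic, spelled out (both inputs are the figures from the paragraph above):

```python
income_share = 0.034   # bottom quintile's share of national income
savings_shift = 0.01   # a one-percentage-point rise in their savings rate

# The change in the national savings rate is just the product:
# income-weighted, the poor's extra saving barely moves the aggregate.
national_shift = income_share * savings_shift  # 0.00034, i.e. 0.034 points
```

Small taters, as promised.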
But beyond that, we can still discuss the theoretical side, which truly does depend on what “savings” means. If it means “putting the money into deposit accounts at banks,” then the aggregate effect of those savings depends on whether it increases loaned funds from that bank in an amount that equals or exceeds the forgone consumption. The reason the “paradox of thrift” doesn’t always hold is that the saved money “has to go somewhere,” which means it could (though not necessarily will) become someone else’s consumption (perhaps of a more durable good) that offsets the loss in more short-term-oriented consumption. So the “disequilibrium” that results could be a net loss in output and/or a sectoral shift in output, and how disequilibrium-izing you think that is depends on how PSST-y you think the economy is – or, more broadly, how inflexible it is, how high transaction and discovery costs are, how rooted labor markets are, that kind of thing.
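One way to see the “has to go somewhere” logic: treat the bank as a pipe from deposits back to spending, with some fraction leaking out. This is a toy sketch, not a model – `passthrough` is an assumed parameter, not anything measurable:

```python
def net_spending_change(saved, passthrough):
    """Change in aggregate spending when `saved` dollars of forgone consumption
    are deposited and a fraction `passthrough` is re-lent and spent by borrowers."""
    return -saved + passthrough * saved

# If banks fully re-lend the deposits, demand is merely reshuffled:
reshuffled = net_spending_change(100.0, 1.0)   # no net loss
# If lending doesn't keep pace, the paradox of thrift bites:
shortfall = net_spending_change(100.0, 0.6)    # net drop in spending
```

Whether the real-world `passthrough` is near 1 is precisely the disputed question.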
Anyway, the point is, while this is theoretically all interesting, my two conclusions from Ashok’s post are a) the net short-term economic impact of even a substantial shift in the savings preferences of the poor will be small and b) I still think the conclusion of my prior post was right because it wasn’t dependent on whether or not a) is true.
Ruminating on my earlier mortification, I realized that my mistake had actually stumbled accidentally onto something vaguely interesting. After checking my math (note to self: always check your math) and then asking FRED, I produced this lovely graph:
Assuming I haven’t fracked the pooch yet again, what this ought to be telling us is RGDP-per-capita divided by the population:worker ratio; i.e., the share of RGDP-per-capita that actually ends up in workers’ pockets assuming a perfectly even wealth distribution. This may or may not be interesting.
For those who crave trendlines: