Arguments for stimulus and how to counter them

It has now been five years since the passage of the American Recovery and Reinvestment Act of 2009. Several commentators are claiming that the stimulus has been successful despite its unpopularity, or that it was at least an improvement over a laissez-faire approach. Let us examine these claims.

The Federal Reserve, the TARP program, and the auto industry bailout prevented an economic collapse. The stimulus saved or created 6 million jobs.

Such claims are an example of the broken window fallacy. While they point out the jobs that were created by the stimulus, they ignore the jobs that could have been created if the money had been left in private hands. Many supporters of Keynesian stimulus have anticipated this argument and have offered rebuttals. Let us examine some of these.

The broken window fallacy does not apply because desired savings are not automatically converted into investment spending, so that government borrowing need not come at the expense of private investment.

To some extent, this is a fair point. The broken window argument is necessary but not sufficient to explain why stimulus is ineffective. Because of the nature of debasement and borrowing, another argument is needed to explain why exploiting the velocity of money and stealing from the future to feed the present will not be effective.

Many proponents of austerity will say that the stimulus money must come from somewhere, but this is not true. Because money in modern economies is fiat, it can be created out of thin air. But its value must come from somewhere. In the case of currency debasement (also known as quantitative easing), the value is extracted from all existing money in the economy. Because this extraction of value takes time to occur, the original spenders get to spend the new money at its current value, while those farther down the line receive it only at its debased value. This effect helps to make the rich richer and the poor poorer, and is thus a form of income inequality caused by governments.
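To make the mechanism concrete, here is a stylized sketch of the argument above. It assumes a simple quantity-theory world in which real output is fixed and the price level eventually rises in proportion to the money supply; the dollar figures are hypothetical, chosen only to illustrate the dilution and the first-spender advantage.

```python
# Stylized illustration of the debasement argument, assuming a simple
# quantity-theory world: real output is fixed and the price level eventually
# rises in proportion to the money supply. The figures are hypothetical.

money_supply = 1_000_000   # existing money in the economy (arbitrary units)
new_money = 100_000        # newly created stimulus money (a 10% expansion)

# Once prices fully adjust, each unit of money buys less:
dilution = money_supply / (money_supply + new_money)
print(f"Purchasing power of pre-existing money: {dilution:.1%} of its old value")
# -> about 90.9%; the missing ~9.1% is the value extracted from existing holders.

# The first spenders of the new money buy at old prices and capture its full
# face value; those who receive it only after prices adjust can buy less.
real_value_to_first_spender = new_money              # spent at old prices
real_value_to_late_receiver = new_money * dilution   # spent at new prices
print(f"Real value captured by first spenders: {real_value_to_first_spender:,.0f}")
print(f"Real value left for late receivers:    {real_value_to_late_receiver:,.0f}")
```

The earlier a party stands in the chain of spending, the more of the new money's face value it captures; the loss falls on those who hold existing money or who receive the new money only after prices have adjusted.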

In the case of government borrowing and deficit spending, the value is extracted from the future productivity of the tax base, thereby selling the unborn into debt slavery. Thus, while stimulus does increase the resources used at present, this comes at the expense of having fewer resources to use in the future. The effect on the economy is much like the effect on a farmer who runs out of food in the winter and decides to eat his seed crop. He will be better off for a time while his supplies last, but he is consuming in the present the seed he will need next spring to provide for himself in the future. (This metaphor may fly over the heads of some younger farmers, but those who were farming before seed companies used the legal fiction of intellectual property to force farmers to purchase seeds from them each year will understand.)

Governments had to spend because the private sector would not.

Spending occurs in the private sector when people have money to spend and believe that they will receive a reasonable return on investment. The problem is that a large amount of private capital was lost in the housing bubble. This bubble was caused by Federal Reserve policies from 2000 to 2004 which, true to Austrian business cycle theory, set off a wave of malinvestments in housing that collapsed in 2008, following a tightening of monetary policy in 2006. As the Federal Reserve is authorized by Congress to perform its manipulations of the economy, we are left with the idea that government intervention is the solution to a problem created by government intervention, which is a contradiction.

Now let us consider what happens when governments spend money because the private sector will not. First and foremost, there is cronyism. The politically connected members of the 1 percent are most likely to receive stimulus funds. There is also a lack of profit motive. Governments do not have to earn a reasonable return on investment because they obtain their funds at gunpoint rather than by voluntary means, and they have excluded all competitors from their geographical areas. Thus, they have no incentive to invest wisely with stimulus funds, and they frequently do not. These malinvestments, in the worst cases, can help to fuel the next business cycle.

Spending cuts since 2011 have destroyed jobs. A sort of reverse broken window fallacy is at work here, as one cannot see the jobs that would have been saved if higher levels of spending had continued.

There is something to this argument, even though there is no way to be sure that spending cuts destroyed jobs, because we have no control group to observe. But it must be remembered that jobs sustained by elevated government spending are temporary jobs that will end whenever that spending ends. Eventually, such spending must end in one way or another, so these jobs were destined to be lost at some point. But again, we have the broken window fallacy of ignoring what the private sector could do if money were not taken out of it to fund stimulus, as well as the seed crop consumption problem if the stimulus is funded by debasement or borrowing.

Now let us consider two more arguments:

Europe tried an austerity approach that failed, and this provides a control group which shows what happens without stimulus.

Quite simply, Europe did not engage in austerity. There have been riots in protest of proposed austerity measures, as well as theft from the bank accounts of private citizens to help keep government spending going, but very little in the way of actual spending cuts. According to Eurostat, only eight countries in Europe (out of 30) cut government spending between 2008 and 2012. And even if there had been genuine austerity with accompanying economic hardship, short-term losses are to be expected as the economy adjusts to the absence of stimulus, and those adjustments produce long-term gains.

Also note that unlike the US, individual European countries cannot use central bank manipulations to attempt to solve their problems. But this is probably a good thing, as will be shown below.

The stimulus should have been larger, but doing something was better than laissez-faire.

The historical evidence does not support this assertion. Comparing the laissez-faire approach used from 1887 to 1913 with the Keynesian approach used from 1933 to the present, we find that while stimulus by governments and central banks spaces recessions out to an average of one every 5.7 years instead of one every 3.9 years, the recessions that do occur are much deeper. Measured in percentage-point months of industrial production lost until the previous peak is regained (PPM), the Keynesian era averages 46.11 PPM of damage per year versus 40.98 PPM per year under the Austrian approach. Multiplying the average interval between recessions by the average damage per year gives the average damage per recession: roughly 263 PPM under the Keynesian approach versus roughly 160 PPM under the Austrian approach, which makes a Keynesian-managed recession about 64 percent worse on average than an Austrian-managed recession.

Using the same method as Adorney, we find that the period of monetary stimulus only, from 1914 to 1932, resulted in a recession every 3.2 years on average, doing 137.3 PPM of damage per year, or roughly 439 PPM per recession. This makes an average recession managed solely by a central bank 67 percent worse than an average Keynesian-managed recession and 175 percent worse than an average Austrian-managed recession. Clearly, central banking alone produces by far the worst results. While a period of government stimulus without monetary stimulus would be interesting to examine, no data for such a period exists.
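For readers who want to check the arithmetic, the sketch below reproduces these comparisons using only the figures quoted above (average years between recessions and PPM of damage per year). It assumes, as the comparisons imply, that the damage per average recession is simply the interval between recessions multiplied by the damage per year; the regime labels are shorthand for the periods discussed.

```python
# Minimal sketch of the arithmetic behind the recession comparisons above.
# Figures are taken directly from the text; "PPM" is percentage-point months
# of industrial production lost until the previous peak is regained.

regimes = {
    "Austrian (1887-1913)":      {"years_between": 3.9, "ppm_per_year": 40.98},
    "Keynesian (1933-present)":  {"years_between": 5.7, "ppm_per_year": 46.11},
    "Monetary only (1914-1932)": {"years_between": 3.2, "ppm_per_year": 137.3},
}

# Damage per average recession = interval between recessions x damage per year.
per_recession = {
    name: r["years_between"] * r["ppm_per_year"] for name, r in regimes.items()
}
for name, ppm in per_recession.items():
    print(f"{name}: {ppm:.0f} PPM per average recession")

def pct_worse(a: str, b: str) -> float:
    """How much worse (in percent) regime a is than regime b, per recession."""
    return 100 * (per_recession[a] / per_recession[b] - 1)

print(f"Keynesian vs Austrian:      {pct_worse('Keynesian (1933-present)', 'Austrian (1887-1913)'):.0f}% worse")
print(f"Monetary only vs Keynesian: {pct_worse('Monetary only (1914-1932)', 'Keynesian (1933-present)'):.0f}% worse")
print(f"Monetary only vs Austrian:  {pct_worse('Monetary only (1914-1932)', 'Austrian (1887-1913)'):.0f}% worse")
```

Running it gives roughly 160, 263, and 439 PPM per recession for the Austrian, Keynesian, and monetary-only periods, which is where the 64, 67, and 175 percent figures come from.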

As with the seed crop consumption analogy discussed earlier, we see a short-term gain followed by a long-term loss. Increasing the amount of stimulus to get past one recession sets the stage for a much worse long-term recession, one which may not be responsive to stimulus.
