The belief that economics has become politicized is a big reason the general public has lost faith in the ability of economists to give advice on important policy questions. For most issues, such as raising the minimum wage, the effects of government spending, international trade, and whether CEOs deserve their high compensation, it seems as though economists who also happen to be Republicans will mostly line up on one side of the issue, while economists who are Democrats mostly take the other. Members of the general public, not knowing whom to believe and unable to rely upon the press to sort it out, either throw up their hands in frustration or follow the side that agrees with their preconceived notions and ideological beliefs.
But why is it so hard to sort out? Why can’t the press do a better job of avoiding “he said – she said” reporting and give the public direct and specific answers to these important policy questions? One reason is the “mathiness” that has infected our economic models, something economist Paul Romer recently identified as a big problem with economic theory.
What is mathiness? Berkeley’s Brad DeLong defines it as “restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.”
That is, building assumptions into a model to produce a particular result, usually one that runs counter to other models and supports an opposing ideology, and then obscuring those assumptions with math. The math in these models is impenetrable to most members of the press and the general public, so they must rely upon economists to tell them which set of models to believe. And what they hear and report is that “economists disagree,” leaving the general public free to adopt whichever viewpoint is consistent with their beliefs.
What is the solution to this problem? Solid empirical evidence would help. Physicists can build all sorts of crazy models with strange results that hang together perfectly well mathematically, but the models can be ruled out by experimental evidence.
Physicists cannot assume whatever they want in order to produce an interesting or counterintuitive result; the assumptions must be consistent with the experimental evidence. Why isn’t the same true in economics? Why don’t the data tell us which key assumptions are right? Why is there so much debate about whether prices and wages are sticky, whether government spending multipliers are big or small, whether markets should be modeled as competitive, and so on?
In many cases, the data sets are too small to answer these questions. Unlike physicists, economists cannot do experiments in a laboratory to test their theories. We can’t, for example, rerun the economy thousands of times with randomized, controlled doses of monetary and fiscal policy to precisely determine the effects of each. Instead, macroeconomists must rely upon historical data, and that generally means data sets going back just a few decades. For example, for questions about the effects of monetary policy on GDP, the data usually begin in the early 1980s, and they are only available quarterly. That results in data sets of fewer than 150 observations, which is generally not enough to deliver precise answers. With all the talk about “big data,” macroeconomists are hampered by the problem of “little data.”
When the data do not fully determine the appropriate modeling assumptions – when there is evidence on both sides of an issue – we ought to be open to models that make both types of assumptions. However, too often one side will insist, for example, that prices and wages are perfectly flexible or markets are perfectly competitive, and deride models that make other assumptions. When this is done by some of the leading figures within the profession, it gives the false impression that one type of model – and the associated policy implications – is superior to another.
In many other cases, the data do point in a particular direction, but this is ignored or denied because it gives results that disagree with someone’s previous work, goes against their political leanings, or contradicts their preconceived conclusions. The tactic in this case is to cite the few papers that support your position while ignoring, dismissing, or clouding the considerable amount of evidence that points in the other direction. This cherry-picking and obfuscation of the evidence leaves the impression that there is uncertainty over issues that are largely settled. To me, this is one of the more frustrating aspects of communicating economic policy to the general public.
We must find a way to make it clear what the preponderance of evidence says about important policy decisions. Far too often, confusion about the degree to which economists are unified, or not, clouds the public debate. Somehow, and surveys such as the IGM Economic Experts Panel are a start, we must do a better job of communicating the general view within the profession about important policy issues.
In doing so, we will hopefully take a step toward restoring the public’s trust that our policy recommendations are based upon solid evidence rather than ideology, preconceived beliefs, or cliquish political infighting among economists. As Paul Romer says, “It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.”