“There are three kinds of lies,” Mark Twain once wrote, erroneously attributing this quote to British statesman Benjamin Disraeli: “Lies, damned lies, and statistics.” While nine out of ten people think that Twain created that axiom himself, 78 percent of all Americans rely on statistics provided to them by the media and by government. And easily 63.2 percent never check their provenance, which means that 88 percent of politicians get away with using them to create their own version of reality.
Don’t bother to check those statistics, though; they are admittedly the third species of mendacity identified in Twain’s quote. But they illustrate that the problem of misused statistics is real. Statistics should supply objective metrics for policymakers; instead, politicians and activists exploit and stretch them to bend reality to their own devices.
The need for statistical data is certainly no myth. The only rational way to understand and measure complicated systems is through metrics that produce hard data. That need exists in systems ranging from small operations such as data and call centers, which I managed for years, to large government bureaucracies, and especially in risk-pool management as conducted by insurers in every application.
The key is to know the scope and the limits of those measurements and to understand their meaning to the whole of the operation. Otherwise, it’s too easy to see just the statistics that suit one’s own purposes, or to deliberately cherry-pick them to advance one’s own agenda.
Nowhere has this impulse been more often displayed than with the Obama administration and Obamacare. That’s been true since the Affordable Care Act was introduced, although the distortion of statistics has ramped up since the disastrous rollout of the Obamacare exchanges. Starting almost immediately after it became apparent that the enrollment figures would result in embarrassment, the White House began to offer context-free statistical claims to declare victory in place of failure.
First came the mendacious claims of four million Medicaid enrollments, which the Obama administration cited as evidence of Obamacare’s success. However, this didn’t take several factors into consideration, including the fact that many of these enrollees would have qualified with or without Obamacare. In fact, as Sean Trende quickly discovered, only 1.9 million of the enrollees came from states that had expanded Medicaid under Obamacare, and the government didn’t have any data as to how many of those only qualified under the expanded eligibility.
Washington Post fact-checker Glenn Kessler gave himself three Pinocchios for passing along that error, but had to give Barack Obama four Pinocchios just a month later when he upped the ante to seven million, using the same misleading statistical data. HHS didn’t have better statistical data because, as it turns out, they didn’t bother to compile it. The statistical nonsense has continued throughout the open-enrollment period with the White House’s conflation of online signups and actual enrollments, where again HHS failed to include in its system a measure of how many signups resulted in paid-premium enrollments.
This dishonesty with numbers continued last week as Obama himself claimed, “thirty-five percent of people who enrolled through the federal marketplace are under the age of 35.” Kessler again called foul, noting that not only did this distort the actual data from the White House data sheet (where the figure was actually 28 percent), but also ignored the fact that the Obama administration itself was on record that it needed the number to be 40 percent for the sake of the risk pools.
“By the time the dust settled, the original 40 percent goal was largely forgotten,” Kessler wrote, “as well as the fact that the final 28 percent figure was only slightly better than the 27 percent achieved in March.”
There have been other complaints about the use of government statistics that don’t relate to Obamacare. Employment statistics generate a lot of controversy in a stagnant economy, especially the increasingly irrelevant unemployment rate.
The U-3 measure used for that statistic uses the civilian labor force as its denominator, so it is only comparable over time when workforce participation holds steady. The sharp decline in participation over the last four years means that the U-3 series is much less reliable as a measure of joblessness in the American population, especially compared to U-3 results before the Obama recovery began in June 2009.
However, in those cases the issue is less the statistics themselves than it is the focus on which statistic to use. The Bureau of Labor Statistics provides a robust set of metrics, most of which have decades of historical data. The U-6 measure takes into account the decline in the workforce and provides a much clearer trend line for joblessness, but the media and analysts seem uninterested in using those figures instead. That’s hardly the fault of the Obama administration, even if it benefits from the false comparisons the U-3 series provides.
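The participation effect described above is easy to see with a back-of-the-envelope calculation. This is a minimal sketch with invented round numbers (not actual BLS data), showing how the headline U-3 rate can fall when discouraged workers simply stop looking, even though not a single job has been added:

```python
# Hypothetical illustration; the figures below are invented, not BLS data.
# U-3 counts only people actively looking for work, so when discouraged
# workers leave the labor force, the rate can drop with employment unchanged.

def u3_rate(unemployed, employed):
    """U-3: unemployed as a share of the labor force (employed + unemployed)."""
    return unemployed / (employed + unemployed)

# Month 1: 10M unemployed, 140M employed -> 10/150, roughly 6.7%
before = u3_rate(10_000_000, 140_000_000)

# Month 2: 2M give up searching and exit the labor force entirely.
# Employment is unchanged, yet the headline rate falls to 8/148, about 5.4%.
after = u3_rate(8_000_000, 140_000_000)

print(f"U-3 before: {before:.1%}, after: {after:.1%}")
```

The denominator shrinks along with the numerator, which is exactly why a falling U-3 is ambiguous when participation is also falling; the U-6 series avoids part of that problem by counting discouraged and marginally attached workers.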
When complete data exists, though, this White House has no problem using statistics dishonestly, even after repeatedly being called out on the practice. The administration’s attempt to whip up a frenzy on pay equity provides a clear example of this impulse. Obama has returned to the “war on women” campaign with a specious claim that women only earn 77 cents on the dollar compared to men, pointing to Census Bureau data for confirmation.
However, that data does not compare equal work; rather, it just aggregates income by gender regardless of how many hours are worked, what kind of job is held, seniority, and so on.
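The aggregation problem is easy to demonstrate with a toy dataset. In this minimal sketch (the occupations and salaries are invented for illustration, not Census figures), men and women are paid identically within each occupation, yet the aggregate median ratio still comes out well below one because the genders are distributed differently across jobs:

```python
# Hypothetical illustration; salaries and occupations are invented, not
# Census data. Comparing aggregate medians by gender is not the same as
# comparing pay for equal work.
from statistics import median

# Each record: (gender, occupation, annual salary). Within each
# occupation, men and women here earn exactly the same amount.
workers = [
    ("M", "engineer", 90_000), ("F", "engineer", 90_000),
    ("M", "engineer", 90_000),
    ("M", "teacher", 50_000),  ("F", "teacher", 50_000),
    ("F", "teacher", 50_000),
]

men = median(s for g, _, s in workers if g == "M")
women = median(s for g, _, s in workers if g == "F")

# The aggregate ratio is ~0.56 even though pay per job is equal,
# purely because of how the genders are distributed across occupations.
print(f"aggregate median ratio: {women / men:.2f}")
```

An aggregate gap like this measures occupational and hours differences as much as anything else, which is precisely the objection to treating the 77-cent figure as a measure of unequal pay for equal work.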
The 77-cent myth has been repeatedly debunked, so much so that even Obama’s allies began objecting to the “revolting equal-pay demagoguery.” The National Republican Senatorial Committee used the White House formula on pay equity to note that Obama only pays women 88 cents for every dollar earned by men, and that Democrats running for the Senate perform far worse. Slate’s John Dickerson wondered whether lying was a deliberate strategy, akin to the axiom that there’s no such thing as bad publicity.
The federal government applies policies and exercises authority on a vast scale, far too broad to see the impact from one limited vantage point. Citizens need reliable metrics to judge policy and regulation on a rational basis. As government grows larger, though, the temptation to distort those rational measures has increased, especially in this administration – and that undermines confidence in authority in general and in big-government accountability in particular.
If we can’t trust the big-government activists to tell us the truth, then there’s a 100 percent chance we can’t trust the institution they represent.