On Sunday, PolitiFactBias.com posted what they call a "semi-smackdown" of my claim that they provide little quantitative evidence that PolitiFact has liberal bias.

I want to thank PolitiFactBias for engaging me in a rational debate. (I'm serious. This is good!) To show how grateful I am, I'm going to systematically tear their semi-smackdown to shreds. In the process, I will clear up points of confusion that PolitiFactBias.com (PFB.com) has about who I am, and about Malark-O-Meter's methods.

**1. "Our pseudonymous subject goes by 'Brash Equilibrium.'"**

My name is Benjamin Chabot-Hanowell, which I share because I value transparency. Still, I prefer the Internet to know me as Brash Equilibrium, and I don't mind if people call me Brash in meatspace. So yes, call me Brash, not Benjamin.

**2. "Brash goes through the trouble of adding Kessler's Pinocchios together with PolitiFact's 'Truth-O-Meter' ratings..."**

I don't add the two types of report card together. Doing so would bias the estimate heavily in favor of PolitiFact, which posts many times more rulings than Kessler and is harder on Republicans than he is. Instead, I calculate a malarkey score from each fact checker's report card (or collated report card, or subset of statements) and then average the two scores, which gives the two fact checkers equal weight. I don't do this for my debate analyses because Kessler doesn't issue separate rulings for each statement made during the debates.
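The difference between pooling and equal weighting can be sketched in a few lines. The category weights and rating counts below are illustrative placeholders, not Malark-O-Meter's actual formula or data:

```python
# Hypothetical sketch: equal-weighting two fact checkers rather than
# pooling their ratings. Weights (0 = fully true, 1 = fully false) and
# counts are made up for illustration.

def malarkey_score(counts):
    """Weighted falsehood score (0-100) from a ratings report card."""
    weights = {"true": 0.0, "mostly_true": 0.25, "half_true": 0.5,
               "mostly_false": 0.75, "false": 1.0}
    total = sum(counts.values())
    return 100 * sum(weights[c] * n for c, n in counts.items()) / total

# PolitiFact posts many more rulings than Kessler (illustrative tallies):
politifact = {"true": 30, "mostly_true": 25, "half_true": 20,
              "mostly_false": 15, "false": 10}
kessler = {"true": 2, "mostly_true": 3, "half_true": 4,
           "mostly_false": 3, "false": 2}

# Equal-weight average of per-checker scores (what the text describes):
avg = (malarkey_score(politifact) + malarkey_score(kessler)) / 2

# Naive pooling, by contrast, lets the larger sample dominate:
pooled = {c: politifact[c] + kessler[c] for c in politifact}
pooled_score = malarkey_score(pooled)
```

With these made-up numbers, the pooled score sits much closer to PolitiFact's score alone, which is exactly the dominance the averaging scheme avoids.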

**3. "...and then calculates confidence intervals for various sets of ratings, based on the apparent assumption that the selection of stories is essentially random."**

My confidence intervals don’t assume anything about the selection of stories. What they do assume is that fact checkers assemble a sample of statements from a population of statements, which results in sampling error. The population from which those statements are drawn could be everything that individual or group says, or it could be the population of statements susceptible to whatever selection biases the fact checkers have. Either way, the basic mechanics of the confidence interval calculation are the same. The question is whether I have parameterized my sampling distribution properly. Basically, PFB.com is saying that I haven't.
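For concreteness, here is a minimal sketch of the kind of interval calculation at issue: put a Dirichlet posterior on the category proportions with concentration parameters equal to the observed counts plus one, sample proportions from it, and take percentiles of the resulting malarkey scores. The counts and category weights are made up for illustration:

```python
# Minimal sketch of a Dirichlet-based credible interval for a
# malarkey score, assuming concentration = counts + 1 (a uniform
# prior). Counts and weights are illustrative, not real data.
import numpy as np

rng = np.random.default_rng(0)

counts = np.array([30, 25, 20, 15, 10])        # true ... false
weights = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # falsehood weights

# Posterior over category proportions: Dirichlet(counts + 1)
draws = rng.dirichlet(counts + 1, size=10_000)

# Malarkey score for each posterior draw, then a 95% interval
scores = 100 * draws @ weights
lo, hi = np.percentile(scores, [2.5, 97.5])
```

Nothing in this calculation assumes the statements were chosen at random from everything a politician says; it only quantifies the sampling error around whatever population the fact checker is, in effect, sampling from.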

But what would PFB.com have me do? Introduce a prior probability distribution on the concentration parameters of the Dirichlet that isn't equal to the counts in each category plus one? Where would my prior beliefs about those parameters come from? From PFB.com's allegations that PolitiFact cherry-picks liberal statements that are more likely to be true, and conservative statements that are more likely to be false? Okay. What model should I use to characterize the strength of that bias, and its separate effects on conditional inclusion in each category?

We don’t know what model we should use because no one has statistically analyzed fact checker rating bias or selection bias, and that is the point of my article. Until someone does, we can only estimate how much bias *might* exist. To do this, we perform a thought experiment in which we assume that I am measuring fact checker bias instead of real differences among politicians. In doing so, I gave PFB.com two figures that it is free to use to support its argument that PolitiFact is biased (it will also have to assert that Glenn Kessler is biased; look for PolitiFactAndTheFactCheckerBias.com soon!).

Meanwhile, I am free to use my findings to support my argument that the Republican ticket is less factual than the Democratic ticket. The truth probably lies somewhere among those two extremes and a third: that fact checkers have a centrist bias, as partisan liberals allege. For now, we don’t know exactly where the truth lies within that simplex of extremes. Although PFB.com's qualitative analysis suggests there might be some liberal bias, its authors rhetorically argue that there is *a lot* of bias. They actually argue that it's *all* bias! Yet they present no statistical estimates of bias that cannot also be interpreted as statistical estimates of true differences.

**4. "It's a waste of time calculating confidence intervals if the data set exhibits a significant degree of selection bias."**

Item 3 soundly defended my methods against this criticism. In sum, it is not a waste of time. What is a waste of time? Assuming that you know how biased an organization is when you've no conclusive estimate of the strength of that bias whatsoever.

**5. "Our case against PolitiFact is based on solid survey data showing a left-of-center ideological tendency among journalists, an extensive set of anecdotes showing mistakes that more often unfairly harm conservatives and our own study of PolitiFact's bias based on its ratings."**

Survey data that shows journalists tend to be liberal doesn't automatically allow you to conclude that fact checker rulings are all bias. It doesn't give you an estimate of the strength of that bias if it exists. All it does is give one pause. And, yeah, it gives me pause, as I stated in my article when I conceded that there could be as much as 17% liberal bias in fact checker rulings!

**6. "Our study does not have a significant selection bias problem."**

I highly doubt that. That PFB.com makes this assumption about its research, which relies heavily on blog entries in which it re-interprets a limited subset of PolitiFact rulings, makes me as suspicious of it as it is suspicious of PolitiFact.

**7. "Brash's opinion of PolitiFact Bias consists of an assertion without any apparent basis in fact."**

And I never said it did. That is, in fact, the whole point of my article. Similarly, however, PFB.com's rhetoric about the strength of PolitiFact's bias has little evidentiary support. At least I recognize the gaps in my knowledge!

My methods, however, have much stronger scientific foundations than PFB.com's.

**8. In response to one of my recommendations about how to do better fact checking, PFB.com writes, "How often have we said it? Lacking a control for selection bias, the aggregated ratings tell us about PolitiFact and The Fact Checker, not about the subjects whose statements they grade."**

No. They tell us about both the subjects whose statements they grade and about the raters. We don't know the relative importance of these two factors in determining the results. PFB.com thinks it does. Actually, so do I. Our opinions differ markedly, and neither is based on a good estimate of how much bias there is among fact checkers.

Subjectively, however, I think it's pretty ridiculous to assume that it's all just bias. But I guess someday we'll see!

**9. "We need fact checkers who know how to draw the line between fact and opinion."**

Sorry, PFB.com, you're never going to get that. What we actually need is a statistical method to estimate the influence of political beliefs on the report cards of individuals assembled from the rulings of professional fact checkers, and then a statistical method to adjust for that bias.

**10. "And critics who know enough to whistle a foul when "fact checkers" cross the line and conflate the two."**

Yes. People like you and Rachel Maddow (strange bedfellows, to be sure!) are valuable whistleblowers. But your value isn't in estimating the strength of political bias among fact checkers.

**UPDATE (same day):** PFB.com and I fling more poo at one another here.