The gist of the bullpucky score
Bullpucky measures the average degree of falsehood in an individual's or group's statements. It runs from 0 to 100. Zero means everything you've said that's been rated by fact checkers is true (or, at least, has been rated as true).
A score of one hundred means you're full of bullpucky. Or at least, everything you've said that's been rated by a couple of professional fact-checking groups is false.
We measure bullpucky from the report cards of two fact-checking systems: the Truth-O-Meter at PolitiFact.com, and the Pinocchio Tracker of The Washington Post's Fact Checker, Glenn Kessler.
Calculating bullpucky
Step 1:
Assign comparable values to each of the categories of each of the report cards.
Truth-O-Meter          | Pinocchio Tracker | Value
True                   | Geppetto          | 0
Mostly True            | One Pinocchio     | 25
Half True              | Two Pinocchios    | 50
Mostly False           | Three Pinocchios  | 75
False or Pants on Fire | Four Pinocchios   | 100
Step 2:
For each type of report card, count the number of statements in each of its categories.
Step 3:
For each type of report card, multiply the count in each category by the value of that category.
Step 4:
For each type of report card, sum the results from Step 3 over the categories.
Step 5:
For each type of report card, divide the result from Step 4 by the total number of statements on that report card. This gives a weighted average between 0 and 100 for that report card.
Step 6:
Average the results from Step 5 over the available report cards. Voila. Bullpucky. We average the two measures instead of adding up the statements from comparable categories because we are estimating the truthfulness rating across checker groups. So we want to give each fact checker equal weight. This is a rare case when averaging averages is the right thing to do.
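The six steps above can be sketched in a few lines of Python. The category counts and function names here are hypothetical illustrations, not real report cards:

```python
# Step 1: category values, on the same 0-100 scale for both checkers.
TRUTH_O_METER = {"True": 0, "Mostly True": 25, "Half True": 50,
                 "Mostly False": 75, "False or Pants on Fire": 100}
PINOCCHIO = {"Geppetto": 0, "One Pinocchio": 25, "Two Pinocchios": 50,
             "Three Pinocchios": 75, "Four Pinocchios": 100}

def report_card_score(counts, values):
    """Steps 2-5: weighted average of category values for one report card."""
    total = sum(counts.values())
    return sum(n * values[cat] for cat, n in counts.items()) / total

def bullpucky(truth_o_meter_counts, pinocchio_counts):
    """Step 6: average the two report-card scores, giving each checker equal weight."""
    scores = [report_card_score(truth_o_meter_counts, TRUTH_O_METER),
              report_card_score(pinocchio_counts, PINOCCHIO)]
    return sum(scores) / len(scores)

# Hypothetical report cards: PolitiFact scores 37.5, Kessler scores 50.
print(bullpucky({"True": 2, "Half True": 1, "False or Pants on Fire": 1},
                {"Geppetto": 1, "Four Pinocchios": 1}))  # → 43.75
```

Note that the averaging in the last step is over report-card scores, not over statements, which is what gives each fact checker equal weight.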
Extension to two or more report cards
Collated bullpucky
Collated bullpucky measures the average falsehood of the statements made collectively by a group of individuals. Simply sum up the number of statements in each category over all group members. Then measure bullpucky as if the group were one person. With collated bullpucky, the more statements an individual makes, the greater the influence that individual has on the group's bullpucky score. Because collating report cards increases the sample size of statements, we'll be less uncertain about collated bullpucky than about any individual's bullpucky.
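As a sketch of the collating step (the two members and their counts are made up for illustration, and only one checker's report card is shown):

```python
from collections import Counter

# Category values for one checker's report card.
VALUES = {"True": 0, "Mostly True": 25, "Half True": 50,
          "Mostly False": 75, "False or Pants on Fire": 100}

# Hypothetical Truth-O-Meter counts for two group members.
alice = {"True": 3, "Half True": 1}                          # 4 statements
bob = {"Mostly False": 4, "False or Pants on Fire": 2}       # 6 statements

# Collate: pool the counts, then score the pool as if it were one person.
pooled = Counter(alice) + Counter(bob)
score = sum(n * VALUES[cat] for cat, n in pooled.items()) / sum(pooled.values())
# Bob made more rated statements, so the pooled score sits closer to his.
print(score)  # → 55.0
```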
Average bullpucky
Average bullpucky measures the average falseness of individuals within a group. To estimate average bullpucky, calculate bullpucky for each individual in the group, then take the average over the group. With average bullpucky, every individual has the same weight of influence on the score. Probability theory says that the uncertainty in an average of values about which we are also uncertain will be greater than the uncertainty of those values alone.
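Using the same two hypothetical members as above, average bullpucky scores each person first and then averages the scores:

```python
# Category values for one checker's report card.
VALUES = {"True": 0, "Mostly True": 25, "Half True": 50,
          "Mostly False": 75, "False or Pants on Fire": 100}

def individual_score(counts):
    """Weighted average of category values for one person's report card."""
    return sum(n * VALUES[cat] for cat, n in counts.items()) / sum(counts.values())

group = [{"True": 3, "Half True": 1},                        # scores 12.5
         {"Mostly False": 4, "False or Pants on Fire": 2}]   # scores about 83.3
average_bullpucky = sum(individual_score(m) for m in group) / len(group)
# Each member counts equally, no matter how many statements they made.
```

Compare the two: on these made-up numbers, collating gives 55 (the prolific member dominates), while averaging gives roughly 47.9 (both members weigh the same).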
In defense of bullpucky
The bullpucky scale makes sense. True statements aren't bullpucky. False statements are 100% bullpucky. We evenly space out everything in between. You might say, "You can't assign a numeric value to truth!" To that I respond, "Yes, you can, so long as you know what assumptions you're making and you're not doing rocket science," and then add, "Plus, fact checkers implicitly do it anyway."
You might also say, "You don't know exactly how true something rated 'Mostly True' actually is!" You're right. I could just collapse everything into two categories, true and false. But that would be a disservice to the hard work fact checkers do counting the grains of truth in people's statements.
I assume that statements rated with four Pinocchios are completely false, which is reasonable; otherwise Kessler would need a five-Pinocchio rating. This implies that two Pinocchios make something half bullpucky, which is comparable to being half true. I also assume that each category between True, Half True, and False (and their equivalents) has an average level of truthfulness in between the categories on either side of it.
If you really want to make a federal case about it, contact me.
Just be prepared to have a rational discussion.
And keep learning about the methods by learning about how and why we measure uncertainty.