In this article, political polling guru Nate Silver uses numbers to show polling error (how many percentage points a poll missed the actual Election Day vote for a particular candidate), polling bias (how many percentage points a poll missed in a consistent direction, e.g. toward Republicans), and poll difference (in percentage points, from poll to poll).
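To make those two main metrics concrete, here is a minimal sketch of how error and bias differ. The function names and the poll numbers are my own invention for illustration, not Silver's data or method: error is the absolute size of the miss, while bias keeps the sign, so a poll that overstates the Republican margin shows up as a positive lean.

```python
# Hypothetical sketch of the two metrics: polling error is the absolute
# miss in percentage points; bias is the signed miss, where positive here
# means the poll leaned more Republican than the actual result.
# All numbers below are invented for illustration.

def polling_error(poll_margin, actual_margin):
    """Absolute miss, in percentage points."""
    return abs(poll_margin - actual_margin)

def polling_bias(poll_margin, actual_margin):
    """Signed miss: positive = poll overstated the Republican candidate."""
    return poll_margin - actual_margin

# Margins expressed as (Republican % - Democratic %), in percentage points.
poll = 4.0    # hypothetical poll showing R +4
actual = 1.0  # hypothetical result of R +1

print(polling_error(poll, actual))  # 3.0 -> the poll was 3 points off
print(polling_bias(poll, actual))   # 3.0 -> a 3-point Republican lean
```

A poll showing D +2 against that same R +1 result would have the same 3-point error but a bias of -3, which is the distinction Silver's red-and-blue chart encodes.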
In many ways, the numbers are overwhelming. I did not walk away from this article with more than what the headline and nut graf told me--that pre-election polls favored Republicans and that some pollsters are less accurate than others, particularly those that leaned furthest Republican.
The "Simple Polling Accuracy Analysis" helps make sense of the numbers by comparing them in a color-coded chart. But I'm left wondering: How is this rather long article--which repeatedly exceeds G.G.'s rule of having only two numbers per paragraph--more useful than its headline, first couple of paragraphs, and this chart?
I don't appreciate Silver's insistence on including so much written interpretation (maybe that's The Times's insistence, I'm not sure). Presenting more of the numbers and less analysis seems considerably more in line with Silver's recent criticism of political bias. In my view, his analysis devalues the numbers he so readily includes; I hardly need them if I simply follow Silver's (probably justifiable) praise of Quinnipiac and shaming of Rasmussen.
Silver is quick to cite the sources of his information (SurveyUSA, among others), without which he couldn't work. He also uses math in ways pollsters and news organizations all too often do not--that is, to interpret error and bias (see the red and blue in the chart). Without his math work, his report would be almost meaningless to the general public (or at least to j-schoolers like myself).
Still, if he made his analysis less wordy and let the numbers speak for themselves, interesting themes would resonate more effectively.