Comparative analysis is misleading
Cision's list is misleading because we know it to be wrong: Iain Dale and Guido Fawkes both publish their monthly statistics online (as do many others), and those published figures don't square with Cision's ranking.
There is no reliable way of judging how many people read a blog without accessing the analytics package for each blog. Those that monitor audience figures by sampling (such as Alexa) do so with a very skewed sample. I know this to be true in at least 30 instances, where I have been able to compare Alexa's rankings against the blogs' actual statistics. I don't know what methodology Cision used, but it is apparently flawed.
Comparative analysis is unhelpful
Why is it useful to know whether Guido Fawkes is read by 10,000 more people than Iain Dale (for the sake of argument)? If an article about a Cision client appears on either blog, it's significant. In terms of reputation management, does it matter whether something appeared in The Times rather than the Daily Telegraph? Not much, unless there's a specific demographic at play, in which case knowing the raw readership numbers is of limited use anyway.
The Newscounter method has its weaknesses too, I'm sure. Any single metric that judges a complex environment has its drawbacks. But by ranking blogs as critical, high, medium or low impact, we give a specific enough indication of whether a post affects your reputation, without producing a tortuous and supposedly precise measure.
If you disagree, do let me know. One of the challenges of measuring PR impact is that so few people agree on the usefulness of one measure over another, so all measures are approached with a degree of cynicism. A consensus around a smaller number of measures would at least give the industry a tool for comparing apples with pears.
UPDATE: Chris Paul has drawn attention to the methodology used by Cision. It is more complicated than my post suggests, although the truth with these league tables is that, no matter how complicated the algorithm, the end result has to make sense. I don't think this league table passes that intuitive test.