On Sunday, Gabriel Desjardins (of Behind the Net) was nice enough to run the Quality of Competition method I've been using for the AHL and compare it to his method for evaluating the Edmonton Oilers. The results were different, although not wildly divergent.
In any case, Desjardins' work, combined with this David Staples piece, spawned a bit of a discussion over at HFBoards. Staples thought my work was clearly wrong:
I'm still going over Desjardins' post, but it would seem to me that the Time On Ice-based system of measuring quality of competition got it right. For instance, I just can't see how Smid or Strudwick faced tougher quality of competition last year than Souray.
Meanwhile, HF commenter Giant Moo (no fan of the blogosphere) was critical of the process as a whole:
If "QoC" can be calculated in four wildly different ways with wildly different results with low correlation, then obviously it's an exercise in deception through numbers. It means that QoC is whatever you want it to be, meaning it's not scientific, meaning it's not useful as "proof" of anything.
With no disrespect intended, I think I can say that both are missing the point.
Desjardins asked in his post which system was best - comparing them head to head and asking which best reflected what actually happened on the ice last season. Staples strongly felt (and I agree) that Desjardins' existing system is much better. Meanwhile, Giant Moo feels that if four systems can produce four different results, then none of them are valid.
But it should be obvious that Desjardins' existing method isn't just better, but far superior. It should also be obvious to people like Giant Moo that not only is Desjardins' system highly accurate (as we've seen year over year with the Oilers), but that the system I proposed was by necessity a poorer substitute.
Let me explain.
I own a brand new digital SLR camera. It's the best camera I've ever owned; it takes magnificent pictures. These pictures are big files, and in full size they do as good a job as I've seen a camera do of recording what my eyes actually saw.
Now, if I take one of those photos and shrink it down to a thumbnail - say 1/10th size - it doesn't look so good. It's the same picture, but it's heavily distorted by being compressed into such a small space. This isn't a fault of the camera - but rather the format that we're using to view the picture, in this case a thumbnail.
It's the same thing with my Quality of Competition method. Since our information on the AHL is limited, I proposed a system similar to Desjardins', except that instead of using time on ice, I used goal events as a proxy. What's the difference? Let's use Ales Kotalik as an example (although you can do this with any player and get similar results). Last year, he spent 81% of his ice-time in even-strength situations - something that equates to roughly 1,250 shifts. 77 of those shifts resulted in a goal for or against.
Desjardins' method uses the ice time from all 1,250 shifts; my AHL method uses only the 77 goal events. In other words, Desjardins works from similar data, but his sample is roughly 16 times larger than mine.
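The arithmetic behind that "16 times larger" claim is simple enough to show directly. This is just my own back-of-the-envelope sketch using the Kotalik numbers quoted above:

```python
# Resolution comparison between the two QoC inputs, using the Kotalik
# figures from this post: ~1,250 even-strength shifts, 77 goal events.
ev_shifts = 1250    # shifts the time-on-ice method draws information from
goal_events = 77    # shifts my goal-event proxy actually "sees"

ratio = ev_shifts / goal_events
print(f"Time-on-ice sample is ~{ratio:.0f}x larger than the goal-event sample")
```

Running it confirms the ratio is a little over 16, which is where the "approximately 1600%" figure comes from.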
Obviously, because of the reduced sample size - even the Oilers' EV ice-time leader, Shawn Horcoff, only had 95 goal events - this measurement is going to be distorted, and needs to be taken with a grain of salt. Beyond the players right at the top of the scale, it's largely a guesstimate. This is expected.
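To make the distortion concrete, here's a rough illustration of my own (not part of either QoC method): if we treat each goal event as an independent observation, the uncertainty in any rate we estimate from it shrinks with the square root of the sample size. The 50% "true rate" below is a hypothetical placeholder, chosen only to show the gap between the two sample sizes.

```python
import math

def std_error(p: float, n: int) -> float:
    """Standard error of an observed proportion p over n independent events."""
    return math.sqrt(p * (1 - p) / n)

true_rate = 0.5            # hypothetical underlying rate, for illustration only
for n in (77, 1250):       # goal-event sample vs. full-shift sample
    se = std_error(true_rate, n)
    print(f"n = {n:4d}: standard error of about {se * 100:.1f} percentage points")
```

With 77 events the noise is several times larger than with 1,250, which is exactly why anything below the top of the scale should be read as a guesstimate.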
It's unfortunate that my AHL approximation isn't more accurate, but with the data at hand it's as good as I can make it. It shouldn't be relied upon religiously, but it is helpful for giving us an idea of what is going on down on the farm, and should be considered as one more tool rather than a definitive value.