I generally don't like writing pieces like this. The tone inherent in them usually comes off as pretentious, combative, and unwelcoming to alternative ideas. So I want to be clear from the outset: The purpose of this piece isn't to dump on anyone; it's merely to clear up something that nags at me incessantly, because it creates expectations among readers — expectations that are, in general, built on tenuous foundations.
So, here we go: This post at Lacrosse All Stars is kind of a mess. I'm not going to spend 2,000 words picking it apart, but I am going to deconstruct a bunch of it to explain why it doesn't accomplish what it set out to do. Also, a warning: There are mathematical methodology issues lying directly ahead; if you're only here for the jokes, you may want to blink and mosey along to the next thing on the site.
The major problems I have with the piece lie in three premises:
- The piece is billed as "using MATH to make predictions," yet an awful lot of "not math" takes place;
- When the piece does use math, it uses bad math to sell some kind of formula (an "algorithm," as the piece describes it) for predicting performance outcomes; and
- A lot of the work isn't shown or described, leaving assumptions floating in a vacuum.
The underlying math in the piece -- margin (offensive strength against defensive strength, adjusted for strength of schedule) -- is reasonable in concept but shaky in execution. I've written about it at least a thousand times, but using per-game statistics to do any kind of performance analysis is silly; the pace and tempo baked into per-game statistics skew actual production to the point that those measures become almost useless. That's why tempo-free statistics -- possession-based statistics -- work so much better when trying to measure production: On a possession basis, you figure out what a team is doing each time it has, or is defending, the ball. Whether a team plays 100 possessions in a game or just 10, the per-possession measures aren't artificially inflated or deflated the way per-game measures are. So the author is already starting from a flawed foundation, even if he is barking up the right tree in looking at adjusted margin as his starting point.
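To make the per-game versus per-possession distinction concrete, here's a minimal sketch in Python. The teams and numbers are entirely hypothetical (not drawn from the piece or from any real season): two made-up squads score on the same share of their possessions, but the faster team plays far more possessions per game.

```python
# Hedged illustration with invented data: two hypothetical teams with identical
# per-possession scoring but very different tempo. Per-game numbers make the
# slow team look worse; per-possession numbers show they're equally efficient.

def goals_per_game(goals, games):
    return goals / games

def goals_per_possession(goals, possessions):
    return goals / possessions

# Hypothetical 14-game seasons: an up-tempo team (~40 possessions/game) and a
# deliberate team (~28 possessions/game), both scoring on 37.5% of possessions.
fast = {"goals": 210, "games": 14, "possessions": 560}
slow = {"goals": 147, "games": 14, "possessions": 392}

print(goals_per_game(fast["goals"], fast["games"]))              # 15.0 goals/game
print(goals_per_game(slow["goals"], slow["games"]))              # 10.5 -- "looks worse"
print(goals_per_possession(fast["goals"], fast["possessions"]))  # 0.375 goals/possession
print(goals_per_possession(slow["goals"], slow["possessions"]))  # 0.375 -- identical
```

Per-game averages say the slow team's offense is 30 percent weaker; per-possession numbers say the two offenses are indistinguishable. That's the whole tempo-free argument in four lines.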
(For the record, I use adjusted efficiency margin to "rate" teams. It's a pretty fair measure to determine which teams are strongest top-to-bottom. I only argue with the author's utilization of measures in this context, not necessarily the methodology with respect to this.)
To further complicate matters, the author never explains how he's adjusting margin for strength of schedule. Is he using RPI strength of schedule? Efficiency strength of schedule? Perceived value of a schedule? I need to see the work here, especially given the shot taken at Massachusetts -- that its schedule hurt the Minutemen in the analysis -- even though on an efficiency basis Greg Cannella's team played a pretty decent slate in 2012 (it ranked 27th in the country in opposing efficiency margin). As you can see, points are being made in a non-delineated vacuum on a weak base. That's my concern: It feels like strength of schedule here is built on feelings rather than an Excel spreadsheet.
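For contrast, here's what "showing the work" on a schedule adjustment can look like. This is a generic sketch of a simple rating system (each team's rating equals its average goal margin plus the average rating of its opponents, solved by fixed-point iteration) -- a standard approach, not the piece's undisclosed formula -- and the game results below are invented for illustration.

```python
# Hedged sketch of a simple rating system: rating = average goal margin
# + average opponent rating, solved by iteration and re-centered each pass
# so the ratings sum to zero. All teams and scores here are hypothetical.

def schedule_adjusted_ratings(games, iters=200):
    """games: list of (team_a, team_b, goal margin from team_a's perspective)."""
    margins, opponents = {}, {}
    for a, b, m in games:
        for team, opp, margin in ((a, b, m), (b, a, -m)):
            margins.setdefault(team, []).append(margin)
            opponents.setdefault(team, []).append(opp)
    rating = {t: 0.0 for t in margins}
    for _ in range(iters):
        new = {
            t: sum(margins[t]) / len(margins[t])
               + sum(rating[o] for o in opponents[t]) / len(opponents[t])
            for t in margins
        }
        shift = sum(new.values()) / len(new)  # re-center the scale on zero
        rating = {t: r - shift for t, r in new.items()}
    return rating

# Hypothetical round-robin: A beat B by 5, B beat C by 5, A beat C by 10.
print(schedule_adjusted_ratings([("A", "B", 5), ("B", "C", 5), ("A", "C", 10)]))
```

Every input and every step is visible, so anyone can check whether a team's schedule actually helped or hurt it. That's the transparency the piece is missing.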
Even if we assume this is all okay, the writer then proceeds to state that Johns Hopkins, Duke, and Syracuse are the "highest 'rated' teams in 2013." What does that mean? 2012 activity projected into 2013? I have a hard time understanding that, as no methodology was given for how returning production, newcomers -- freshmen and transfers -- or 2013 schedules factor into the "formula" (and given that all three of these teams have yet to release their schedules for next spring, the schedules were likely not a consideration). Even if we assume this "projection" is based on 2012 activity, there are flaws here that appear to be filled in with gut opinions or other non-math-based positions. Pertinently, using efficiency information from 2012 (which is the best way to gauge production), Duke finished eighth in adjusted efficiency margin, the Blue Jays were 11th, and Syracuse finished 15th in the same measure. Loyola and Maryland -- which are returning as much talent as anyone in the country -- finished 2012 ranked second and third in adjusted efficiency margin. How are the Devils, 'Jays, and Orange ahead of the 'Hounds and 'Terps? Other top-ten teams in adjusted efficiency margin from 2012 that are returning big chunks of talent -- Lehigh, Denver, Colgate, and Notre Dame -- are also tossed aside for the simple assertion that Duke, Syracuse, and Johns Hopkins are the "highest 'rated'" for 2013. This conclusion -- unsupported by a methodology that, while on the right path, is flawed and would plainly spit out some teams you'd absolutely consider -- is almost drawn from the ether. That's not math; that's something built on trust, and without a stronger methodology -- both in function and completeness -- I can't make that leap.
So, that's a problem for me. The other problem -- one that consumes the next nine paragraphs, actually -- is that the writer totally abandons math to deliver his analysis. Don't bill something as Nate Silver-esque when nothing in the vast bulk of the piece has anything to do with statistics or a mathematical projection model. It'd be useless to pick that part apart -- and unfair to do so -- so I'll pass on addressing all that stuff.
I've been pretty transparent about this since I started working with advanced statistics six years ago: There's no silver bullet in these analyses. Some things are obviously more important than others (like overall adjusted margin), and I made that evident in a piece I wrote last May about trying to identify a national champion based on its tempo-free profile. I have no problem with people other than me working with statistics (far from it, actually); after all, that's why some folks and I developed Tempo Free Lax. It's good for the game that people are writing in this medium, but it needs to be done the right way. The premise of the piece is great -- and I'd love to develop a model that could make predictions relatively decently, although I haven't gotten to that point yet -- but how the work was done (and how the piece was written) was off-point and pretty misleading.
Math is math; don't wrap it in something that it isn't.