
The Washington Capitals and the Maturation of Evaluation Metrics

The Washington Capitals are off to a relatively slow follow-up to their championship campaign, and it understandably has the faithful in the Nation's Capital beginning to arch an eyebrow here or there.

To wit, they’re 15th in the League in score-adjusted possession, 17th in share of scoring chances, and the goaltending hasn’t been anything to write home about either, as the team’s netminder tandem has put up only the 12th-best five-on-five save percentage in the League through 12 games. They’re 5th in the division standings and 10th in the Conference. Not exactly Stanley Cup-caliber play, on any of those fronts, and a slew of others that need no mentioning here.

Reasonable minds can agree that it’s still too early in the season to expect much meaningful inference from the sample of games already in the books. Most teams are still letting pieces slide into place. In Washington, the team is still missing a top-line winger, figuring out its new head coach, contending with several weeks of abnormal scheduling, and remembering how to get up for early-season games after a multi-month run through the playoffs, where every night was high energy and high stakes.

But at what point do those sorts of excuses no longer hold?

The end of December, which works out to 38 games played, on average. Let’s put some science behind that answer. Below is a correlation matrix visualizing, for all NHL teams from 2014 through the end of last season, how closely their CF% at the end of each month related to their CF% at the end of the season (“Full”). A tip on reading it: the bottom-left tile in the plot indicates that CF% at the end of October had a correlation coefficient of .68 with end-of-year CF%. A perfect correlation is 1; a value of .68 indicates that there is a relationship, but not a particularly strong one.
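For readers who want to tinker with the idea themselves, here is a minimal sketch of how such a matrix could be assembled. It assumes a hypothetical pandas DataFrame of cumulative CF% snapshots pulled from Natural Stat Trick exports, with illustrative column names ("season", "team", "month", "cf_pct") that may not match the exact layout used for the plot.

import pandas as pd

def monthly_cf_correlations(cf: pd.DataFrame) -> pd.DataFrame:
    """Correlate each month-end CF% snapshot with full-season CF%."""
    # One row per team-season, one column per month-end snapshot.
    wide = cf.pivot_table(index=["season", "team"],
                          columns="month",
                          values="cf_pct")
    # Order columns by the hockey calendar, ending with the full-season value.
    month_order = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Full"]
    wide = wide[[m for m in month_order if m in wide.columns]]
    # Pearson correlation between every pair of snapshots.
    return wide.corr()

# corr = monthly_cf_correlations(cf)
# corr.loc["Oct", "Full"]  # the bottom-left tile described above (~.68)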

If you follow that tile across the bottom row, you’ll notice the values gradually creeping closer to one, which makes sense. The more a season matures, the better idea we have of the performance characteristics of each team. Now that you understand the principles here, let’s strip out the salient information and visualize it more conventionally.

It’s easy to see that the rate of increase between the metric’s current state and its end-of-year state begins to slow down at the end of December. Think of this as the point of diminishing returns on the predictive growth of that metric. (It’s possible this observation could be cut at a finer gradient, such as games played.)
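One hedged way to make “point of diminishing returns” concrete, continuing with the hypothetical correlation matrix from the sketch above: look at how much each additional month improves the correlation with the full-season number, and flag the first month where that improvement gets small. The 0.02 threshold is an arbitrary illustration, not a cutoff taken from the post.

import pandas as pd

def diminishing_returns(corr: pd.DataFrame, threshold: float = 0.02) -> str:
    """First month-end where the added correlation with full-season CF%
    drops below `threshold`."""
    # Bottom row of the matrix, minus the 1.0 self-correlation.
    with_full = corr["Full"].drop("Full")
    # Month-over-month improvement in correlation.
    gains = with_full.diff().dropna()
    small = gains[gains < threshold]
    return small.index[0] if not small.empty else with_full.index[-1]

# diminishing_returns(corr)  # per the chart described above, lands around "Dec" (~38 GP)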

Astute readers will point out that this is only one metric, which is true, but this behavior is present in other metrics as well, to varying degrees.

Hockey is a sport with a high degree of randomness, and the small sample sizes that plague early-season analysis do spectators no favors. According to the last five years of data, that randomness tends to distribute normally a little before the season is halfway through. So fans of the defending Cup champs should take solace in the notion that there’s a whole lot of precedent for variance between early-season quality of play and the ultimate reality, and just kick back and enjoy the squad’s victory tour.
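For the curious, here is a rough sketch of one way such a normality claim could be checked. It is not necessarily how the underlying analysis was done; it reuses the hypothetical wide table of CF% snapshots from the first sketch and tests whether the team-level gaps between each month-end CF% and the full-season figure look normally distributed.

import pandas as pd
from scipy import stats

def normality_by_month(wide: pd.DataFrame) -> pd.Series:
    """Shapiro-Wilk p-value of the (month-end CF% minus full-season CF%)
    gaps, computed across teams for each snapshot."""
    pvalues = {}
    for month in wide.columns.drop("Full"):
        gaps = (wide[month] - wide["Full"]).dropna()
        pvalues[month] = stats.shapiro(gaps).pvalue
    return pd.Series(pvalues, name="shapiro_p")

# p-values above a conventional 0.05 are consistent with normally distributed gaps.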

data sourced from Natural Stat Trick

