For a number of years now, PDO has been among the most important arrows in the hockey statistician's quiver, referred to as not only "the most important advanced statistic, or the most critical to understand," but also "one statistic I absolutely wouldn't want to bet against" by folks who know what they're talking about. (Take a minute to click through and read those two links if you're not familiar with the metric.)
Put simply, PDO is the sum of shooting and save percentage (either at the team level or when an individual player is on the ice), and that sum, over time, tends to end up right around 1.000 (or 1000 or wherever the chosen convention places the decimal), a bit higher for skilled forwards, and a bit lower for grinders.
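The arithmetic is simple enough to capture in a couple of lines. Here's a minimal sketch (the function name and the x1000 convention are just illustrative, not from any particular stats library):

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    """PDO (x1000 convention): on-ice shooting percentage plus
    on-ice save percentage, from raw shot and goal counts."""
    shooting_pct = goals_for / shots_for
    save_pct = 1 - goals_against / shots_against
    return round(1000 * (shooting_pct + save_pct), 1)

# A team converting 8% of its shots with a .920 save percentage
# sits at exactly 1000 - the league-wide baseline.
print(pdo(8, 100, 8, 100))
```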
Conceptually, PDO can be a little hard to grasp - it doesn't make sense that we should expect Bruins shooters to convert at a lower rate than most teams simply because they have outstanding goaltending, or that Devils goalies should make more stops because their forwards have struggled to finish chances. But PDO doesn't describe a causal relationship so much as a correlation - and, indeed, sometimes that expected regression to 1000 or so doesn't materialize (certainly in the short term, but sometimes in the long term as well, both at the team and individual level).
Practically, it's hard not to wonder why, when there are two perfectly good and informative pieces of data (namely shooting and save percentages), we'd combine the two into a single, often less-informative number. For example, PDO on its own would tell us that a player with a 1002 PDO is just about where we'd expect him to be, and that's certainly true of a player with an on-ice save-percentage of .922 and a shooting percentage of 8.0, both of which are right around League-average. But if those numbers were .952 and 5.0 or .902 and 10.0, we'd probably expect a good deal of movement as all of the numbers come back towards League averages.
To that end, it's probably worth knowing what's driving a high or low PDO, or whether the component parts tell us anything more than the combined metric. And that's what the visualization below aims to do. Let's take a look:
There's a lot there, so let's walk through it. Each individual's PDO is represented by the point on the red-filled shape corresponding to his name, with the values labeled on the concentric circles at the "12 o'clock" axis. So, for example, John Erskine's PDO was roughly 1050 (1050.5, to be precise), Wojtek Wolski's around 975 (976.8). That outer red shape also represents each player's on-ice shooting percentage, since the inner blue/purple mass is on-ice save percentage (which, of course, is added to on-ice shooting percentage to arrive at PDO). So Joey Crabb's on-ice save percentage was around .950 (.952), and Jeff Schultz's approximately .900 (.901). Add in the dotted green line at a PDO of 1000, the solid red line at the team's overall PDO (1014), the solid green line at the League-wide average save percentage of .921, and the Caps' save percentage at .928, and that's our chart.
So what we see here is whose PDOs were particularly high or low and what was driving them. Crabb, for example, had a very high on-ice save percentage and very low on-ice shooting percentage, but came in right around 1000. Alex Ovechkin had a low save percentage (.916) and a very high shooting percentage (11.2) for a high total of 1028. And Erskine, Eric Fehr and Brooks Laich coupled high save percentages with high shooting percentages for monster PDOs that will almost certainly come crashing back to earth next season (don't say we didn't warn you).
Anyway, this is more about the visualization than the substance of it - there's plenty of time to get into that.
At a team level, here's a similar League-wide view of what went on in 2013:
Columbus is the big loser here, failing to make the playoffs despite a 1017 PDO, while the Isles were able to overcome a 990 PDO and get in. Interestingly, the top-five teams in even-strength save percentage (the inner blue/purple shape) not only made the playoffs, but won at least a round once there, while regular-season shooting percentage didn't predict much success at all (in either making the playoffs or doing well once there). Of course, it should be noted that the shortened season produced some wonky results (both at the individual and team levels), so things would even out quite a bit given more time (see, for example, last year).
Back to individual PDO for a moment, if you're wondering how it corresponded to production, here's a look at Caps forwards this year (we'll get into the defensemen at some point in the future):
Distance from the black dotted line indicates how far a player's PDO was from 1000 (high or low), and you can see from the vertical and horizontal axes just what was driving those PDOs. The bubble size indicates points-per-sixty minutes at five-on-five. It should come as no surprise that the guys with higher on-ice shooting percentages generally produced points at a higher rate. And while it's tempting to look at the names and conclude that higher-skill players drive shooting percentage up and lower-skill players suppress opponent shooting percentage (or face similarly low-skilled talent that brings about the result), this is far too small a sample to support that... but your hunch is spot-on (as Tyler Dellow showed earlier in the year). Would anyone be truly surprised to see shots going in at a high rate at both ends of the ice when Marcus Johansson is out there, or at a low rate when it's Jay Beagle?
Finally, PDO and possession:
Really small sample here, of course, but an interesting visualization that once again confirms what you already knew - the Caps' second line was a black hole of possession and all the Corsis in the world weren't going to get Wolski back on the ice. Also, we keep coming back to the prospects of a Martin Erat-Laich-Troy Brouwer second line, but their respective possession numbers and PDOs would lead you to believe that any regression they're due for as a group will be a net negative (though obviously Erat and Laich had tiny 2013 samples with the Caps).
Anyway, the point of these charts is to help more "visual" types see what everyone's talking about when they're talking PDO, but also to more clearly show what the drivers behind those numbers are. The charts (and, more importantly, an understanding of the numbers that populate them) help to lay out who's been "lucky" and who hasn't been, and, in some cases, provide a strong indicator of what might lie ahead. As we look ahead to the 2013-14 Metropolitan Division (ugh) race, for example, plenty of questions arise from just looking at the respective teams' PDOs. Can the Caps stay comfortably above 1000, or will they come back to the pack? If they do regress (which is certainly the safer bet), can they improve their possession game enough to cushion the landing? What will happen if the Isles, Devils, 'Canes and Flyers get League-average goaltending? Are the Pens' dominant five-on-five numbers sustainable? Can Carolina and New Jersey get some more shooting "luck"? Was 2013 as good as it gets for Columbus? Will a new coach be able to help the Rangers' offense enough to match their terrific goaltending and make them an elite team? And so on (and don't forget, we're only talking about five-on-five play here... special teams are still a critical component of the game).
Ultimately, PDO is more than just a clever math trick, and the combined metric may be a good indication of where (or when) to dig for more information, but a closer look at the component parts would seem to be important in understanding what's really going on.