Stats Central: Elite scorer vs. committee

From 2018–2020, I wrote a weekly article for IL Indoor called Stats Central. Every week I’d look at various statistics and see what kind of insight we could get from them. All of those articles are listed here.

In 2019, I wrote the article below, where I looked into whether it’s generally better to build your team’s offense around a single superstar player, or to have a more “scoring-by-committee” approach. The conclusion was less than compelling, and so I wrote a different article for that week and shelved this one. I’ve blown the dust off of it, rewritten a bit, and updated the numbers to include the seasons since then.

Here, for your viewing pleasure, is the Lost Stats Central Article.


When it comes to building your team’s offense, there seem to be two schools of thought. You can have an elite scorer and build your offense around getting him the ball as often as possible, or you can build your team with a bunch of players who may not be elite but are all capable of scoring at any time.

There are clearly pros and cons to each method. Having an elite scorer, like Jeff Teat or Dhane Smith, is obviously a pro on its own. Opposing defenses often double-team him and leave someone else open, which can also help your offense. On the other hand, if your superstar is having an off night or gets injured, you may be in trouble. If you go the scoring-by-committee route, you avoid the single point of failure but you rely more on team chemistry. A problem in that department may result in everyone having an off night at the same time.

Dhane Smith is the quintessential “elite scorer” (photo credit: unknown)

Is either of these strategies better than the other? I think that depends on too many factors to be able to say definitively. But let’s see if we can determine which has been more successful in the past.

To do that, I’m afraid we’re going to have to do a little bit of math. The way I did this was to calculate the average and standard deviation of the point totals of the top six scorers on each team. Standard deviation (I’ll call it SD for brevity) is just a way of measuring how spread out the values are. As a simple example, say we have two groups of three players. The first group has 45, 50, and 55 points, while the second has 30, 50, and 70 points. Both groups average 50 points, but the second group has a higher SD because its values sit further from that average.

OK, that’s really all the math you need to know. The lowest SD in history belonged to the 1994 Boston Blazers, whose top six scorers scored 20, 19, 16, 15, 15, and 13 points. The 1992 Saints and 1994 Turbos were right behind. Clearly, no one player was dominant over any other.

At the other end of the spectrum, the 2024, 2016, 2023, and 2015 Buffalo Bandits are the top four teams. However, the 2024 and 2023 versions had two elite scorers, Dhane Smith and Josh Byrne. They kind of break the model I’m going for here, so I’m going to ignore those. They won the Championship both years, so (and here’s a pro tip for NLL GMs) clearly the approach of having two elite scorers works.

In 2016, the Bandits’ top six point totals were 137 (the league record), then 92, 72, 39, 38, and 38. Dhane Smith’s record-setting MVP season was about as dominant as you can get, beating his next closest teammate by 45 points. Right behind the 2016 Bandits were the 2015 Bandits, also led by Dhane Smith. Next were the 2019 Black Wolves, led by Callum Crawford’s 109 points, followed by one player in the 90s, two in the 70s, and two in the 20s.

But the original question was which of these strategies has been more successful in the past. To answer that, we’re going to compare the SD list with the list of teams that went to the Championship final. We’re also going to add one more restriction: we’ll ignore teams that finished under .500. There have been some pretty bad teams in NLL history, and very bad teams are unlikely to have an elite scorer. That means the full list contains more “committee-based” teams than elite-scorer teams, which skews the comparison, so we’ll attempt to even things out by getting rid of the bad teams. Yes, that means we’re ignoring the six sub-.500 teams that went to the Championship.

Unfortunately, the SD numbers contain even more bias. From 1988–2000, we had thirteen seasons of 8, 10, or 12 games. From 2001–2024 (ignoring the truncated 2020 season, since there was no champion), we had 22 seasons of 14, 16, or 18 games. With the longer seasons, there is more time for the leading scorers to spread out, so the SD values have tended to rise as the season lengthened: 37 of the 50 lowest SDs came from before 2001, while none of the 50 highest did.

After all that, here’s what we find. (Reminder: Higher SD = scorers spread out. Lower SD = scorers bunched together.) Of the 50 teams with the lowest SD, sixteen made it to the Championship game. Of the 50 with the highest SD, fifteen got to the finals. If we cut the number of teams down to 25, the numbers are ten and nine respectively. This indicates that the committee approach might give you a small advantage, but if there really is a difference, it’s very small.
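The comparison above can be sketched in a few lines. The data here is just a two-team sample built from numbers quoted earlier in the article; the real analysis would include every .500-or-better team in league history, with group slices of 50 (or 25) teams rather than one:

```python
from statistics import pstdev

# Sample data: top-six point totals plus whether that team reached the
# Championship final. (Hypothetical stand-in for the full team list.)
teams = {
    "1994 Boston Blazers": ([20, 19, 16, 15, 15, 13], False),
    "2016 Buffalo Bandits": ([137, 92, 72, 39, 38, 38], True),
}

# Rank teams from most "committee-based" (low SD) to most "elite-scorer" (high SD).
ranked = sorted(teams, key=lambda t: pstdev(teams[t][0]))

# Count finals appearances in the lowest-SD and highest-SD groups.
low_group, high_group = ranked[:1], ranked[-1:]
low_finals = sum(teams[t][1] for t in low_group)
high_finals = sum(teams[t][1] for t in high_group)
print(low_finals, high_finals)
```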

The overall result is that it really doesn’t matter. If you can get yourself a Joe Superstar who dominates your scoring, great. If you can’t, it’s just as possible to build yourself into a contender. And of course, all of this ignores defense, goaltending, transition, special teams, and a ton of other elements that are critical to building a successful team.

This may seem like an anti-climactic result, but hey, that’s how the numbers work sometimes.
