In a recent article on IL Indoor, Teddy Jenner examined the teams that have played back-to-back games this season. He discovered that more than two thirds of the teams that had two games in a weekend won the second of those games. The results were better when the second game was at home than when it was on the road.
Those are pretty interesting numbers, but they only cover three-quarters of one season. If only we had such statistics on previous seasons – but for that we’d need information on all previous NLL games. But wait! We have that! Let’s fire up Graeme’s Super-Amazing Magic NLL Statistical Database-inator™!
I did some queries looking for two games involving one team played within 3 days of each other. This will include not only games on consecutive days, but also games played on Friday and Sunday of the same weekend. I ended up doing four queries: the team in question plays at home both games; away both games; home first and then away; and away first and then home. I then combined these numbers for the aggregate record. We’ll deal with home-and-home series (i.e. both games involved the same two teams) below.
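The pairing logic described above can be sketched in a few lines of Python. To be clear, this is not the actual database query; the team names, dates, and data layout here are hypothetical, purely to illustrate the "two games within 3 days" grouping:

```python
from datetime import date

# Hypothetical schedule rows: (team, game date, playing at home?) -- illustrative only
games = [
    ("Bandits", date(2011, 1, 7), True),
    ("Bandits", date(2011, 1, 9), False),
    ("Rock",    date(2011, 1, 7), False),
    ("Rock",    date(2011, 1, 22), True),
]

def back_to_backs(games, max_gap_days=3):
    # Group each team's games in date order, then keep adjacent pairs
    # played within max_gap_days of each other (covers both Fri/Sat
    # back-to-backs and Fri/Sun pairs in the same weekend).
    by_team = {}
    for team, d, home in sorted(games, key=lambda g: (g[0], g[1])):
        by_team.setdefault(team, []).append((d, home))
    pairs = []
    for team, sched in by_team.items():
        for (d1, h1), (d2, h2) in zip(sched, sched[1:]):
            if (d2 - d1).days <= max_gap_days:
                pairs.append((team, d1, d2, h1, h2))
    return pairs

print(back_to_backs(games))  # only the Bandits' Friday/Sunday pair qualifies
```

The home/away flags on each pair are what split the results into the four home/away scenarios described above.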
If you’re not interested in the raw numbers or statistical analysis, skip ahead to the conclusions.
From 1987 to 2011, I found 394 instances where a team played more than one game in a weekend. Here are the numbers:
Strangely, teams have played two home games in the same weekend only 5 times, but teams have played two away games 75 times. The percentage totals show that when a team played two games in a weekend, the most common scenario is that they lost both games. Winning the first and losing the second is the least common.
Looking at the numbers a different way:
| Type | Wins first game | Wins second game |
|---|---|---|
| Totals | 191 (48.5%) | 195 (49.5%) |
| Home totals | 90 (52.6%) | 86 (56.2%) |
| Away totals | 101 (45.3%) | 109 (45.2%) |
So 48.5% of the time, the team won the first of back-to-back games, and 49.5% of the time, they won the second. Unsurprisingly (?), the numbers are better at home.
But the real question is not “what was the most common outcome of such series?” but “can we make inferences or predictions based on past behaviour?” To answer that question, we must do some statistical analysis.
Here’s where we get into the stats stuff a little more. Don’t worry if you’re not a stats geek; I’m not going to describe the tests I did in great detail or explain how or why they work, primarily because I have no idea. (Or more accurately, I don’t remember, since it’s been over twenty years since I studied this stuff.)
Our default assumption, or “null hypothesis”, is that each of the four outcomes of a two-game series (win-win, win-loss, loss-win, loss-loss) is equally likely. Basically, we make no assumptions that there are any patterns of any kind, and let the numbers tell us if we’re wrong. Our observed values are shown in the chart above, so we want to see whether the differences between those values and the values that our null hypothesis would predict are statistically significant.
As an analogy, say we flipped a coin 100 times. Our null hypothesis is that each outcome is equally likely (i.e. our coin is fair), so we’d expect 50 heads and 50 tails. If our actual results were 52 and 48, that’s likely to be close enough for us to conclude that our null hypothesis is probably correct. If our results were 70-30, we’d reject our null hypothesis and decide that one outcome is more likely than the other, and so we likely (the numbers can’t conclusively prove anything) have a rigged coin. But what if the numbers were 58-42? Is that close enough to 50-50 for the differences to be insignificant, or is it more likely that our coin is unfair?
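There’s a standard way to put a number on that question for coin flips: an exact binomial test (not mentioned above, but it formalizes exactly this example). A quick sketch using only the Python standard library:

```python
import math

def two_sided_binomial_p(heads, flips):
    # Probability, under a fair coin, of a result at least as far from
    # 50-50 as the one observed (two-sided exact binomial test).
    k = max(heads, flips - heads)
    tail = sum(math.comb(flips, i) for i in range(k, flips + 1)) / 2 ** flips
    return min(1.0, 2 * tail)

print(two_sided_binomial_p(52, 100))  # well above 0.05: no reason to doubt the coin
print(two_sided_binomial_p(70, 100))  # far below 0.05: reject fairness
print(two_sided_binomial_p(58, 100))  # the borderline case
```

For the 58-42 case the p-value comes out above the usual 5% cutoff, so even that result isn’t enough, on its own, to conclude the coin is rigged.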
We can use a test called the chi-squared test to calculate the probability that the differences between a group of observed results and the expected results are due solely to chance, or whether it’s more likely that something else is causing the differences. I calculated (well, Microsoft Excel calculated) this probability using the chi-squared test, though I omitted the Home-Home row because all of the expected values were too low. (Chi-squared doesn’t work very well for expected values below 5.) The p-value calculated was 0.059793, or about 5.98%. What this means is that the probability of observing the values we did, if our null hypothesis were true, is almost 6%. To be considered statistically significant enough to reject our null hypothesis, this value would have to be less than 5%.
The long and the short of it is that we cannot reject our null hypothesis. The numbers do not indicate that playing two games in a weekend has an effect on the likelihood of winning either one. Playing two games in a weekend is no different than playing two games a week apart.
Now let’s look at home-and-home series. Note that there has never been a weekend where two teams played each other twice in the same location, so we’re only dealing with each team playing one game at home and one away. There are four possibilities here: a sweep where the sweeping team plays at home first, a sweep where the sweeping team plays away first, a split where each home team wins its game, and a split where each away team wins its game. There have been 53 such series in NLL history (1987-2011):
| Outcome | Count |
|---|---|
| Sweep – Home, Away | 16 |
| Sweep – Away, Home | 16 |
| Split – Home wins | 12 |
| Split – Away wins | 9 |
The most common occurrences have been sweeps, but after applying the same chi-squared test as above, I got a p-value of 0.453534, or about 45.4%. This is far above the 5% threshold for statistical significance, so these numbers really tell us nothing, likely because of the small sample size. As before, the numbers do not indicate that any outcome of a home-and-home series is more likely than any other.
To summarize the conclusions I’ve drawn above:
- The evidence does not indicate that playing two games in the same weekend affects the likelihood of winning either game.
- The evidence does not indicate that any of the four possibilities of a home-and-home series is more likely than any other.
A team playing two games in a weekend is no different than that team playing two games a week apart. The numbers tell us that it simply doesn’t matter. There are always going to be outliers, but for every team that loses the second game because they’re tired, there’s another team that’s energized from playing the night before.
Just to be clear, these numbers don’t tell us that there is no pattern; they simply say that the data does not indicate a pattern. They also tell us that we can’t use these numbers to make predictions: in the past, when a team played two games in a weekend, the most common outcome was that they’d lose both games, but this does not mean that in the future, a team playing two games in a weekend is more likely to lose both than to see any other outcome.
Many thanks to Dan Shirley from In Lax We Trust (and a math undergrad at Washington State University – go Cougars!) for his help in interpreting the statistics.