While Toronto and Seattle are preparing to face off in MLS Cup, the rest of the teams in Major League Soccer are turning their attention towards next season. Most immediately, teams are making decisions about which players should return and which should be let go.
Last summer, I published an article that examined the repeatability of success in Major League Soccer’s regular season. Using data from recent seasons, I concluded that “[a]n MLS team’s performance one year has very little relationship to its performance the following year.”
With the 2016 season starting tomorrow, it seems appropriate to build upon that analysis. This piece expands on last summer’s work in two ways: first, by adding all seasons of MLS history to the data; and second, by considering a new measure of success, advancement in the playoffs.
The data used in this article, and the R code to generate the illustrations, have been posted to GitHub. I encourage anyone interested to download the data and build upon it. The data is in CSV format for maximum portability.
Turning first to the league analysis, the plots below now include the final standings from all 20 seasons of Major League Soccer. There are 251 instances in league history of teams playing consecutive seasons*. Plotting all of these in a scatterplot, with first-year performance along the X axis and second-year performance along the Y axis, produces the following figure.
> model <- lm(data$PPG2 ~ data$PPG1)
> summary(model)

Call:
lm(formula = data$PPG2 ~ data$PPG1)

Residuals:
     Min       1Q   Median       3Q      Max
 -0.9891  -0.1992   0.0165   0.1830   0.7460

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.04390    0.08498  12.283  < 2e-16 ***
data$PPG1    0.24374    0.06118   3.984 8.91e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2972 on 249 degrees of freedom
Multiple R-squared:  0.05992,	Adjusted R-squared:  0.05615
F-statistic: 15.87 on 1 and 249 DF,  p-value: 8.907e-05
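For reference, a session like the one above can be reproduced along the following lines. This is only a minimal sketch: synthetic data with a similar shape stands in for the real dataset so the snippet runs on its own, and the commented file name is a placeholder, not the actual name in the GitHub repository.

```r
# Minimal, self-contained sketch of the consecutive-season model.
# The real analysis would load the CSV from the GitHub repository, e.g.:
#   data <- read.csv("consecutive_seasons.csv")  # placeholder file name
# Synthetic data with a similar shape stands in here so the code runs as-is.
set.seed(42)
n <- 251
data <- data.frame(PPG1 = runif(n, 0.5, 2.2))
data$PPG2 <- 1.04 + 0.24 * data$PPG1 + rnorm(n, sd = 0.3)

# Fit and inspect the linear model, as in the session above
model <- lm(PPG2 ~ PPG1, data = data)
r_squared <- summary(model)$r.squared

# Scatterplot of first-year vs. second-year performance, with the fitted line
plot(data$PPG1, data$PPG2,
     xlab = "Points per game, year one",
     ylab = "Points per game, year two")
abline(model)
```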
The addition of all historical data does not appreciably change the result. The R-squared coefficient for this model is a very small 0.0599, which is similar to the 0.0681 that was found last summer with a smaller dataset.
Generally speaking, the conclusion from recent seasons holds up well when all seasons of MLS are examined. Team success in one year is a fairly poor predictor of success the next year. The relationship, while small, is unlikely to be a statistical fluke, but it leaves a large amount of variation unexplained.
Turning our attention to the playoffs, the dataset now includes a simple measure of how far each team advanced in the postseason. Each team is recorded with one of six values, ranging from “Did Not Qualify” to “Champion”. For some parts of this analysis, numeric equivalents of these six levels are used, ranging from 0 (did not qualify) to 5 (champion). The distribution of these values is depicted below.
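As a concrete illustration, the six levels can be encoded in R as an ordered factor, with the numeric equivalents recovered from the level order. The label strings below follow the text; the published CSV may spell them differently.

```r
# Six levels of playoff advancement, ordered from worst to best outcome.
# Label strings follow the article; the published CSV may differ.
playoff_levels <- c("Did Not Qualify", "Octofinal", "Quarterfinal",
                    "Semifinal", "Finalist", "Champion")

# A few example results encoded as an ordered factor
advancement <- factor(c("Champion", "Did Not Qualify", "Semifinal"),
                      levels = playoff_levels, ordered = TRUE)

# Numeric equivalents: 0 (did not qualify) through 5 (champion)
advancement_num <- as.integer(advancement) - 1L
```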
It should be noted, here, that the structure of the playoffs has changed at two different points. The playoffs were an eight-team competition for their first 15 years; in 2011 an octofinal round expanded the field to ten teams, and in 2015 the competition was expanded again, to 12 teams. Because of these expansions, the count for teams advancing to the octofinal stage is relatively small.
When we examine how team success in the playoffs changes from year to year, some interesting details emerge.
First, teams have demonstrated radical changes of fortune in the playoffs, but not so much that past performance is completely meaningless. Teams that missed the playoffs in one year are – slightly more likely than not – going to miss the playoffs the next year. Teams that advanced to the quarterfinals or semifinals will probably reach the same general stage the next year (not missing the playoffs, but also not advancing to MLS Cup).
Beyond these general statements, some quirks stand out. Teams that reach MLS Cup repeat that feat only 20% of the time – and are eliminated before the quarterfinals at roughly the same rate. Curiously, no MLS Cup-winning team has ever been eliminated in the semifinals the next year – they all either lose before then, or make a second MLS Cup.
Another interesting phenomenon is how frequently teams have surged to MLS Cup from previous disappointment. The percentage of teams to win the championship is roughly the same – approximately 5% – whether they missed the playoffs or fell in the quarterfinals the year before. This is in marked contrast to the difference in these teams’ likelihood of missing the playoffs altogether.
One final quirk relates to a team’s chances of appearing in MLS Cup. Based on these data points, the group most likely to play in the championship game is not the reigning champion but the defeated finalist. However, while approximately 25% of losing finalists repeat their appearance the next year, those teams are still more likely to lose a second time than to finally claim the title. Blame the New England Revolution and Houston Dynamo for these data points.
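The percentages quoted in the last few paragraphs all come from the same kind of calculation: a cross-tabulation of each team’s playoff finish in consecutive years, converted to row-wise proportions. A sketch of that calculation follows, with toy data standing in for the real season pairs and with invented column names (`Finish1`, `Finish2`) that may not match the published CSV.

```r
# Toy example of the year-over-year cross-tabulation behind the
# percentages above. Column names Finish1/Finish2 are invented here;
# in practice they would come from the dataset on GitHub.
pairs <- data.frame(
  Finish1 = c("Champion", "Champion", "Finalist",
              "Did Not Qualify", "Quarterfinal"),
  Finish2 = c("Champion", "Did Not Qualify", "Champion",
              "Did Not Qualify", "Semifinal")
)

# Counts of each (year one, year two) combination
transitions <- table(pairs$Finish1, pairs$Finish2)

# Row-wise proportions: given a finish in year one,
# how do outcomes distribute in year two?
transition_rates <- prop.table(transitions, margin = 1)
```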
The varying fates of teams that reached different stages of the playoffs are separated in the following gallery of histograms. Each plot focuses on one level of playoff advancement.
A note on significance
Even with 20 years of history in this data, there are still relatively few data points behind some of these individual playoff categories – so it is quite possible that the quirks above are aberrations rather than emergent trends. This stands in contrast to the linear model for league success, where we can be relatively certain that a team’s performance in one year explains only about 6% of the variation in its performance the next.
With this analysis now expanded to include all of MLS history, we can say with relative certainty that team fortunes can change drastically from year to year. Yet this raises the question – what does determine team success? Are there factors that predict repeated success? How would we go about identifying them?
Thanks and footnotes
I owe a sincere thanks to Katrin Anacker for helping me work through the analysis of the linear model in this piece, and also to Jason Little and William Rand for their helpful feedback.
* The final seasons for Tampa Bay, Miami, and Chivas USA are obviously excluded. Additionally, I have chosen to exclude the final season of San Jose before their relocation in 2005, classifying Houston as an expansion team.
When Columbus and Portland face off this evening in MLS Cup, it will be a clash between two of the better passing teams in the league. Both feature midfields heavy on ball control, with international-caliber players pulling the strings supported by a back line that likes to get forward.
In preparation for this game, I collected player-by-player passing summaries for each game the two teams played, starting from the last time they faced each other in late September. What I found indicates that fans of all stripes could be in for quite a treat.
There is a narrative about Major League Soccer that describes an emerging financial arms race between its teams. The Designated Player Rule, instituted in 2007 with the arrival of David Beckham, has allowed teams additional flexibility to spend larger sums of money on key players. Every team in the league has taken advantage of this opportunity, and the rule itself has been expanded several times in recent years. Teams can currently have up to three such players on their roster, and a new category of expenditure – “Targeted Allocation Money” – was announced earlier this season. This tactic was used almost immediately by the Los Angeles Galaxy, with the end result being the acquisition of Giovani dos Santos.
Surveying this shifting landscape, columnist Steve Davis recently argued at World Soccer Talk that the teams in MLS will effectively split into two groups:
Now [MLS is] like all the other leagues of haves and have nots. We will now march predictably into every season essentially choosing among a handful of big brand clubs as the real title contenders. Everyone else will fight for the scraps.
Is this narrative of financial inequality accurate? I set out to investigate.
I was inspired the other day by something Steve Fenn (@StatHunting on Twitter) wrote about analytics in soccer:
IMO it’d be better if a club’s analytics 1st made sure metric was predictive, or at least repeatable.
— Steve Fenn (@StatHunting) July 28, 2015
This question of repeatability is something that resonated with me, so I started digging around a bit. While I can’t claim great familiarity with some of the advanced modeling that goes on around the soccer world, my starting point was a fairly simple question:
How repeatable is team success itself?
The week of the All-Star Game is upon us. Most teams in Major League Soccer hit the midway point of their season a few weeks back, the Gold Cup just finished, and the CONCACAF Champions League is starting soon. This seems a decent time to step back from the season, take stock of the trends so far, and begin to anticipate the push to the playoffs.
How do you evaluate soccer players? Is there a way to examine a given player, in the context of his or her team, regardless of their position on the field?
This is an issue that I’ve been somewhat preoccupied with this season, and the question has led me to put together a plot style that attempts to answer those sorts of questions.
Several years ago, I wrote about the importance of continuity in a team’s lineup over the course of the season. The piece has since been taken down (it will soon be republished on this site), but the thrust of the argument was that the most successful teams in Major League Soccer were able to identify a core group of players who played a significant amount of a given season together. Teams that couldn’t, or didn’t, coalesce around such a core were less likely to be successful.
Over the past several weeks, I’ve been re-visiting that thesis using some alternate strategies to see if it continues to hold true.
I started working with a new type of impact plot tonight, looking specifically at playing time compared against team goal difference. Dots representing each player are plotted along two axes: the horizontal axis records how much of the season the player has seen, while the vertical axis indicates the team’s goal difference during the player’s time on the field.
This chart presents the attendance figures for Columbus Crew SC home openers since 1999, when MAPFRE Stadium opened. In the stadium’s 16-year history, crowd sizes have ranged between 10,000 and 25,000. The low point came in 2011, while the high was unsurprisingly set in the stadium’s first year.
Attendance shows some correlation with game-day temperature, as depicted above. This is not surprising, although the math – an R-squared value of 0.3264 – indicates that temperature alone does not explain the variation. It appears that attendance varies by about 163 people for every degree of temperature gained or lost.
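Those two figures – the slope and the R-squared – come from a simple one-variable linear regression of attendance on temperature. A sketch of that fit, using synthetic data in place of the real openers so it runs stand-alone:

```r
# Simple attendance-on-temperature regression, of the kind described above.
# Synthetic data stands in for the 16 real home openers.
set.seed(1)
temperature <- runif(16, min = 30, max = 75)   # game-day temperature (F)
attendance  <- 12000 + 163 * temperature + rnorm(16, sd = 2500)

fit <- lm(attendance ~ temperature)
slope <- unname(coef(fit)["temperature"])  # people per degree of temperature
summary(fit)$r.squared                     # share of variation explained
```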
Other possible factors that could influence attendance include precipitation, the opponent, the team’s competitive prospects, and sales efforts by the front office. This last factor is particularly important, as suggested by the attendance growth since 2011, after the team restructured its sales operation.
To explore additional information about Crew SC attendance, check out the attendance visualization on this site.