What matters more? List building, or good tactical play? In the lead-up to the Australian CoK 2024, I published a blog post detailing the statistical breakdown of the army lists submitted by the participants, aiming to shed light on the composition strategies players might employ during the tournament. With around 60 Kings of War players converging on the event, the stage was set not just to test these lists in battle, but to see whether tactical skill on the table would outweigh superiority on paper.
The event was structured as a Swiss-style tournament, often referred to as Swiss pairing, where in each round players are matched against opponents with similar win-loss records. This system ensures that, as the event progresses, players face opponents performing at a comparable level, making each match a crucial step toward the overall championship.
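For readers unfamiliar with the mechanics, here's a minimal sketch of one Swiss round in Python. It is deliberately simplified (a hypothetical `Player` type, no rematch avoidance, odd player counts simply produce a bye) and is not the pairing software the organizers actually used.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    wins: int = 0

def swiss_round(players: list[Player]) -> list[tuple[Player, Player]]:
    """Sort by record and pair adjacent players. Real pairing
    software also avoids rematches and applies tiebreakers;
    with an odd count, the last-ranked player here gets a bye."""
    ranked = sorted(players, key=lambda p: p.wins, reverse=True)
    return [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked) - 1, 2)]
```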
As the tournament unfolded, it became increasingly clear that the key to victory lay beyond the army lists themselves. The strategic deployment and maneuvering of forces, the timely reactions to opponents’ moves, and the tactical decisions made in the heat of battle often had a more significant impact on the outcome than the units listed on paper.
Now, with the tournament concluded, it’s time to circle back and analyze the data. Did the pre-tournament list stats I shared predict the winners? Or did the dynamic nature of in-game decision-making prove to be the deciding factor?
In the coming paragraphs, I’ll dive into the results and explore just how much the stats mattered in the grand scheme of the Australian CoK 2024.
Event Overview and Trends
The tournament saw a surge in Abyssal Dwarfs selections, while an inventive Elf list with chariots caught the attention of many. Many people are exploring new avenues of list building with the new CoK 2024, so we saw more variety than is typical at a major tournament. Despite these trends, it was the expertly piloted slow-moving Dwarfs that took the championship (though, in fairness, Jeffrey Traish could probably pilot a Ratkin Slaves list to a 6-0 record).
Analytical Approach
I decided to delve deeper into the statistical side of the tournament, fulfilling the promise made in my initial wrap-up post. My goal was to uncover any significant correlations between the various factors of list building and overall success in the tournament. To do this, I first gathered detailed pre-tournament data, including metrics such as unit strength, movement capabilities, and other list-building characteristics, chosen to represent the strategic and tactical variety offered by different armies.
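To make the shape of that dataset concrete, here is a small synthetic stand-in built in Python. Every column name and value below is illustrative, not the real data; the actual metrics were derived from the submitted lists.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 60  # roughly the size of the field

# Synthetic stand-in for the real dataset; column names are assumed.
lists_df = pd.DataFrame({
    "objective_score":   rng.normal(45, 8, n),    # unit strength + scoring units
    "avg_defense_score": rng.normal(6, 0.5, n),   # nerve/defense/healing composite
    "avg_speed":         rng.normal(5, 0.7, n),
    "unit_count":        rng.integers(10, 18, n),
    "grounder_count":    rng.integers(0, 5, n),
    "wins":              rng.integers(0, 7, n),   # joined in after the event
})
```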
After the tournament, the next step was to analyze how these pre-defined list characteristics played out in actual matches. I focused on identifying any discernible patterns in the data that could link list attributes to game outcomes. The approach centered on regression analysis, a statistical method that let me measure the impact of various list attributes on match results. By comparing the attributes of winning and losing armies, I sought to quantify the influence each factor had on a player's likelihood of success.
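Under those assumptions, the regression step looks something like the sketch below, reusing the synthetic frame from above. On the real dataset, the coefficient signs and the R-squared value are the quantities discussed in the findings.

```python
import statsmodels.api as sm

# Regress wins on the list attributes; purely illustrative numbers.
X = sm.add_constant(lists_df.drop(columns="wins"))
ols = sm.OLS(lists_df["wins"], X).fit()
print(ols.summary())  # per-attribute coefficients, p-values, R-squared
```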
Model Findings Elaborated
In analyzing the data, I employed a multi-model approach: creating and testing several different models to gain a more comprehensive understanding of the dataset. This is particularly useful with smaller datasets, as it limits the risk of overfitting to any single model and yields a more robust analysis.
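As a sketch of what a multi-model approach can look like in practice, again on the synthetic frame: fit a few model families and score each with cross-validation, so no single fit gets overinterpreted. The model choices and hyperparameters here are assumptions, not a record of exactly what I ran.

```python
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X = lists_df.drop(columns="wins")
y = lists_df["wins"]

# Cross-validated R^2 for several model families; on a small
# dataset this guards against overfitting to any single model.
for name, est in [("ols", LinearRegression()),
                  ("ridge", Ridge(alpha=1.0)),
                  ("lasso", Lasso(alpha=0.1))]:
    scores = cross_val_score(est, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.2f}")
```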
The regression models revealed some intriguing insights, albeit with low R-squared values, indicating a generally weak relationship between list attributes and victory. The findings suggest that a strategic edge in the “Objective” score (a combination of unit strength and number of scoring units) and a higher average defense score (a combination of nerve, defense, and healing) were more closely aligned with winning outcomes. On the other hand, attributes like average speed and total unit count showed a negative correlation with success. However, I caution against overinterpreting these results, especially regarding speed: the success of slower Dwarf armies may have skewed this outcome, suggesting that factors like gameplay style and scenario objectives play a more significant role.
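To make the composite scores concrete, the snippet below shows one hypothetical way such composites could be built from raw per-list stats. The exact construction and weighting here are assumptions for illustration, not necessarily the ones used in the analysis.

```python
import pandas as pd

# Hypothetical raw per-list stats (three example lists).
raw = pd.DataFrame({
    "total_unit_strength": [38, 31, 27],
    "scoring_units":       [11, 9, 8],
    "avg_nerve":           [15.2, 14.1, 13.6],
    "avg_defense":         [5.1, 4.8, 4.4],
    "healing_sources":     [2, 1, 0],
})

# Objective score: unit strength plus number of scoring units.
raw["objective_score"] = raw["total_unit_strength"] + raw["scoring_units"]

# Defense composite: standardize nerve, defense, and healing so they
# contribute on a comparable scale, then average (weights assumed).
parts = raw[["avg_nerve", "avg_defense", "healing_sources"]]
raw["avg_defense_score"] = ((parts - parts.mean()) / parts.std()).mean(axis=1)
```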
One clear trend emerged regarding “grounder” units: combat individuals, often taken to ground enemy flyers. The data showed a negative correlation between the number of these units and winning. This could imply that investing in units with higher unit strength is more advantageous than loading up on individual combat heroes. In other words, as much as you love having an individual with a 360-degree charge arc and the ability to swing a combat, the trend suggests that the unit strength you give up hurts more than the hero helps.
Conclusion
The findings affirm my belief that while list building lays the groundwork, it’s adeptness in playing the game that ultimately carries the day. Good tactical play can make a substantial difference, evident in the performance of the slower Dwarf armies, which excelled in scenario-focused play thanks (maybe?) to the recent Ordered March rule update. Frankly, though, Traish could play anything he wanted and make it look good, so take this with a grain of salt.
I’m interested in hearing from others on this topic. How do these insights resonate with your experiences? If you’d like to dig into this further, here’s a link to my dataset:
If you have questions or would like to share your thoughts on these findings, please comment below. Also, if you’re a tournament organizer and would like similar analysis for your tournament, please reach out!