Mathematical analysis of the metagame (you be the DCI)

I wrote a quick program today that takes in a matrix of deck-vs-deck win percentages (along the lines of what e.g. Ryan Eberhart provides in his MTGO Power 9 analyses, middle-left chart) and computes the metagame breakdown that you would expect to see if the metagame is in equilibrium.
Overview
Suppose we have a metagame with two decks: for the sake of exposition, let's call them Mardu Vehicles and Janky Rogue Brew. (Replace Mardu Vehicles with CoCo, or Caw-Blade, or Bloodbraid Elf aggro, or Mono-Black Devotion, etc., to discover why I never play Standard.)
Let’s say you are planning on attending a tournament, and are trying to decide which deck to bring. For each deck, let’s assume that you can calculate (based on historical data, playtesting, etc) the match win probability of that deck against the field.
Obviously, if you play a mirror match, your match win probability is 50%. The only additional data you need is how likely Mardu Vehicles is to beat the Janky Rogue Brew: if Vehicles is an 80% favorite, then you also know that the rogue deck has only a 20% chance of beating Vehicles.
We can summarize the probabilities in a matrix:
              MarduVehicles  RogueBrew
MarduVehicles      0.5          0.8
RogueBrew          0.2          0.5

The way to read this table is to find the deck you are considering playing on the left, then look up the column of the deck that you will be playing against. We can call this 2x2 matrix of win probabilities P.
The other piece of metagame information we need is how many people will be playing each deck. We can encode this information in a vector, say (0.9,0.1), representing a hypothetical metagame where 90% of people are currently playing Vehicles, and 10% are playing the Jank Brew. Let’s call this metagame breakdown vector m.
Finally, we can calculate our expected probability of winning a match with each deck: w = Pm. In this case we get w = (0.53, 0.23).
This makes sense: if we play Vehicles, we have a very high (90%) chance of playing a mirror match, and in that case, whether or not we win is a tossup. The other 10% of the time, we will be playing the Jank deck, and win almost every time. This bumps our overall chance to win a random game up slightly above 50%.
Now if we want to pick a deck for the tournament, our choice is clear (assuming we are using no other considerations beyond maximizing our match win probability): play Vehicles.
Equilibrium Metagames
Of course, we are not the only players going through this exact exercise: others will observe the same facts we did (that it is far better, for maximizing win probability, to play Vehicles than the Jank deck). Thus two things will happen over time: first, players will switch decks, and m will change. Second, players will innovate, and modify their decks, or perhaps brew an entirely new deck to prey on both existing decks. In this section, we will analyze only the first effect: the change in the metagame composition, assuming that the deck contents stay fixed.
We will need a slew of assumptions about player behavior to make this analysis: for example, we will assume that
 players pick a deck purely based on its expected win probability in a single match
 all players play with equal skill
 all players have perfect information about the metagame and adapt instantly to changes in the metagame;
and of course there are many potential objections to each of these assumptions! But let us forge ahead anyway, since we need some reasonable starting point for making our analysis.
If players have perfect information about the metagame, they can look at the vector w and draw the following conclusions: if they are already playing Vehicles, they are already playing the best deck, and have no reason to change decks. If they are playing the Jank deck, they notice that they can strictly increase their win probability by dropping Jank and picking up Vehicles. Therefore over time, there will be a shift in players away from Jank and towards Vehicles. This shift will continue until either (i) all players are playing Vehicles, or (ii) both decks have the same match win probability, i.e., 1/2, in which case all players are happy with their deck choice and do not change.
Let us call a metagame (P,m) where all players are happy with their current deck an equilibrium metagame. We can characterize such a metagame mathematically as one where, writing w = Pm for the vector of win probabilities:
 w_i = 1/2 for every deck i with m_i > 0;
 w_i <= 1/2 for every deck i;
 m_i >= 0 for every deck i.
The decks with m_i=0 are those that have been extirpated from the metagame due to being too poor against the rest of the field. Let us call such decks dominated. Every nondominated deck must have a win probability of exactly 1/2, otherwise everyone would switch to a better deck with a higher win percentage. The last condition, that m_i >= 0, encodes the fact that the fraction of the metagame occupied by a deck cannot be negative.
Computing the equilibrium metagame is not so trivial (it is a type of quadratic programming problem called a linear complementarity problem (LCP), and as far as I can tell a nasty one at that, since the matrix P is not symmetric), but the code below does so, essentially by checking every possible set of dominated decks and solving for a possible equilibrium with those decks dominated. If I run the code on the Vehicles example, it is obvious that Vehicles dominates the Jank brew, and indeed I get that the equilibrium metagame is
Equilibrium Metagame:
MarduVehicles 100%
RogueBrew 0%

Perhaps more interesting is applying the code to the Vintage metagame described in Eberhart's P9 results. We get
Equilibrium Metagame:
Shops 24.9007%
Eldrazi 24.4953%
Dredge 8.3284%
Combo 20.9646%
Oath 6.50955%
BigBlue 10.6315%
Mentor 0%
BlueControl 0%
Other 4.16994%

which says that, according to the P9 win percentages, it is incorrect to play Mentor(!!!). Of course it is easy to explain away this conclusion: the win percentages in Eberhart's data are not very robust (Mentor doesn't really have a 0% win probability against Dredge and Combo, for instance), weaker players tend to play the most popular deck, depressing Mentor's win statistics, etc. But it's a starting point if you want to play around with how altering the win percentages of decks against each other alters the expected metagame balance. For instance, it is possible to observe several counterintuitive effects:
 Banning a card that weakens a deck (decreases its win percentage against the entire field) can increase the deck's prevalence in the metagame;
 Conversely, a new printing that strictly improves an existing deck can sometimes cause its presence in the metagame to decrease;
 Adding a new deck that is good against the current best deck, but weak against the rest of the field (which is the effect of printing more cards like Flusterstorm, Gush, Library of Alexandria, etc) often causes the rest of the field's metagame prevalence to increase at the cost of both the new deck and the old best deck.
Here is the code if you want to try it yourself. You will need the Eigen linear algebra library. The code expects the first line of the input to be the number N of decks in the metagame, followed by N lines of N+1 columns each, where each line begins with the deck name (one word, no spaces) followed by N floating-point numbers listing the deck's win percentage against each deck in the metagame (including itself).
#include <Eigen/Core>
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>
#include <string>
using namespace std;

bool hasEquilibrium(const Eigen::MatrixXd &P, int dominated, Eigen::VectorXd &result)
{
    int ndecks = P.rows();
    result.resize(ndecks);
    int idx = 0;
    vector<int> idxmap;
    while (dominated > 0)
    {
        if (dominated % 2 == 1)
            idxmap.push_back(idx);
        idx++;
        dominated /= 2;
    }
    int subdecks = idxmap.size();
    Eigen::MatrixXd subP(subdecks, subdecks);
    for (int i = 0; i < subdecks; i++)
        for (int j = 0; j < subdecks; j++)
            subP(i, j) = P(idxmap[i], idxmap[j]);
    Eigen::VectorXd rhs(subdecks);
    rhs.setConstant(0.5);
    Eigen::FullPivHouseholderQR<Eigen::MatrixXd> solver(subP);
    Eigen::VectorXd sol = solver.solve(rhs);
    if ((subP * sol - rhs).norm() > 1e-8)
        cerr << "Warning: linear solve failed on domination strategy " << dominated << endl;
    for (int i = 0; i < subdecks; i++)
        if (sol[i] < -1e-8)
            return false;
    result.setZero();
    for (int i = 0; i < subdecks; i++)
        result[idxmap[i]] = sol[i];
    Eigen::VectorXd candidate = P * result;
    for (int i = 0; i < ndecks; i++)
        if (candidate[i] - 0.5 > 1e-8)
            return false;
    return true;
}

int main()
{
    int ndecks;
    cin >> ndecks;
    if (ndecks < 1)
    {
        cerr << "Error: you must provide at least one deck win percentage to analyze" << endl;
        return 1;
    }
    if (ndecks > 30)
    {
        cerr << "Error: too many decks" << endl;
        return 1;
    }
    if (ndecks > 18)
    {
        cerr << "Warning: this analysis runs in exponential time. It may take quite a while to process a metagame with " << ndecks << " decks" << endl;
    }
    Eigen::MatrixXd P(ndecks, ndecks);
    vector<string> names;
    for (int i = 0; i < ndecks; i++)
    {
        string name;
        cin >> name;
        names.push_back(name);
        for (int j = 0; j < ndecks; j++)
        {
            double winpercent;
            cin >> winpercent;
            P(i, j) = winpercent;
        }
    }
    for (int i = 0; i < ndecks; i++)
        for (int j = i; j < ndecks; j++)
            if (fabs(P(i, j) + P(j, i) - 1.0) > 1e-8)
                cerr << "Warning: entries (" << i + 1 << ", " << j + 1 << ") and (" << j + 1 << ", " << i + 1 << ") don't sum to one!" << endl;

    // check for all possible sets of dominated decks
    for (int dominated = 1; dominated < (1 << ndecks); dominated++)
    {
        Eigen::VectorXd breakdown;
        if (hasEquilibrium(P, dominated, breakdown))
        {
            cout << "Equilibrium Metagame:" << endl;
            for (int i = 0; i < ndecks; i++)
                cout << names[i] << " " << 100 * breakdown[i] << '%' << endl;
        }
    }
}
EDIT: A few words about the next steps:

More reliable data is needed to fill out the match win probability matrix for the current set of Vintage archetypes. I used the P9 data as a quick way to get started, but a single-digit number of games is woefully inadequate for determining the match win probability of two decks against each other.

Now it is possible to play around with the data: you can simulate the effect of adding a new printing, or restricting a card, by adjusting the win probabilities between decks, and/or adding new decks to the metagame. Of course estimating the effect of restrictions etc. on match win percentage is not an exact science (unless somebody is willing to do a lot of playtesting), but what the program above will allow you to do is to quantify the exact and often counterintuitive relationship between changes in match win percentage and changes in metagame balance. "Restricting Gush will actually help Mentor" is the kind of claim that can be tested, for instance.

Wow. Just.... wow.

Thanks for doing this. I'm interested in looking into how you're doing this (I know both linear algebra and C++ so hopefully I won't need any help). Can you let us know what you input as the win percentages of each archetype / how you arrived at those numbers? There are obviously lots of ways to get a single win percentage out of all the P9 data.

@diophan I'm straight up using the data from your spreadsheet:
9
Shops       0.50 0.50 0.67 0.50 0.40 0.33 0.67 0.60 0.75
Eldrazi     0.50 0.50 0.67 0.33 0.83 0.50 0.50 0.83 0.50
Dredge      0.33 0.33 0.50 1.00 0.50 0.50 1.00 0.60 0.00
Combo       0.50 0.67 0.00 0.50 0.50 0.50 1.00 0.50 0.50
Oath        0.60 0.17 0.50 0.50 0.50 0.83 0.00 1.00 1.00
BigBlue     0.67 0.50 0.50 0.50 0.17 0.50 0.50 0.83 0.00
Mentor      0.33 0.50 0.00 0.00 1.00 0.50 0.50 0.57 0.75
BlueControl 0.40 0.17 0.40 0.50 0.00 0.17 0.43 0.50 0.67
Other       0.25 0.50 1.00 0.50 0.00 1.00 0.25 0.33 0.50
Of course a single P9 is not enough data to draw any firm conclusions about match win percentage; WotC, of course, has reams of data from the dailies and two-man queues.

Evouga, why don't you look at the averages over the past year?

@desolutionist said in Mathematical analysis of the metagame (you be the DCI):
Evouga, why don't you look at the averages over the past year?
Excellent idea. Is there a cutoff date that is especially meaningful? When was the last metagame shift, before the recent restrictions?

It's hard. So much has changed in the last year. Walking Ballista was printed in January, Paradoxical Outcome in September, Leovold in August (not as big, but I do love Leovold). Last summer Grixis Pyromancer was fantastic; now it is unplayable. While a lot has changed, I would probably start after the restriction of Lodestone Golem last spring. It wouldn't be perfect, but it would give probably the best general idea of matchup percentages.

Paradoxical has been around for almost a year, I'd start with that printing. Unless you want to wait until something else is printed; like you said 1 P9 is not enough and with the current rate of printings, there is something new every other set.

Unfortunately the problem with going back very far is that we've tweaked the classification. Hypothetically someone could go back and change the archetypes around to have a consistent classification (the win rates will update themselves).

@diophan One could just do key cards rather than archetypes (which by their very nature are somewhat arbitrary... #imlookingatyoujoesephcampbell)

So I scraped the metagame reports from Diophan and ChubbyRain from last June until this April, including P9s, Eternal Extravaganzas, the TMD Open and Vintage Champs.
The final matrix (of win counts this time, not win %; to calculate win percentage of deck i against deck j, compute M(i,j) / (M(i,j) + M(j,i)) ):
9
Shops       96  181 58 48 36 63 43 80  40
Gush        138 354 78 96 77 97 99 132 83
Dredge      38  92  17 34 18 23 25 30  25
BigBlue     43  82  27 27 13 30 25 27  28
BlueControl 26  61  10 19 20 24 18 19  17
Combo       34  72  38 21 18 31 25 36  26
Oath        41  80  31 37 14 31 40 45  30
Eldrazi     53  123 47 30 21 42 44 49  32
Other       21  68  26 21 6  20 25 25  12
I plugged this data into my code and got the surprising result:
Equilibrium Metagame:
Shops 100%
Gush 0%
Dredge 0%
BigBlue 0%
BlueControl 0%
Combo 0%
Oath 0%
Eldrazi 0%
Other 0%
Sure enough, according to the aggregated match win data, Shops has a winning match percentage against the entire field... not sure what to make of this.

If your program and theory are good, then I guess it sort of reinforces what we all already know... Shops is the best.
edit: If you get a chance, can you add in some // comments? I'd like to try to trace the program but cannot decipher it.

@desolutionist said in Mathematical analysis of the metagame (you be the DCI):
If your program and theory are good, then I guess it sort of reinforces what we all already know... Shops is the best.
edit: If you get a chance, can you add in some // comments? I'd like to try to trace the program but cannot decipher it.
Here is the code with more comments. I'm happy to explain if there are parts that are still confusing.
#include <Eigen/Core>
#include <Eigen/Dense>
#include <iostream>
#include <vector>
#include <string>
using namespace std;

// Calculates whether the metagame is in equilibrium, given a matrix of win probabilities P and a set of
// dominated decks. 'dominated' is a bit field encoding whether each deck is present in the metagame, or
// hated out. A '0' in bit position i means that deck i is dominated (not present in the equilibrium
// metagame) and '1' means the deck is present.
//
// Returns true if an equilibrium exists given the set of dominated decks. If the equilibrium exists, the
// composition of the metagame is stored in 'result'.
bool hasEquilibrium(const Eigen::MatrixXd &P, int dominated, Eigen::VectorXd &result)
{
    int ndecks = P.rows();
    result.resize(ndecks);

    // constructs a list of nondominated decks. idxmap[i] is the ith nondominated deck.
    int idx = 0;
    vector<int> idxmap;
    while (dominated > 0)
    {
        if (dominated % 2 == 1)
            idxmap.push_back(idx);
        idx++;
        dominated /= 2;
    }
    int subdecks = idxmap.size();

    // Extracts from P the submatrix Pn corresponding to only the nondominated decks.
    Eigen::MatrixXd subP(subdecks, subdecks);
    for (int i = 0; i < subdecks; i++)
        for (int j = 0; j < subdecks; j++)
            subP(i, j) = P(idxmap[i], idxmap[j]);

    // solves Pn m = 0.5.
    Eigen::VectorXd rhs(subdecks);
    rhs.setConstant(0.5);
    Eigen::FullPivHouseholderQR<Eigen::MatrixXd> solver(subP);
    Eigen::VectorXd sol = solver.solve(rhs);

    // check that the linear system did indeed get solved
    if ((subP * sol - rhs).norm() > 1e-8)
        cerr << "Warning: linear solve failed on domination strategy " << dominated << endl;

    // the metagame composition m can fail to be an equilibrium metagame for several reasons:
    // - it requires that one of the decks make up a negative proportion of the metagame;
    // - it implies that one of the "dominated" decks has a >50% win percentage (in which
    //   case we were incorrect in labeling that deck as dominated.)

    // check the first condition
    for (int i = 0; i < subdecks; i++)
        if (sol[i] < -1e-8)
            return false;

    // check the second. Calculate the win percentage of all decks, given P and m.
    result.setZero();
    for (int i = 0; i < subdecks; i++)
        result[idxmap[i]] = sol[i];
    Eigen::VectorXd candidate = P * result;

    // check all win percentages are <= 50%.
    // Of course for the nondominated decks we will get a win percentage of exactly 50%,
    // so we only really care about what the value is for the dominated decks.
    for (int i = 0; i < ndecks; i++)
        if (candidate[i] - 0.5 > 1e-8)
            return false;

    return true;
}

int main()
{
    // read in the number of decks
    int ndecks;
    cin >> ndecks;
    if (ndecks < 1)
    {
        cerr << "Error: you must provide at least one deck win percentage to analyze" << endl;
        return 1;
    }
    if (ndecks > 30)
    {
        cerr << "Error: too many decks" << endl;
        return 1;
    }
    if (ndecks > 18)
    {
        cerr << "Warning: this analysis runs in exponential time. It may take quite a while to process a metagame with " << ndecks << " decks" << endl;
    }

    // read in deck names and win counts
    Eigen::MatrixXd wins(ndecks, ndecks);
    vector<string> names;
    for (int i = 0; i < ndecks; i++)
    {
        string name;
        cin >> name;
        names.push_back(name);
        for (int j = 0; j < ndecks; j++)
        {
            int nwins;
            cin >> nwins;
            wins(i, j) = nwins;
        }
    }

    // construct the win probability matrix from the win counts.
    Eigen::MatrixXd P(ndecks, ndecks);
    for (int i = 0; i < ndecks; i++)
        for (int j = 0; j < ndecks; j++)
            P(i, j) = double(wins(i, j)) / double(wins(i, j) + wins(j, i));

    // check for all possible sets of dominated decks. Each integer from 1 to 2^{# decks} encodes a
    // different set of possible dominated decks, where each '0' bit in the integer represents a
    // dominated deck. Since it is impossible to have a metagame where all decks are dominated, we
    // skip the dominated = 0 case.
    for (int dominated = 1; dominated < (1 << ndecks); dominated++)
    {
        Eigen::VectorXd breakdown;
        if (hasEquilibrium(P, dominated, breakdown))
        {
            cout << "Equilibrium Metagame:" << endl;
            for (int i = 0; i < ndecks; i++)
                cout << names[i] << " " << 100 * breakdown[i] << '%' << endl;
        }
    }
}

So assuming we all agree that Vintage is not really a onedeck format, there are a few possible explanations for the above data:

The format's best players play Shops, and its worst players play the decks that prey on Shops, so the win percentage of Shops against its predators is not accurately reflected in tournament results;

Even though Shops as a whole has a winning match percentage against all other archetypes as a whole, individual builds of Shops are weak against individual builds of, say, Big Blue, and this finer-grained relationship between the decks is not captured in the aggregate data.
I have no idea how to correct for #1 (if it's even true), but #2 could be fixed by going back through Diophan and ChubbyRain's data and doing a more fine-grained taxonomy, splitting archetypes that contain multiple, strategically varied builds.
I can try doing this, but as it represents a very significant time investment, I'd like to get your thoughts first about what taxonomy you consider reasonable: if you had to decompose the metagame over the past year into ~15 archetypes so that

decks with similar strategic roles in the metagame (and similar win percentage against other archetypes) are grouped into the same archetype;

decks with different strategic role, but similar core, are classed as different archetypes;
what groups would you pick?


Based on your assumptions and what you input for winrates the second time around, the conclusion is clear, since Shops does not have a sub-50% winrate against anything.
I discussed this a bit with Matt, and the assumption that winrates are static is what's giving a lopsided equilibrium in this state. Realistically speaking, the larger a proportion of the metagame a deck is, the more people will tune their decks to beat it. This is why we see so many Pyroblasts in a metagame where 60%+ of decks are blue. Note that I'm not criticizing what you did: I think it's interesting, and it's also unclear how (or at minimum a pain) to model tuning decks.
I'm rolling with the assumption that winrates are static for the remainder:
As mentioned before, the conclusion that Shops becomes 100% of the meta appears to depend on none of its winrates being below 50%. You'll notice in your first set of data that Shops had a 40% winrate against Oath, but in the aggregate it went 43-41 against Oath, a ~51% winrate.
Because of this, a natural question is "how likely is it that Shops really has a sub-50% winrate against Oath?" The Oath versus Shops matchup is much discussed, since Oath is in many people's eyes the deck to play if you expect a bunch of Shops. I do not know much of any stats, but it's fun to learn, so I tried to figure this out. We have a binomial distribution, a sample of n=84, and a winrate of ~51% in the sample. I had no idea how to figure this out, but based on https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval the first mentioned method (the normal approximation) seems reasonably accurate. Assuming I know how to use a z-table and didn't mess up the calculation, it's ~41% likely that Shops has a sub-50% winrate against Oath, so the data is far from conclusive either way. Anyone who actually knows stats should feel free to call me out for doing something stupid :)

I'd like to plot the fluctuations in the actual metagame breakdowns (from month to month).

Many thanks for doing this analysis. I've always been interested in game theory and the modern vintage metagame is a great case study!
I actually think it's entirely possible that Vintage is a 'one-deck format'. There are many, many reasons for this; here are just a few off the top of my head:
We can't assume that everyone has perfect information. With the advent of MODO vintage and analyses like this one, we are trending towards more information, but it is far from perfect. Some players still show up with outdated decks. And I'd argue most don't know the 'win percentage' of archetype A vs. Archetype B.
We can't assume everyone is a rational player, whose sole goal is to win. People have deck and play style preferences that push them towards suboptimal strategies. Some play to have fun, so winning is secondary.
Another hypothesis is it is much easier to 'screw up' playing blue than shops, i.e. blue decks have a larger decision tree and therefore also more opportunities to make mistakes. Given that most vintage players are fairly casual and are not on the pro tour/grand prix/scg tour grind, they are more likely to make mistakes when on a blue deck.
I don't know if there's a solution to this problem outside of more restrictions. The power of Shops, ever since the printing of lock pieces started in Mirrodin, has always been that even if you pack a metric ton of hate, you still lose if you can't cast any of it. That problem still exists and will continue to exist as long as there are turn-1 spheres powered out by Workshops. We know Workshops are not going anywhere, so that leaves Thorn or Sphere, both of which seem way too underpowered to be restricted.
P.S. Frank Karsten wrote an excellent article examining what makes a good constructed format using game theory; I highly recommend everyone read it: https://www.channelfireball.com/articles/whatcangametheorytelluswillmakeagoodconstructedformat/

@diophan said in Mathematical analysis of the metagame (you be the DCI):
I discussed this a bit with Matt and the assumption that winrates are static is what's giving a lopsided equilibrium in this state. Realistically speaking the higher proportion a deck is of the metagame the more people will tune their deck to beat it. This is why we see so many pyroblasts in a metagame where 60%+ of decks are blue. Note that I'm not criticizing what you did: I think it's interesting and it's also unclear how (or minimally a pain) to model tuning decks.
One thing you can do is to introduce a new hypothetical deck to the metagame with adjusted win percentages. These percentages are inevitably going to be to some extent hypothetical, but for example, say you added a tuned Gush deck with artifact hate, which boosted the matchup against Shops to 55% at the cost of 5% less effectiveness against other blue decks:
10
Shops       0.5 0.567398 0.604167 0.527473 0.580645 0.649485 0.511905 0.601504 0.655738 0.45
Gush        0.432602 0.5 0.458824 0.539326 0.557971 0.573964 0.553073 0.517647 0.549669 0.55
Dredge      0.395833 0.541176 0.5 0.557377 0.642857 0.377049 0.446429 0.38961 0.490196 0.54
BigBlue     0.472527 0.460674 0.442623 0.5 0.40625 0.588235 0.403226 0.473684 0.571429 0.51
BlueControl 0.419355 0.442029 0.357143 0.59375 0.5 0.571429 0.5625 0.475 0.73913 0.48
Combo       0.350515 0.426036 0.622951 0.411765 0.428571 0.5 0.446429 0.461538 0.565217 0.48
Oath        0.488095 0.446927 0.553571 0.596774 0.4375 0.553571 0.5 0.505618 0.545455 0.45
Eldrazi     0.398496 0.482353 0.61039 0.526316 0.525 0.538462 0.494382 0.5 0.561404 0.49
Other       0.344262 0.450331 0.509804 0.428571 0.26087 0.434783 0.454545 0.438596 0.5 0.45
GushPrime   0.55 0.45 0.46 0.49 0.52 0.52 0.55 0.51 0.55 0.5
You get
Shops 29.8689%
Gush 29.8689%
Dredge 0%
BigBlue 0%
BlueControl 0%
Combo 0%
Oath 0%
Eldrazi 0%
Other 0%
GushPrime 40.2621%
Now the new Gush deck (and its original, Shops-soft variant, which preys on the Shops-hardened version) take up 70% of the metagame. With enough match data about different variants of the different decks, presumably these percentage adjustments could be made less hypothetical.

I had the same thought, which is why I wanted to look at the change in metagame breakdowns to see exactly what percent of people switch decks as opposed to tweaking their current one.
I plotted the metagame breakdowns from the P9 Challenges since October and the common trend seems to be that an increase in percentage of metagame share for a particular archetype was almost always followed by a decrease the following month and vice versa. There was only one instance in which a deck (Oath) saw a continued decrease in metagame share over two consecutive events.