FM18 Labs: The Final Decisions Results

(12/18 Note: Sports Interactive have examined the match engine code and not found any egregious errors, while also pointing out a flaw in the testing methodology. There is further testing to be done with updated methodology. Give them serious credit for the speed with which they’ve looked into this!)

The response to the last article, where I concluded that the Decisions attribute is likely bugged and behaves the opposite of its expected behavior, has been immediate and passionate on both the FM Subreddit and in the Discord. The most common response was that the information is interesting but not definitive, because testing the one stat in a vacuum removes too many variables. That’s certainly possible, so a new set of tests has been devised and executed, and the results should put this one to bed.

To come up with a real-world scenario, we want an average team for its league. To that end, I simulated the 2017-18 Premier League 20 times, and decided on Newcastle:

Name Wins Draws Losses GF GA GD Points League Table Place
Newcastle 13.35 8.45 16.2 49.5 55.95 -6.45 48.5 10.9
Average results after 20 seasons, 760 match-days (n=760)

So that is our baseline Newcastle side. I created two more Workshop files; both edit only Newcastle, and only the Decisions attribute of all players. In one file, all players have Decisions set to 1. In the other, all players have Decisions set to 20. Each of those two files was then used to simulate the 2017-18 season a further 20+ times. (Thank you to T6, who also submitted data for the Decisions 1 file.) Then each of the three data sets (Unmodified, Decisions 1, Decisions 20) had its best and worst seasons dropped.
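The aggregation described above, dropping each file’s single best and worst season and averaging the rest, amounts to a light trimmed mean. A minimal sketch of that step (the season values below are made up for illustration, not real results):

```python
def trimmed_season_average(points_per_season):
    """Drop the single best and worst seasons, then average the remainder.

    Mirrors the aggregation described above; needs at least three seasons.
    """
    if len(points_per_season) < 3:
        raise ValueError("need at least 3 seasons to drop best and worst")
    trimmed = sorted(points_per_season)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical example: 20 simulated seasons -> average over the middle 18
seasons = [48, 52, 61, 45, 50, 47, 55, 49, 53, 44,
           58, 46, 51, 50, 49, 54, 43, 62, 48, 50]
print(round(trimmed_season_average(seasons), 2))
```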

Average League Position
Decisions 1 Unmodified Decisions 20
7.55 10.83 18.33

I suspect the Decisions 20 file would have performed even worse without the constant coach sackings and owners splashing cash to bring in new players with lower Decisions attributes.

This was a very common sight during Decisions 20 testing.

Average Points
Decisions 1 Unmodified Decisions 20
57.77 48.33 30.83

You are reading that correctly. Over the course of a season, the team with Decisions set to 1 earns nearly double the points of the team with Decisions set to 20.

I’ve made a couple of charts to show how the seasons went from a league-table perspective, and I need to reiterate that the only thing changed between the three data sets is the Decisions attribute:

Each data set’s final position in the league table, 18 seasons shown for each.

Or, to visualize it with the results together…

The Decisions 1 and Decisions 20 files have zero overlap in 18 iterations.

A couple of other interesting things…

European Qualifications
Decisions 1 Unmodified Decisions 20
4 0 0

Relegations
Decisions 1 Unmodified Decisions 20
0 0 13

I want to point out something else that has come up frequently: all I’m after here is for this attribute to behave in a logical manner. I’m not asserting that you should seek out players with Decisions set to 1, or avoid players with Decisions set to 20. What I’m finding is that players with high Decisions attributes may not be playing to their fullest ability, due to what appears to be an inversion somewhere in the match engine. What I’m also finding is that Decisions has a massive impact on the team, far more than a single attribute probably should, and that’s definitely the case when the behavior is the opposite of what one would consider logical.

One more comparison I want to look at: how the Newcastle Decisions 20 squad compares to the Major League Statistics Decisions 20 squad (where all other stats are set to 10 for all players):

Name Wins Draws Losses GF GA GD Points
Newcastle – Decisions 1 16.08 9.32 12.6 54.56 48.24 6.32 57.56
Newcastle – Decisions 20 7.5 8.8 21.2 35.85 62.75 -26.9 31.3
MLS18 – Decisions 20 5.43 8.57 21 24 51.76 -27.76 24.85

So Newcastle, full of players that are capable of playing in England’s top flight, would likely do fine matched up against a squad of players with all 10s and the identical 20 in Decisions.

I’m not sure what there is left to say. Decisions does not behave as it should, whether tested in a vacuum as the only variable or in real-world testing where many factors are in play. We still see huge swings in team performance that are solely due to the Decisions attribute, and they run in the exact opposite direction from what logic dictates. SI needs to fix this.

I believe I’m done working on Decisions unless someone comes up with a huge revelation or oversight on my part. I’ll be working more with the results of Major League Statistics in further articles, as well as new tests still to come. The Decisions Linearity Testing is over at this point and I’m marking the experiments as completed on Bearpuncher Labs. So if you want to help, please keep submitting those Major League Statistics results!

5 thoughts on “FM18 Labs: The Final Decisions Results”

  • There was a good point raised by one of the devs on the SI forums. He said that the weighting of Decisions is pretty much the highest in the game, so when it is raised, the players’ other stats end up among the lowest; the team with very high Decisions will have its other attributes drop below the initial 10, and that outweighs the change in Decisions and drives the better or worse results. So unless you are using some third-party tool to freeze all players’ attributes, I think the results are pretty much meaningless.

  • You haven’t taken CA into account at all have you?

    When you raise Decisions to 20, all the other attributes will drop to balance the team’s CA.

    This is not a fair test, and is misleading.

    • This was indeed brought up on the SI forums: the attribute drop caused by raising the stat may be a major factor. New tests are being conducted where every player’s CA has also been adjusted for their raised attribute; for Decisions, for example, this necessitated raising their CA from 105 to 143. This does fix player attribute decay, but thus far in testing it has not significantly affected the results as they relate to Decisions.

  • Or he could have frozen the statistics, you know? But nonetheless, I tried to modify the update to give a balanced CA, so players keep 20 in the tested attribute and 10 in all others, and I changed the feet so they can use both equally (in the normal file both feet are at 10, except the game engine doesn’t allow that, so players are randomly left-footed or right-footed at the start of the save). Here’s the file

  • I’m a bit behind and missed most of the discussion online about this. This might have already been mentioned, but as stats are dynamic and it’s been mentioned that players were brought in etc., it would be worth recording the average Decisions attribute at the start, middle and end of the season to see what the actual comparison is.

    If the attribute through training and transfers regresses to the mean you would expect the 20 to lower and the 1 to increase resulting in a gap that is smaller than the 19 implied by the initial set up.

    My background is in research and statistics, so whilst I quite like the idea of the experiment (and there’s some interesting information here), I still think there are a lot of confounding variables to be accounted for, but… the nature of FM (at least the version we have access to) means it is really hard to control the variables. It’s meant to be dynamic, so it’s hard to strip that out. What I would suggest is collecting a lot more data, but rather than controlling for everything, statistically analyse it using a multi-linear regression.

    This would allow for interactions to be discovered and for variables to be controlled for (statistically) rather than experimentally. It’s not perfect as it is a bit of a fishing expedition but it might shed light on what might be going on with decisions. Something I’d be happy to help out with at the analysis stage 🙂
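The regression approach suggested in the comment above could be sketched like this. All numbers and column names here are hypothetical, purely for illustration; a real analysis would use the recorded per-season data:

```python
import numpy as np

# Hypothetical per-season records; each row is one simulated season.
# Columns: average squad Decisions, average of all other attributes, points.
data = np.array([
    [1.0,  10.0, 58.0],
    [1.0,   9.5, 56.0],
    [10.0, 10.0, 49.0],
    [10.0, 10.5, 47.0],
    [20.0, 10.0, 31.0],
    [20.0,  9.0, 30.0],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # intercept + predictors
y = data[:, 2]

# Ordinary least squares: points ~ decisions + other_attributes.
# This controls for the other attributes statistically, as the comment
# suggests, rather than trying to hold them fixed experimentally.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_decisions, b_other = coeffs
print(f"points per +1 Decisions, holding other attributes fixed: {b_decisions:.2f}")
```

A negative coefficient on Decisions, after controlling for the other attributes, would be consistent with the inversion described in the article; interaction terms (e.g. Decisions × other attributes) could be added as extra columns of X.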
