How modifiers work (maybe)
DNA3000
Member, Guardian › Posts: 19,678
This is one of those things that I think is obvious to some, but not everyone. Also, even people who know this may not think of it in quite this way, and this way might be simpler overall. But my real reason for explaining this is to try to find out where the explanation might be broken. More on that later.
First of all, what do I mean by "modifier?" A modifier is basically any effect that alters a stat. Buffs, debuffs, and passive effects that affect a stat are all modifiers. Fury is a modifier (that modifies attack). Armor break is a modifier (that modifies armor rating). "Modifier" is just easier and shorter to use than "all buffs, debuffs, and passive effects that affect a stat."
So for those that don't know, or are confused, or may have this wrong, here are two questions:
1. What is the difference between a multiplicative and additive modifier?
2. In what order are modifiers calculated?
These are actually trick questions. The answer to the first is "fundamentally nothing" and the answer to the second is "the question is meaningless." Here's why.
Every once in a while someone comes along and claims that -100% AAR (ability accuracy reduction) should reduce ability accuracy to zero, regardless of any other effects that occur. For example, suppose we have -100% AAR and a +50% AA (ability accuracy) effect. Either the -100% AAR is applied first, reducing AA to zero, in which case +50% of zero would still be zero; or the -100% AAR is applied last, in which case no matter what AA is at that point, -100% should reduce it to zero. The words "logical" and "mathematical" are often tossed in there. In fact, AA normally ends up at 50% in that situation. Sometimes the counterargument is that those modifiers aren't actually multiplicative, they are additive, so AA is just 100% - 100% + 50% = 50%. This is actually *wrong*.
The correct answer is: both the -100% modifier and the +50% modifier are relative to the *base* value. So -100% is -100% of base, and +50% is +50% of base, so we have Base - 100% x Base + 50% x Base = 50% x Base. Since base ability accuracy is usually 100%, the final value is 50%. But why?
Here's the actual reason. In terms of what the game itself understands, all modifiers are additive. All modifiers simply add their value to the base. But there are two kinds of modifiers: normal modifiers and relative modifiers. Normal modifiers literally add their value to the base stat they modify. So if you have a +100 Fury buff, you add 100 to the base attack value. Relative modifiers are also additive, but their actual value is relative to the base value, which is another way of saying their value is some multiple of the base value. So if you had a +100 Fury buff and it was designated a relative modifier, it wouldn't add 100 to base attack; it would add 100 x BaseAttack to the base value of attack. What we humans see as a "percentage modifier" is actually a relative modifier. +5% is actually a 0.05 relative modifier. -100% is actually a -1.0 relative modifier.
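To make that concrete, here's a minimal sketch of the "everything adds to base" idea in Python. The `Modifier` type and `resolve_stat` function are made up for illustration; this is not the game's actual code, just the arithmetic described above:

```python
from dataclasses import dataclass

@dataclass
class Modifier:
    value: float            # +100 for a normal Fury, -1.0 for "-100%", 0.5 for "+50%"
    relative: bool = False  # relative modifiers are a multiple of the base value

def resolve_stat(base: float, modifiers: list[Modifier]) -> float:
    """Every modifier contributes an additive term; relative ones scale off base."""
    total = base
    for mod in modifiers:
        total += mod.value * base if mod.relative else mod.value
    return total

# A normal +100 Fury on 1000 base attack just adds 100:
print(resolve_stat(1000, [Modifier(100)]))                 # 1100
# The same +100 designated relative would add 100 x BaseAttack instead:
print(resolve_stat(1000, [Modifier(100, relative=True)]))  # 101000
```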
With that said, here's the rule for all modifiers: all modifiers add to the base value of the stat.
That's it. Since all modifiers are additive, by definition order doesn't matter. It never matters in what order you add numbers together; you always get the same result. Question 2 above is basically meaningless because it doesn't matter. Furthermore, all modifiers are actually additive, so question 1 above is also more or less meaningless. True: there is some multiplication involved in *calculating* the value of a relative modifier, but that modifier is added to the base value of the stat it modifies just like all other modifiers. Those of us who have been calling these "multiplicative modifiers" (including me) have been unintentionally steering some people wrong. No modifier is "multiplicative"; they are all additive in terms of how they work. Some have absolute values and some have relative values, but all of them add to the stat.
This solves the riddle of -100% AAR. The people who think that should reduce ability accuracy to zero believe that -100% is minus one hundred percent of current ability accuracy. It isn't. It is minus one hundred percent of *base* ability accuracy. It is just a kind of shorthand for the *additive* modifier "negative one point zero times base, added to the ability accuracy stat." It does not *multiply* anything to zero, it *subtracts* something down to zero, but other things can still add to that value to bring it back to a positive number.
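In the same spirit, the -100% AAR plus +50% AA example is just this arithmetic (base ability accuracy assumed to be 100%, as is usually the case):

```python
base_aa = 1.0            # base ability accuracy is usually 100%
aar = -1.0 * base_aa     # "-100% AAR" is a -1.0 relative modifier: subtracts 1.0 x base
aa_buff = 0.5 * base_aa  # "+50% AA" is a 0.5 relative modifier: adds 0.5 x base

print(base_aa + aar + aa_buff)  # 0.5 -> 50% ability accuracy, in any order
```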
For those who basically "get" the whole "percents are relative to base value" thing, this is all old news. But for those that don't, and for those that know the math but might not know why, here's all you have to remember:
1. All modifiers add to the base value.
2. Non-percentage modifiers are direct or "normal" modifiers by default.
3. Percentage modifiers are relative modifiers by default. You determine their value by taking a percentage of the base value of the stat they modify.
4. The key word "flat" means a percentage modifier should be treated as a direct modifier, not a relative one.
If base ability accuracy is 60%, then a +10% AA modifier is a +10% relative modifier, which is equal to a +6% direct modifier, and AA increases from 60% to 66%. But a +10% flat modifier should be interpreted as a +10% direct modifier, which means ability accuracy increases from 60% to 70%.
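Here is the same 60% example as arithmetic, in case the words are easier to follow with numbers next to them:

```python
base_aa = 0.60                   # 60% base ability accuracy

relative_bonus = 0.10 * base_aa  # "+10% AA" as a relative modifier: +6 points
flat_bonus = 0.10                # "+10% flat": treated as a direct modifier, +10 points

print(f"{base_aa + relative_bonus:.0%}")  # 66%
print(f"{base_aa + flat_bonus:.0%}")      # 70%
```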
For those that did not know how this works, hopefully this helps. For those that already did, here's my question: where in the game does this explanation fail? Are there direct percentage modifiers that are not explicitly stated to be "flat" (I'm pretty sure there are)? Are there relative modifiers that are not described as such, or flat modifiers that are not described as such? Is there anywhere in the game where a modifier appears to be working in a completely different way?
My goal here is to a) firm up our understanding of modifiers to make sure we've accounted for all the behavior in the game, b) find and resolve any bugs that might exist surrounding how modifiers work, and c) fix up all the in-game descriptive text that is wrong. If we can gather all that stuff up in one place, I can try to direct the developers to take a look at it and maybe fix a bunch of it.
Also, if anyone thinks my description of how this stuff works is completely wrong, please explain. I've been wrong before, and if someone has a better or more accurate description of how this stuff works, I'd like to hear it. If it is provably better, I'll adopt it. One thing I'm hopeful might happen is that we eventually stop calling modifiers "multiplicative" and start calling them "relative," because I think that terminology is less confusing. But if someone has a better idea there as well, I'm all ears.
Comments
KP with Hood has +100% purify ability accuracy. KP at base has a 60% chance to purify, so this would mean a 120% chance to purify debuffs.
But the pacify mastery can cause this to fail (found out the hard way during war) even though it is a 30% reduction to ability accuracy, which as you say should be applied to base and result in a 102% (120 - 18) chance to purify, but it does not work that way.
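For reference, here's the arithmetic the comment is describing under the relative-to-base rule (the 60% base and the +100%/-30% values are taken from the comment, not independently verified):

```python
base_purify = 0.60                 # KP's base chance to purify (per the comment)
hood_synergy = 1.00 * base_purify  # "+100%" relative to base -> +60 points
pacify = -0.30 * base_purify       # "-30%" relative to base -> -18 points

print(f"{base_purify + hood_synergy + pacify:.2f}")  # 1.02 -> 102%, which should never fail
```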
Apparently synergies are treated as new base abilities. (See LOL enrage) credit: @CoatHang3r
Like, just to make up some numbers, the difference between +50% flat and +50% relative would be hard to tell if the base value was 100%, but a lot easier to tell if the base value was 10%. And even if people think, but aren't sure, describing the situation might allow for more careful testing to confirm one way or the other, if other people (like me) were aware of the situation in the first place.
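To illustrate with a throwaway sketch (the function names are made up; this is just the arithmetic):

```python
def with_flat(base: float, pct: float) -> float:
    return base + pct         # "flat": the percentage is added directly

def with_relative(base: float, pct: float) -> float:
    return base + pct * base  # "relative": the percentage is scaled by base first

# At a 100% base, the two readings of "+50%" are indistinguishable:
print(f"{with_flat(1.0, 0.5):.0%} vs {with_relative(1.0, 0.5):.0%}")  # 150% vs 150%
# At a 10% base, they are easy to tell apart:
print(f"{with_flat(0.1, 0.5):.0%} vs {with_relative(0.1, 0.5):.0%}")  # 60% vs 15%
```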
Ability accuracy in particular is one of those parts of the game where many veterans have an instinctive awareness of how things work but new players might be completely lost, and having simpler explanations for the game mechanics combined with more accurate in-game descriptions of how things work would be of great benefit to a large subset of the game's players. As the game becomes more complex over time (simply from getting larger), the amount of stuff players need to be aware of grows. Anywhere we can simplify the game without losing fidelity or accuracy is a long term win in my opinion.
Now, if the attack rating were calculated as a relative modifier, then the opponent should have 0 base attack rating when 2 degens are stacked. Very short testing will show that that's not the case, and if some numbers were crunched I think you'll see that it's actually a 75% attack reduction.
The DAAR, however, does seem to work as described, as there is >=100% DAAR with 2 degens, which can be tested against Electro.
As far as I know, petrify works additively while poison works entirely differently. Two petrifies reduce base healing by 60% (100% - number of petrifies * petrify potency), but poison rather does this: 100% - (100% - poison potency)^number of poisons. Two poisons reduce healing by 49% rather than 60%.
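A quick sketch of the two formulas described above, using a hypothetical 30% potency for both effects (the real in-game potencies may differ):

```python
def healing_left_petrify(potency: float, stacks: int) -> float:
    """Petrify stacks add up: each removes potency x base healing."""
    return 1.0 - stacks * potency

def healing_left_poison(potency: float, stacks: int) -> float:
    """Poison stacks compound: each scales whatever healing is left."""
    return (1.0 - potency) ** stacks

print(f"{healing_left_petrify(0.30, 2):.2f}")  # 0.40 -> healing reduced by 60%
print(f"{healing_left_poison(0.30, 2):.2f}")   # 0.49 -> 49% of healing left (a 51% reduction)
```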
I'm hoping if we find all such examples and put them together in a single pile, looking at them all at once might offer more insight. Alternatively, they might just be singular special case exceptions (in which case the developers should document them in-game as such).
Although I've discussed both of these in the past, I didn't want to bias the discussion here by making a list of the ones I'm aware of or were made aware of by other players. I want to see what other players have to say about these things.
Even if there is someone who knows everything, they aren't going to take the time to answer the question "can you please give me a list of how every single interaction in the game works." That just isn't going to happen.
What *can* happen is we can ask them to improve the in-game documentation of these things, and better explain them to the players. But to ask them to improve the in-game documentation, the players have to actually have some idea of how things are supposed to work in the first place. If players have no idea how things should work, there's no way to know if they aren't working correctly. We have to have some solid idea of what to expect, so we can then tell the devs when they are either explaining things wrong or doing things wrong or both.
The more we know and understand the more leverage we have to get Kabam to fill in the holes in our knowledge. And the more we ask them to explain things correctly, the more pressure we put on them to do things in a consistent way so that they can *be* explained. If everything is just a mystery to most players, it doesn't matter how things work. But if the players are educated to expect things to happen in certain well documented ways, we can leverage that expectation to ask that things happen in ways that integrate with that understanding better.
In an ideal world players shouldn't have to do this, but we don't live in that world. So we do the best we can with the hand we are dealt.
-- DNA 'Granger' 3000
Or does he increase a champ's base 'Offensive' Ability Accuracy by 30%?
I wonder if his ability accuracy is being reduced by 20% of 120%, i.e. 24%. It’s not a huge difference, but it could explain why he gets hit when a layman’s reading says he shouldn’t.
@DNA3000 does the Icarus node fall into this category? Each fury from Icarus increases your attack more than normal, and the difference is noticeable with a lot of furies: at 10 furies from Icarus you gain +159% attack instead of +100%, and it doesn't seem that high, but with 20 furies you gain around +572% attack instead of the +200% you would expect.
What I understood is that each fury multiplies your attack by 1.1 instead of adding a flat +10%, so each subsequent fury increases your attack based on the already-modified value, but I would love to hear your opinion on this subject ^_^
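If that reading is right, the comment's numbers line up with straight compounding. A sketch of the math (this assumes each Icarus fury is +10% and compounds; it is not confirmed game code):

```python
def icarus_bonus(furies: int, per_fury: float = 0.10) -> float:
    """Total attack bonus if each fury multiplies the already-modified attack by 1.1."""
    return (1.0 + per_fury) ** furies - 1.0

print(f"{icarus_bonus(10):.0%}")  # 159%, vs the +100% expected from adding to base
print(f"{icarus_bonus(20):.0%}")  # 573%, vs the +200% expected (the "around 572%" above)
```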
Warlock + Cable is a cracking synergy for both of them.
Cable's base chance to regenerate is 15% when filling a bar of Power, and Warlock's synergy adds 10%.
If this worked as a relative modifier (what we've been calling a multiplier), the result would be 15 + (15 * 0.10) = 16.5%.
But if it's a 'flat' addition to base chance, the result would be a much more respectable 15 + 10 = 25%.
Fighting ROL WS with a 1/25 Cable alongside Warlock:
98 bars of Power filled during the match (phew!)
31 x Regeneration buffs generated
Clearly RNG was on my side in the match, but I'd have to have been extremely lucky to get that many regen buffs with a modified chance of 16.5%, which makes me strongly suspect it's a 'flat' addition rather than a 'relative' addition.
I realise this doesn't constitute proof, so feel free to do more extensive testing DNA - I'll warn you, it's a very tedious fight... 😉
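For what it's worth, the plausibility of that sample under each reading can be checked with a quick binomial tail calculation. This is a rough sketch: it assumes every bar of Power is an independent roll at either 16.5% or 25%, which is itself an assumption about how the ability works:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

bars, regens = 98, 31
print(f"{prob_at_least(regens, bars, 0.165):.5f}")  # on the order of 1 in 10,000 at 16.5%
print(f"{prob_at_least(regens, bars, 0.250):.5f}")  # several percent at 25%: lucky, but plausible
```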
Reduce ability accuracy by 100%
Reduce chance purify ability will trigger by 100% (AA)
Reduce purify ability accuracy by 100% (APOC)
Reduce chance to purify debuffs by 100% (OMEGA)
If a champ starts with 100% ability accuracy and 150% chance to purify debuffs, where would each of those wordings leave the values?
The slow description states it reduces evade ability accuracy by 100%, but despite this, in 99.9% of scenarios it straight up stops the opponent from evading even if they're immune to AAR or have increased AA. This shouldn't be the case if all slow truly did was reduce AA.
The 0.1% of the time where slow doesn't stop evade is when you throw an unblockable special at Spider-Gwen with 4 or more Spider Sense charges.
After testing I found that at 3 Spider Sense charges Spider-Gwen will NEVER evade. Why is that exactly?
SG has +550% ability accuracy against unblockable specials. It's unclear if this is a flat increase or a relative increase, but I think it's now safe to assume it's a relative increase. So with a slow debuff active her AA falls to 450% from 550%.
At 3 Spider Sense charges she has a 21% chance to evade. This is increased to 94.5% with an evade AA of 450%. You would expect SG to have a really high chance to successfully evade, but she never does at 3 Spider Sense charges. She does evade at 4 charges, however. With 4 charges she has a 28% chance to evade, and this is increased to 126%.
With 4 or more charges she is guaranteed to evade, and with 3 or fewer she still has a high chance to evade but never does. This can only be the case if the crucial info missing from slow's description is that it also reduces evade chance by a flat 100%, in addition to reducing evade AA by 100%. This would also explain why slow prevents champs that are immune to AA modification from evading, and also champs with a relatively small increase in AA (like the "best defense" node).
It would be great to hear what you have to say and if you could pass any relevant info to the right people to get the slow description more in line with what it actually does.
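Laying out the numbers from the comment above (all taken from the comment, not independently verified) under the "slow also flatly removes 100% evade chance" hypothesis:

```python
def effective_evade(charge_chance: float, evade_aa: float, flat_cut: float = 0.0) -> float:
    """Evade chance scaled by evade ability accuracy, then reduced by any flat cut."""
    return max(0.0, charge_chance * evade_aa - flat_cut)

slowed_aa = 4.50  # 550% evade AA vs unblockable specials, minus slow's -100%

# AA reduction alone leaves plenty of evade chance at 3 and 4 charges:
print(f"{effective_evade(0.21, slowed_aa):.1%}")       # 94.5%
print(f"{effective_evade(0.28, slowed_aa):.1%}")       # 126.0%
# Adding a hidden flat -100% to evade chance matches the observed behavior:
print(f"{effective_evade(0.21, slowed_aa, 1.0):.1%}")  # 0.0%  -> never evades at 3 charges
print(f"{effective_evade(0.28, slowed_aa, 1.0):.1%}")  # 26.0% -> can still evade at 4 charges
```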
If it doesn't happen that way, I would say the in-game descriptions should be changed to make that clear.
Would that mean in the 150% chance to shrug scenario, 1 spore would result in the chance to shrug dipping to 135%?
How would this make sense in the current explanation? If it is not a bug, that is.
Also, would Quake applying 100% AAR to the base of Longshot result in him having 0% left?
In the Longshot case, I believe the answer is yes, it should (again: part of this discussion is to find out if the theory matches reality, I’m not saying I know with certainty that’s how it works in the game now).