**KNOWN AW ISSUE**
Please be aware, there is a known issue with Saga badging when observing the AW map.
The team has found the source of the issue and will ship a fix in our next build.
We apologize for the inconvenience.
1. AI lightning reaction
2. Can the AI stop holding block and their specials for 30 seconds or more? Especially in AQ, AW, and Battlegrounds, where time is limited.
Thank you !
Change is good. Except in MCOC (based on history).
Yes, it is a random dice roll to what action the attacker takes, but it is not an equal chance for every champion in every situation. It wouldn't make sense to create a defender that heavily relies on X to effectively defend, then not encourage them to use it.
The important distinction to make is that the AI is not adapting to you, the attacker, but rather to their own kit and profile.
The same way "fully predictable" would be bad, "absolute random" would also be bad.
The corrected code would take in the number of actions available in the current state (say k), then use np.random.rand(k) to produce a vector of weights. You would then normalize to make sure the weights sum to 1, and THOSE would be the weights where the i’th position corresponds to the i’th action’s chance of occurring. The die is then rolled according to those weights. Furthermore, this would need to occur at every instance of the dice roll because, if not, you would just end up rolling the same die over and over and over again, which is not random weight generation.
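A minimal sketch of that corrected procedure in Python/NumPy (the function name and action count are illustrative, not the game's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def roll_action(k):
    # draw a FRESH weight vector for the k available actions on every roll
    w = rng.random(k)
    w /= w.sum()              # normalize so the weights sum to 1
    return rng.choice(k, p=w)

# because the weights are re-drawn on every single roll, no action
# ends up favored in the long run:
counts = np.bincount([roll_action(6) for _ in range(60_000)], minlength=6)
print(counts)  # each count hovers near 60000/6 = 10000
```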
“people are confusing ‘changing the AI’ with champs having fight style preferences.”
…what? The AI IS the weight of its actions. That’s all it is. It’s just a summation of weights over actions. Of course changing the weights means changing the AI! Do you think if I set the probability of block to 1 I would have the “same” AI? What does changing the AI mean to you, if not changing its weighted actions?
To introduce the kind of biases you’re talking about while actually being consistent with what was written in this post, you would need to adjust the parameters of the distribution that the weights were drawn from. For instance, you COULD let the upper bound of each weight parameter be tied to a distinct set and, given they were drawn uniformly, you would end up with larger weights on average for distributions whose upper bounds were larger, even after normalization. This could lead to “AI profiles” of sorts, but the problem is that NOWHERE in this post do they allude to, imply, or confirm any of that. They implied in blanket terms that these weights are strictly random processes, in the same way that if I handed you a die and said it’s “all random,” you would probably be expecting a fair die, not one with a 75:25 split or 80:20 or some other nonsense. It’s disingenuous to lump deliberate changes to distributions under the term “random,” and that much is plainly obvious unless you want to take the “Well TEEEEEECCHNICALLY they didn’t lie” stance (while simultaneously taking up the notion of transparency, no less!). Furthermore, these intentional changes to the structure of each distribution are BY DEFINITION AI MANIPULATION. They have come out saying they do not do this. Unless you can provide a definition of AI manipulation that doesn’t involve intentionally manipulating the weights of their behaviors, this is a contradiction.
I am genuinely curious as to what definition you could supply that somehow does not include that. What else is there to manipulate??
RANDOMLY weighted. Not deterministically weighted based on whichever champion we have at hand. I’ll ease this into an analogy: I hand you a die and tell you that the die is weighted randomly. In reality the one comes up 80% of the time, i.e. P(D=1) = 0.8, and you can pick the other five weights as you please. Now, the outcome of the ROLL is a random process, but those weights ARE FULLY DETERMINISTIC. The die should not have the same weights two times in a row if the weights themselves are random variables. I’ll take it on good faith that this was a typo, but these two things cannot be true at the same time.
Here are three examples of what happens depending on how you do this.
1) A randomly generated weight vector for the die. Note that since I only generated the weights ONCE, the effect is that you have a biased die.
2) A uniform die where the weights are completely even. The weights are not random, and the resulting picture is UNIFORM.
3) A die where at EACH ITERATION the weights are RANDOMLY generated. The result is STILL uniform, because the randomness disperses approximately uniformly over a large number of trials. The only way this picture becomes biased is if you TAMPER with the sampling distribution of the i’th component of the weight vector, and you did not imply that was the case in your original post. Random generation of weights does not naturally produce bias. That means that each time you release a champ you are intentionally manipulating the weights. That is LITERALLY AI manipulation. Saying they aren’t “adapting” isn’t even where we are in the conversation yet; that is something we can discuss after this clarification, but we have to understand the basic asymptotic reality first.
Code (in R):
#Case 1: weights generated ONCE, then reused for every roll -> a biased die
loaded_dice_probs <- runif(6, min = 0, max = 100)
normalized_loaded <- loaded_dice_probs / sum(loaded_dice_probs)
normalized_loaded
x <- sample(1:6, size = 100000, replace = TRUE, prob = normalized_loaded)
table(x)
barplot(table(x), main = "One Random Generation")

#Case 2: a fair die with fixed, even weights -> uniform, but the weights are not random
y <- sample(1:6, size = 100000, replace = TRUE)
barplot(table(y), main = "Non-random Uniform Die")

#Case 3: fresh weights drawn for EVERY single roll -> still uniform in aggregate
n <- 10000
results <- rep(0, 6)
for (i in 1:n) {
  random_weights_probs <- runif(6)
  normalized_rand <- random_weights_probs / sum(random_weights_probs)
  roll <- sample(1:6, size = 1, prob = normalized_rand)
  results[roll] <- results[roll] + 1
}
barplot(results, names.arg = 1:6, main = "Randomly Generated Weights")
Most likely.
Example: 100 basis points for the RNG.
Stipulate that Action-1 occupies spots 1-10.
Action-2 is 11-35 (heavier weighted action, based on champion)
Action-3 is 36-40 (a rarer occurrence action)
Etc, etc.
Now, the RNG chooses an EXACTLY RANDOM number between 1 and 100: a 100-sided die with an exactly equal chance of generating any particular number between 1 and 100. Then a lookup is done to determine which action corresponds to that particular RNG roll.
Note, this is basically how Feature Crystals work (higher chance at the Feature champ), or even those months when we have RIFT-based Side Quests (the odds of rolling the path with the higher-level, more desired item on it are lower than the others).
But, in both those actual cases, it is still RANDOM RNG.
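That slot-lookup scheme can be sketched as follows (the action names and slot counts are hypothetical, taken from the example above):

```python
import random

# hypothetical per-champion table: (action, number of slots out of 100)
ACTION_SLOTS = [
    ("action_1", 10),  # spots 1-10
    ("action_2", 25),  # spots 11-35 (heavier weighted action)
    ("action_3", 5),   # spots 36-40 (rarer occurrence)
    ("other",    60),  # spots 41-100
]

def pick_action(rng=random):
    roll = rng.randint(1, 100)  # fair 100-sided die: every number equally likely
    upper = 0
    for action, slots in ACTION_SLOTS:
        upper += slots
        if roll <= upper:
            return action
```

The die itself stays perfectly fair; the bias lives entirely in how many slots each action occupies in the table.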
and stated there are other ways
secondly you do not want the champs to have a random weighting.
that would not give them a bias at all.
you want the weighting to be set per champ.
so champ xxx is 50% more likely to throw a heavy than normal, i.e. 50% more of the options result in a heavy.
they do not allude to ai profiles because firstly there is no real true ai profile.
there are champion preferences but they all call on the same AI, they just have weightings to interpret it slightly differently.
secondly it is plainly obvious that different champs have weighting toward different fight styles as it suits their kits.
this is the basis of creating a somewhat decent AI.
a fully random AI where every champ is exact same and everything is fully random is not a good AI at all.
coding is set up like this.
you have callbacks.
so you have
AI
you have
CHAMPION
each champion does not have its own ai. each champion uses the same ai. the ai is not changed for anyone. the champion code calls on the ai, the ai does its thing and the champion interprets it.
so to make a champ have a heavy bias you do not touch the AI.
you do not change the AI.
you change the champion code.
the AI code remains exactly the same
you call it AI MANIPULATION, i call it AI INTERPRETATION.
but as i said of course the champion interprets the AI to formulate a playstyle that suits its kit.
otherwise the AI would suck
you are confusing a champ sending different requests and data to the AI and then interpreting the results differently than another champ as "changing the AI"
this is not changing the AI.
the AI code and formulas stay unchanged for all.
it is the data sent and the way the results are interpreted that vary.
different interpretation does not mean changing.
they both use the exact same AI just using different parameters.
- You can’t punish their heavies, even when their heavy animation has fully completed and yours would land.
- They will throw a special, the moment you dash at them
- They will light intercept like very few people can
- They will wait till power stings on them expire (if no taunt)
- Even taunted, they may just ignore it if it suits them
-…
They are also using different profiles for champs in different content because in BGs they don’t act the same as they do in Arena.
“Our AI works on a years old system of *randomly weighted* actions”.
AI interpretation. Cute. You made up a term to make it sound nicer. If that’s the hill you choose to die on, then sure, I’ll call this “interpretation.”
What you are describing is called classes and inheritance. If you change the weights, you’ve changed the code and fundamentally altered how the champion works. That’s why it’s A NEW CLASS. It may call the same functions to “block” or “dash,” but what makes an AI different is *when* it chooses to call certain functions. Of course you don’t rewrite the literal “how to block” code. It’s when the AI will do so, how frequently, and under which circumstances that matters. That’s what you manipulate. Its behavior is fundamentally different from the base class. Do you actually think that because their base is the same, everything you alter past that is identical? Furthermore, the code in each individual champion’s kit IS part of the AI code: it is that individual AI’s behavior.
You are fundamentally confused. You have a base AI and you have branches from that base for each champion, each of which represents a unique AI (otherwise they would all play the same). This is the first time, as far as I know, that Kabam has ever openly stated they manipulate behavior at the individual champion level.
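The class-based picture being argued over can be sketched like this (the class names, actions, and weights are made up for illustration; this is not the game's code):

```python
import random

class BaseAI:
    # shared action set; subclasses inherit the choosing logic unchanged
    WEIGHTS = {"block": 1, "dash": 1, "heavy": 1, "special": 1}

    def choose_action(self, rng=random):
        actions = list(self.WEIGHTS)
        weights = [self.WEIGHTS[a] for a in actions]
        return rng.choices(actions, weights=weights)[0]

class HeavyBiasedAI(BaseAI):
    # only the weights are overridden -- the "how to block" code is untouched,
    # yet WHEN each action fires is now observably different
    WEIGHTS = {"block": 1, "dash": 1, "heavy": 4, "special": 1}
```

Whether overriding those weights counts as “changing the AI” or merely “interpreting” it is exactly the disagreement in this thread.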
You are correct that a fully random AI is bad. We know that the AI is not fully random. My point this whole time was that when Jax wrote that the AI was just a random set of weights and dice rolls, the post was intentionally misleading and clearly could not map onto reality, as evidenced by the rapid progression in the optimal behavior of the AI. He then came out with a supplement to walk back what the word “random” was supposed to imply. Randomness does not optimize. It is random. NOW, after prodding, they have come out and started to actually open up, starting with expressly stating that they are selecting for certain behaviors based on what champ is defending.
a champ with a heavy as a big part of its kit, like doom, will have a heavy bias,
some champs will have a more defensive bias, some will have a bias toward an sp1 or an sp2 instead of the other, it's just common sense to expect and understand this.
i don't think kabam has ever needed to explicitly say this. it just makes sense.
as far as kabam saying they never change the AI,
well yeah they don't. if they write code for a new champ and give that champ a particular bias, nothing is changing the AI; the AI remains the same for everyone.
the only thing that changes is a new set of parameters added for a new champ to interpret the AI in a particular way.
random does not mean fully random, random can mean random within a set of parameters.
random with a particular bias.
fully random = 1-100 with an even chance
random within a range = 30-100
random with a weighting = 1-100 with a higher chance to be closer to 100
a combination of both
random with a multiplier = 1-100 then x 1.3
all those are random but will result in different behavior.
call it "manipulation" all you want but it's just standard. how else is a champ going to have a preference for any action unless they "manipulate" the AI?
if the champ does not "manipulate" the AI then all will be the same.
also they have stated before that some champs have certain biases like doom with heavies and a few others.....
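The flavors of "random within parameters" listed above can be sketched as follows (illustrative functions only; the upward-skew trick is one possible way to weight a draw, not a claim about the game's implementation):

```python
import random

def fully_random(rng=random):
    # 1-100 with an even chance
    return rng.randint(1, 100)

def random_within_range(rng=random):
    # constrained to 30-100, still random
    return rng.randint(30, 100)

def random_with_weighting(rng=random):
    # 1-100 with a higher chance to land closer to 100
    # (taking the max of two fair draws skews the result upward)
    return max(rng.randint(1, 100), rng.randint(1, 100))

def random_with_multiplier(rng=random):
    # 1-100, then x 1.3
    return rng.randint(1, 100) * 1.3
```

All four are random processes, yet their long-run behavior differs, which is the whole point of the list above.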
AI reparrying the second hit of a special after blocking the first.
The reason it appears to chain specials is maybe because it realizes you’re not holding block after the fourth hit, looks at the “no block” options, and fires one off without hesitation.
AI doing full MLLLM combos tho is a bit odd.
It defies logic to refer to an RNG engine as AI. There is no such thing as AI that doesn’t assign different values to actions when determining which to perform. The literal purpose of an AI is to try to calculate the best action for a desired result.
None of this AI messaging says anything AT ALL about recovery time issues, or the “AI” exploiting mechanical issues. They claim the “AI” hasn’t been changed in years. Example: Mordo does not have a new kit. There’s no change to a fight between him and another unchanged champ. Yet Mordo won’t hold his block during power gain when you throw a heavy while he’s not against the wall… why would he only back up? If it’s a random action, then why doesn’t he punish your heavy by attacking? He can still perform a light, medium, heavy, special, block, or evade, correct?
Hows your parry working?
Your champs still walk forward for no reason?
I personally love standing there after a parry with my hands at my side when I’m trying to heavy 😂
Or my new favorite… getting reparried on my first light attack when I’m supposed to hit the block. Peni, or using Hulkling after an Intimidate, or the wonderful game nodes that require you to hit the block.
Hey Kabam why doesn’t the “AI” have the issues? I’d love to get parried and still be able to punish them after I’m stunned
To the point about injecting randomness by adjusting parameters, I have already said the following:
“To introduce the kind of biases you’re talking about while actually being consistent with what was written in this post, you would need to adjust the parameters of the distribution that the weights were drawn from. For instance, you COULD let the upper bound of each weight parameter be tied to a distinct set and, given they were drawn uniformly, you would end up with larger weights on average for distributions whose upper bounds were larger, even after normalization.”
I’m feeling nice, so I’ll even attach the code which accomplishes this at the end.
Now, my issue with this post, AGAIN, was the following:
“My point this whole time was that when the MCOC TEAM wrote that the AI was just a random set of weights and dice rolls, the post was intentionally misleading and clearly could not map onto reality, as evidenced by the rapid progression in the optimal behavior of the AI. They then came out with a supplement to walk back what the word “random” was supposed to imply. Randomness does not optimize. It is random. NOW, after prodding, they have come out and started to actually open up, starting with expressly stating that they are selecting for certain behaviors based on what champ is defending.”
Which is OBVIOUSLY a distinctly different type of randomness than the original post implied. I built upon that point when I then said:
“It’s disingenuous to lump deliberate changes to distributions under the term “random,” and that much is plainly obvious unless you want to take the “Well TEEEEEECCHNICALLY they didn’t lie” stance (while simultaneously taking up the notion of transparency, no less!)”
Finally, the distinction you’re making between the base AI and the subclasses is still ridiculous. Once again, you aren’t rewriting the functions that say “call this when you want to block.” You’re rewriting the frequency and conditions under which the AI tends to call them. By changing its tendencies you are CHANGING the AI. What you seem to think constitutes changing would mean every individual branch has its own unique function dictating HOW it blocks, not WHEN it blocks. Changing the weights is EVERYTHING when we are talking about AI manipulation. Everything. I can’t even believe you’re trying to claim that manipulating the weights is not fundamentally changing the AI.
With your logic I could literally set the distributions of every ability except block to sample from [0,0], and the resulting AI would only ever stand still holding block. You would look at this and say, “well, it’s actually the same AI because it’s using the same base blocking function, so this AI is not distinct.” Ridiculous.
Code for biased random sampling:
results_2 <- rep(0, 6)
for (i in 1:n) {
  # each action's weight is drawn from its own biased range
  dash_forward  <- runif(1, 0, 50)
  block         <- runif(1, 0, 50)
  heavy         <- runif(1, 0, 100)
  dash_back     <- runif(1, 0, 25)
  throw_special <- runif(1, 0, 60)
  idle          <- runif(1, 0, 10)
  ai_weights <- c(dash_forward, block, heavy, dash_back, throw_special, idle)
  normalized_ai <- ai_weights / sum(ai_weights)
  roll <- sample(1:6, size = 1, prob = normalized_ai)
  results_2[roll] <- results_2[roll] + 1
}
ai_names <- c("Dash F", "Block", "Heavy", "Dash Back", "Throw Special", "Idle")
barplot(results_2, names.arg = ai_names,
        main = "Case 4: Randomly Generated Weights From Biased Distributions",
        cex.names = .6)