Rating System Metrics
SkyLord7000
Member Posts: 4,025
Many dislike rating systems. One reason is the extent of subjectivity in the contest. Every champion has their quirks, and a tier list lessens the value of all champs through the blanket of a set metric.
Take Kabam's example of Magik's added metric, which describes her as "easy to use". What does this metric even mean?! Does it mean she has less text in her ability description? Does it mean her "peak" playstyle is easy to use? If so, how would something like that even be measured through in-game statistics, as Kabam Miike says it is?
My question to the forum community is the following: what metrics, if any, would you use to best rate a champion?
Comments
Damage: 6/5
Utility: 6/5
Everything Else: 6/5
Only correct answer
Damage, Utility, Survivability, Synergy (in/out/both), Complexity, and Class considerations, just off the top of my head, without getting into what counts as what and when.
Buffs, Debuffs, Passives: their presence, absence, potency, and number should all be relevant.
Usefulness without awakening
Same with awakened (like how much value does sig+high sig add)
Then:
Damage (split into immediate/ramp-up)
Health (just percentile)
Utility (split into immunity, AA manipulation, unique aspects (like parry projectiles))
Regen capabilities
DoT (inflicting it; shrug-off belongs in the Utility section)
Playstyle (rotational or versatile or...)
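To make that concrete, here's a rough sketch (in Python) of what a rating record along those lines might look like. Every field name and score below is made up by me for illustration, nothing official:

```python
from dataclasses import dataclass

# Hypothetical record mirroring the categories listed above. Field names and
# the 1-5 values are invented for illustration; this is not Kabam's model.
@dataclass
class ChampRating:
    damage_immediate: int         # damage available from the start of a fight
    damage_rampup: int            # damage once the champ is fully ramped up
    health_percentile: int        # base health relative to the rest of the roster
    utility_immunities: int
    utility_aa_manipulation: int  # ability accuracy manipulation
    utility_unique: int           # one-off tools, e.g. parrying projectiles
    regen: int
    dot_inflicted: int            # damage over time the champ can apply
    playstyle: str                # "rotational", "versatile", ...
    unawakened_value: int
    sig_value: int                # how much the awakened ability / high sig adds

# Example: a made-up champion scored 1-5 in each category.
example = ChampRating(
    damage_immediate=3, damage_rampup=5, health_percentile=2,
    utility_immunities=4, utility_aa_manipulation=1, utility_unique=3,
    regen=2, dot_inflicted=4, playstyle="rotational",
    unawakened_value=3, sig_value=5,
)
print(example)
```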
But it is still super inaccurate. Karatemike had a good system imo
Basically because:
1. The target ratings are going to be subjective design metrics: a condensed numeric score that doesn't need to reflect the champion's real status in game, only what the team wants a champion to be in each category. *The team* being the operative words: it has nothing to do with what a random player or the community wants or finds useful.
2. The actual rating will be loosely calculated from data. I say loosely because some ratings will simply be too difficult to measure from fight data without making a large number of approximations and leaving a lot out. Or the data will have large variability driven by hidden root causes (sig level? synergies? playstyle? node combination?) that are never explored or addressed, and likely won't be captured at all. For example, when data comparisons were shown to address Guillotine's rework healing vs. others, only averages were used.
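To show why averages alone don't cut it, here's a tiny made-up example: two champs with the same average healing per fight, but one of them depends entirely on a hidden factor the average never reveals:

```python
from statistics import mean

# Invented healing-per-fight numbers (as % of max health), purely to show how
# an average can hide a hidden root cause such as sig level or node combination.
champ_a = [12, 13, 11, 12, 13, 12]   # consistent no matter the setup
champ_b = [2, 3, 2, 22, 23, 21]      # collapses without the right setup

print(round(mean(champ_a), 1), round(mean(champ_b), 1))  # 12.2 vs 12.2 -- "identical"
print(min(champ_b), "-", max(champ_b))                   # 2 - 23 -- very different in practice
```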
They will also need a scaling model to convert the various statistical measures into the actual 1-5 categorical rating, by the way, which might be a problem on its own. If a rating is compounded from several different measures, they will also need to (manually?) assign weights.
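Roughly this kind of conversion would be needed; the measure names, weights, and maxima below are all invented on my part, just to show the shape of the problem:

```python
# Sketch of the kind of scale model described above: several raw measures,
# manually assigned weights, and a conversion into a 1-5 category.
# Measure names, weights and maxima are all invented for illustration.

def to_category(measures: dict, weights: dict, scale_max: dict) -> int:
    """Combine raw fight-data measures into a single 1-5 rating."""
    # Normalize each measure to 0-1 against an assumed maximum, then weight it.
    score = sum(
        weights[name] * min(measures[name] / scale_max[name], 1.0)
        for name in measures
    )
    normalized = score / sum(weights[name] for name in measures)  # back to 0-1
    return max(1, min(5, 1 + round(normalized * 4)))              # bin into 1..5

measures  = {"avg_hit": 4200, "peak_dps": 180000, "crit_rate": 0.4}
weights   = {"avg_hit": 0.3,  "peak_dps": 0.5,    "crit_rate": 0.2}
scale_max = {"avg_hit": 6000, "peak_dps": 250000, "crit_rate": 1.0}

print(to_category(measures, weights, scale_max))  # -> 4
```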
For the other ratings named in the post, I can't see how they can even be measured, so I guess they will rely on some kind of heuristic? Maybe an internal vote?
In summary, they want to build a somewhat unreliable data-driven score set that will then be compared with the team's totally subjective prior numbers, and then use the relative differences in each category to explain changes in newly released champions?
It seems an overly complicated and unnecessary way of covering your back when rebalancing champions, to be honest.