I agree. I find it’s a lot easier to get older champs from the basic 5* than the newer ones that were just added.
I wish someone could audit whether their randomness claims really hold up.
The issue with that is probably the legalities of the patents attached to the code they've written.
But I will say this: I am curious to see how they add the new champions to the RNG, because I have a decent understanding of some of the ways RNGs get coded, so it would be interesting to see how they write the new champs in. I won't pretend to know the exact type of coding Kabam uses, though; I never really cared that much because it is just a game I play for fun anyway. However, I do know it is difficult (for me at least) to keep an RNG unbiased after adding new champs; I would imagine I personally would have to rewrite the entire thing.
I fail to see how coding an unbiased RNG would be complicated. The only way it would be complicated is if there is bias involved. Whether the bias is based on desirability, age of champ, or whether the user already has the champ is anyone's guess.
It is tricky to write an unbiased RNG. So no one does: they use the RNG that their C libraries give them.
What's almost impossible is writing a deliberately biased RNG that can't be exploited against you. Actually random crystals are impossible to manipulate or exploit. But any non-random predictability in the crystals is something players could one day discover and exploit. Randomness cannot be cheated.
Back when I used to work with load balancers, there were generally a variety of ways you could ask a load balancer to balance load. You could round-robin the load. You could attempt to measure the load on each device and direct new load to the least busy device. You could send load to the fastest-responding device. But the mode that often made people scratch their heads was sometimes called stochastic mode, or more commonly "random" mode. The load balancer would just randomly send load wherever. This seemed completely nonsensical: why randomly select where your load was going to go, ignoring the environment entirely?
Because random load balancers are immune to resonance. However you choose to try to balance your load, there is always some set of circumstances that, if it arises, will trick the load balancer into doing the worst possible thing, further unbalancing the load and further degrading performance. The right set of circumstances can make the whole thing shake itself apart, no matter how intelligently you try to balance load.
Stochastic (random) "balancers" are completely immune to this problem. There is no way to engineer load accidentally or on purpose that will break a random balancer. You can't "cheat" them because you can never predict what they will do. In the long run, they will still balance roughly evenly. But you still can't predict how they will do that. And any attempt to resonate them will fall apart very quickly due to randomness.
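To make the contrast concrete, here's a minimal sketch in C (backend count and everything else invented, not taken from any real balancer): the round-robin picker is completely deterministic, so the right traffic pattern can be timed against it, while the random picker gives you nothing to lock onto.

```c
/* Toy comparison of a deterministic versus stochastic backend picker.
 * Everything here is invented for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_BACKENDS 4

int pick_round_robin(void) {
    static int next = 0;            /* fully predictable: requests can be timed to resonate with it */
    return next++ % NUM_BACKENDS;
}

int pick_random(void) {
    return rand() % NUM_BACKENDS;   /* unpredictable: nothing to resonate against */
}

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < 8; i++)
        printf("request %d -> round-robin: backend %d, random: backend %d\n",
               i, pick_round_robin(), pick_random());
    return 0;
}
```

Over enough requests both spread load roughly evenly; only one of them can be gamed.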
@BitterSteel
Yeah, I put it down to a visual glitch... a very cruel and frustrating visual glitch lol (nothing else I could put it down to; I had the 5* Cyclops and that was that). That's insightful. I've often seen posts where players claimed the champ is predetermined but never gave the info on how they came to that conclusion, so thanks for that, mate. Makes sense.
Lol it is funny though how the reel slows on a good champ or even a 5* champ when spinning FGMCs then does the famous rollovers to the next garbage champ.
It is tricky to write an unbiased RNG. So no one does: they use the RNG that their C libraries give them.
I understand the point you're trying to make, i.e. that even "random" subroutines are not entirely random, but I would suspect that for the purposes of doing crystal pulls, they are random enough.
I also don't think anyone has access to enough observable results to determine an exploit, e.g. opening at a particular time because the RNG is seeded on the time of day or something like that.
And we are not talking about manufacturing a DoS attack on servers that are supposed to handle millions of requests.
So again, I still fail to see how doing an unbiased RNG would be that complicated.
It has to do with the nature of how a computing system functions, based on instructions. In a sense computers are actually pretty stupid and you have to tell them exactly what to do. I know this from my work writing code to analyse neural networks, things like the probabilities of neuron action potentials firing and neurotransmitter release, as well as analyses of brain imaging. I've mostly worked with simple Java, MATLAB, and R.
LOL it is really hard to go too deep while in a simple fun discussion like this, especially not knowing exactly the type of RNG systems Kabam uses.
But yeah, DNA brought up good points, one in particular being that it goes both ways: coding a specific bias or a truly unbiased RNG is difficult.
My issue, which DNA also addresses, is having a system in place and then further adding to that system (i.e. new champs) while maintaining the same degree of unbiasedness.
@vinniegainz @DNA3000 I understand that you are talking about pseudo versus true random number generation. My point is that true random number generation usually comes from truly random events like atmospheric noise and radioactive decay.
Given the issues that are prevalent in the game, would you think investing in true versus pseudo RNG would really make a difference? I doubt we would see any observable difference from that investment. Everyone likes the concept of best practice, but in reality, constraints on budget and time can make it impractical. Obviously there are different tolerances for what is considered "good enough" or acceptable. Levels of randomness may differ depending upon the application, i.e. bank encryption versus crystal drops. I would accept pseudo RNG in one and not the other.
I still fail to see how it is complicated unless the system is biased in some way shape or form.
Good post. My 5 star roster is Rhino, Venom, Magneto (duped), Cyclops (blue team), Antman (duped), Iron Fist, Red Hulk (duped), and my one saving grace, Archangel on the 10th crystal.
It is tricky to write an unbiased RNG. So no one does: they use the RNG that their C libraries give them.
I understand the point you're trying to make, i.e. that even "random" subroutines are not entirely random, but I would suspect that for the purposes of doing crystal pulls, they are random enough.
I also don't think anyone has access to enough observable results to determine an exploit, e.g. opening at a particular time because the RNG is seeded on the time of day or something like that.
And we are not talking about manufacturing a DoS attack on servers that are supposed to handle millions of requests.
So again, I still fail to see how doing an unbiased RNG would be that complicated.
No, that's not what I meant. What I meant was that really good pseudo-random number generator functions are much harder to create from scratch than many people think, and when people try to write them themselves, thinking it is easy, they often contain horrible errors so bad they become easily noticeable or exploitable. But it is very easy to write a really good random selection algorithm: you start with a known PRNG that has been studied well and is known to have the properties you want, and then you wrap that in code that uses the PRNG to do the selection. That's relatively easy.
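As a rough illustration only (plain C, invented champ names and drop weights, and no claim this resembles Kabam's actual code), the selection wrapper around a stock PRNG looks something like this; note that adding a new champ to this sketch is just adding a row to the table:

```c
/* Sketch of a weighted selection wrapper around a library PRNG.
 * Champ names and weights are made up for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct drop { const char *champ; double weight; };

static const struct drop table[] = {
    { "OldChampA", 0.30 },
    { "OldChampB", 0.30 },
    { "OldChampC", 0.30 },
    { "NewChamp",  0.10 },   /* a newly added champ is just another row */
};
static const int table_len = sizeof table / sizeof table[0];

const char *pull_crystal(void) {
    double total = 0.0;
    for (int i = 0; i < table_len; i++)
        total += table[i].weight;

    /* uniform value in [0, total) taken from the library PRNG */
    double r = total * ((double)rand() / ((double)RAND_MAX + 1.0));

    for (int i = 0; i < table_len; i++) {
        if (r < table[i].weight)
            return table[i].champ;
        r -= table[i].weight;
    }
    return table[table_len - 1].champ;   /* guard against floating-point rounding */
}

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; i++)
        printf("pull %d: %s\n", i + 1, pull_crystal());
    return 0;
}
```

All the hard randomness work lives in rand() (or whatever better-studied PRNG you swap in); the wrapper only has to spend the numbers correctly.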
When I've tested random number generators and random selectors, including in games, I've generally used some variation of the Diehard tests. If this is something you're actually interested in, Google search "Diehard randomness tests."
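For a flavour of the very first, most basic check (this is not the Diehard battery, just a chi-square frequency test in C over an assumed ten equally likely outcomes):

```c
/* First-pass sanity check: are the outcomes roughly uniform?
 * Bucket and sample counts are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUCKETS 10
#define SAMPLES 100000L

int main(void) {
    long counts[BUCKETS] = {0};
    srand((unsigned)time(NULL));

    for (long i = 0; i < SAMPLES; i++)
        counts[rand() % BUCKETS]++;

    double expected = (double)SAMPLES / BUCKETS;
    double chi2 = 0.0;
    for (int b = 0; b < BUCKETS; b++) {
        double d = counts[b] - expected;
        chi2 += d * d / expected;
    }
    /* With 9 degrees of freedom, a chi-square much above ~21.7 (p < 0.01) is suspicious. */
    printf("chi-square = %.2f over %d buckets\n", chi2, BUCKETS);
    return 0;
}
```

Passing this proves very little; failing it badly tells you something is wrong with either the generator or, more likely, how it is being used.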
@vinniegainz @DNA3000 I understand that you are talking about pseudo versus true random number generation. My point is that true random number generation usually comes from truly random events like atmospheric noise and radioactive decay.
Given the issues that are prevalent in the game, would you think investing in true versus pseudo RNG would really make a difference?
Not really. Also, most pseudo-random number generators are seeded with "true" randomness. The question of "how random" a generator is when it contains a pseudo-random part and a "true" random part is a very technical question that is usually discussed in terms of information entropy. PRNGs are unpredictable but contain very little true entropy: their high entropy is a mathematical illusion (but still a useful one). Seeds can inject high entropy into the system, making the pseudo-random output "more random." In computer servers, this is often done using network events. For example, if you record the exact time you receive network packets down to the microsecond, the timing of those events is in general not random, but if you only look at the last digit of the relative timestamp, that digit itself is essentially random: there's no way to predict what it will be, and it is influenced by chaotic real-world events that are for all intents and purposes random (small things like temperature differences in the switches, for example).
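A toy version of that idea in C, using the local high-resolution clock instead of packet arrivals (same principle: keep only the low bit that jitters unpredictably). A real server would just read the OS entropy pool, e.g. /dev/urandom, rather than roll its own:

```c
/* Toy seed harvester: pack the jittery low bits of 32 clock readings into a seed.
 * Illustration only; real systems mix many independent event timings. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

unsigned harvest_timing_seed(void) {
    unsigned seed = 0;
    for (int i = 0; i < 32; i++) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);      /* POSIX high-resolution clock */
        seed = (seed << 1) | (unsigned)(ts.tv_nsec & 1);
    }
    return seed;
}

int main(void) {
    srand(harvest_timing_seed());                 /* inject the harvested entropy into the PRNG */
    printf("first draw after seeding: %d\n", rand());
    return 0;
}
```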
If you are interested in this sort of thing, Google search information and discussion about /dev/random. But fair warning: this is a very deep rabbit hole that even professionals argue about on the margins. I've had debates about entropy sources that came close to violence. The big point worth remembering, however, is that there is a difference between *useful* randomness and theoretical randomness. Useful randomness is often called computational randomness, and it's what we use in the real world. Theoretical randomness is a mathematically complex idea that no one actually uses in the real world directly, because the usefulness of mathematical theoretical randomness is very limited.
LOL, loved the "I've had debates (...)" part. I can relate to that; for me it usually happens after hours upon hours stuck in the same room with one of my colleagues, trying to figure out what the frigg is going wrong with our code.
Remember how people used to believe in the 'bottom right corner, then count to 5 or whatever' trick when spinning a PHC? Essentially they were trying to exploit a PRNG, in a way.
They would swear the idea above guarantees at least a 3*, and it made sense to me that kabam used the typical multi-level system where you roll and hit rare on the 1st level for a chance to roll again for a 3* or 4*, and hitting rare again on the 2nd level gets you the 4*. (Maybe the 5* basic crystal has a multi-level system like that for junk champions versus good champions.)
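Purely to make that speculation concrete (every percentage and the structure itself are made up; I obviously don't know kabam's real numbers), a two-level roll might look something like this in C:

```c
/* Speculative two-level roll: hitting "rare" on level 1 earns a second roll for the higher tier.
 * All probabilities invented. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double roll(void) {                /* uniform in [0, 1) */
    return (double)rand() / ((double)RAND_MAX + 1.0);
}

const char *open_phc(void) {
    if (roll() < 0.10) {                  /* level 1: 10% chance to reach the rare pool */
        if (roll() < 0.20)                /* level 2: 20% of those upgrade again */
            return "4-star";
        return "3-star";
    }
    return "2-star";
}

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < 10; i++)
        printf("%s\n", open_phc());
    return 0;
}
```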
This is why my tin foil theory is that kabam coded a junky, simple RNG when the game first started out but I am guessing they use wayyyy more powerful coding now and most likely outsource it.
You would be surprised how many companies even some of the big pharma where I live still outsource all the handling of the research data and coding for their analytical programs. There is a ton of money to be made.
Statistical analysis: one of the most powerful, yet most detrimentally misused, constructs.
This is why my tin foil theory is that kabam coded a junky, simple RNG when the game first started out but I am guessing they use wayyyy more powerful coding now and most likely outsource it.
I've actually seen it happen, in fact I caught it in an actual MMO once, but I don't think that is likely here. When it happens in a game like this it is usually an anomaly because the lazy thing to do is also the smartest thing to do: just use the random() that comes with your C compiler. It is still possible to use it improperly, but all the ways to use it badly (aliasing, fenceposting, etc) show statistical anomalies that would be impossible to miss in crystal openings.
I've only seen one oddity in crystal openings that might happen more often than random chance would suggest: consecutive duplicates seem to happen more often than statistically likely. But honestly the work involved to translate that guess into a statistical analysis is more work than I'm willing to put into something that doesn't actually skew the odds against the players one way or the other in general.
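If I ever did bother, step one would be something like this quick C sketch: simulate pulls from an assumed pool of equally likely champs, see how many back-to-back duplicates chance alone produces, and then compare that against logged openings (the pool size here is a guess):

```c
/* Baseline for the consecutive-duplicate hunch: how many back-to-back repeats
 * does pure chance produce? Pool size is assumed. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POOL  100       /* assumed number of champs in the crystal */
#define PULLS 100000L

int main(void) {
    srand((unsigned)time(NULL));
    int prev = -1;
    long dupes = 0;

    for (long i = 0; i < PULLS; i++) {
        int champ = rand() % POOL;
        if (champ == prev)
            dupes++;
        prev = champ;
    }
    printf("consecutive duplicates: %ld (expect about %.0f by chance)\n",
           dupes, (double)(PULLS - 1) / POOL);
    return 0;
}
```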
You would be surprised how many companies even some of the big pharma where I live still outsource all the handling of the research data and coding for their analytical programs. There is a ton of money to be made.
The biggest problem with Big Data is trying to analyze it in-house, badly. The second biggest problem with Big Data is trying to outsource the analysis, since most outsourced consultants are not much better. The technical problem is confusing refactoring with analysis. People show off their refactoring skills, and others think that makes them good analysts. But that's like trying to figure out who the best poker player is by watching people sort a shuffled deck of cards.
@vinniegainz @DNA3000 I understand that you are talking about pseudo versus true random number generation. My point is that true random number generation usually comes from truly random events like atmospheric noise and radioactive decay.
Given the issues that are prevalent in the game, would you think investing in true versus pseudo RNG would really make a difference?
Not really. Also, most pseudo-random number generators are seeded with "true" randomness. The question of "how random" a generator is when it contains a pseudo-random part and a "true" random part is a very technical question that is usually discussed in terms of information entropy. PRNGs are unpredictable but contain very little true entropy: their high entropy is a mathematical illusion (but still a useful one). Seeds can inject high entropy into the system, making the pseudo-random output "more random." In computer servers, this is often done using network events. For example, if you record the exact time you receive network packets down to the microsecond, the timing of those events is in general not random, but if you only look at the last digit of the relative timestamp, that digit itself is essentially random: there's no way to predict what it will be, and it is influenced by chaotic real-world events that are for all intents and purposes random (small things like temperature differences in the switches, for example).
You've kind of proven my point. It's not that hard and it is good enough.
Relative to the complexities of the in-game fighting mechanics (matching animation keyframes to hit detection, dealing with latency from touch input, and so on), generating random-enough crystal drops is not that hard.
Pointing to other, more complex and critical scenarios where true randomness is a serious consideration doesn't convince me that creating an unbiased system here is complicated.
Depends on the "it." You claimed making a good random number generator is easy. It is not. But that's not the same thing as making a good random selection algorithm. That's not hard if you understand the basics, and step one is: don't try to make your own random number generator. Use a well-documented one, and then wrap your selection algorithm around it.
It isn't *complicated* to make an unbiased system, but it is easy to make mistakes. Most of the time those mistakes are subtle and no player or user will likely notice, but sometimes they are noticeable. Random number generator documentation often spells out what these mistakes are to prevent people from making them, but no one reads that documentation.
None of this has anything to do with "true randomness." Most of the errors in random selectors don't happen in the generator. They happen in the algorithm that uses those numbers incorrectly. The quality of the generator itself has no impact on them. For example, a common kind of error they usually teach people to avoid is where you do something like this:
X = random() % 100.
In other words, pick a random number and set X to be the last two digits of the generated number. This generates a random number between zero and 99. But it skews toward lower values whenever the maximum integer random() can generate is not one less than a multiple of one hundred, and in practice it almost never is. Note this problem occurs even if the random() function generates absolutely random numbers.
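For the curious, here is that error next to the standard fix in C (using the C library's rand() to stand in for random(); the fix simply rejects draws from the incomplete final chunk at the top of the range):

```c
/* Modulo bias and the usual rejection-sampling fix. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Biased: if (RAND_MAX + 1) is not a multiple of 100, low values come up
 * slightly more often, no matter how good the generator itself is. */
int biased_0_to_99(void) {
    return rand() % 100;
}

/* Unbiased: throw away draws from the partial chunk, then take the remainder. */
int unbiased_0_to_99(void) {
    unsigned long range = (unsigned long)RAND_MAX + 1;
    unsigned long limit = range - (range % 100);
    unsigned long r;
    do {
        r = (unsigned long)rand();
    } while (r >= limit);
    return (int)(r % 100);
}

int main(void) {
    srand((unsigned)time(NULL));
    printf("biased: %d, unbiased: %d\n", biased_0_to_99(), unbiased_0_to_99());
    return 0;
}
```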
Depends on the "it." You claimed making a good random number generator is easy. It is not.
I never made that claim. I said making a system that randomly selects crystals in an unbiased manner is not that complicated. Put it this way, in 20 years of software dev, if I had a developer come to me and say they couldn't do it, I would have fired them.
I never said to reinvent the wheel and create your own random number generator. I take a pragmatic approach to software development. Write the code that you need to, leverage existing frameworks and libraries where possible. I would suggest that any skews that we are seeing in drop rates are intentional and not because Kabam don't know how to write code to make random selections.
Was kinda hoping for another o.g. trash champ like rhino for more laughs.