Draw probability from 120 T4CC shard crystals
Laserjunge
Member Posts: 33 ★
Hey guys, since it felt like I had only drawn cosmic T4CC shards in December (I've been desperately waiting for a third mystic T4CC for months), I decided to start keeping some statistics. Now, after 120 draws (not a sufficient sample, I know, but a respectable one), I'd like to let you have a look at what, I think, does not look like an equal distribution. The draws were made at different times of day and night over the last three weeks, sometimes 3 crystals at once, sometimes 20. Also, only drops from the classic T4CC shard crystal have been included in the data set (i.e., no glory crystals etc.).
I really do not like this imbalance in the supposedly "equal" drop rates these crystals are said to have. Any comment from Kabam on what the issue could be here? I hope this is not intended. And I hope there is a better way to find a solution here than to close and delete the thread.
Comments
Of course, you are right. I will keep going with this measurement. However, 120 is not 10, and it is about the number of crystals an average non-alliance player can collect over roughly a month.
Note that equal probability for a crystal should also mean that the statistics hold for each individual. Otherwise, I, as an individual player, cannot rely on the percentage values given for e.g. featured offers, since that chance would not apply to me.
Actually, 120 is such a small sample that it isn't much different from 10. With a tiny sample size, you'll pretty much always get what you consider to be "off" results.
That's not what equal probability means. The game does not guarantee equal rates; it only guarantees equal odds. And by definition, a random sample of crystals is unlikely to generate precisely equal drop rates.
I wrote a quick program to perform this same crystal-opening experiment in Python, whose random number generator is good enough for this sort of test (it satisfies basic partitioning odds). These are the first ten runs of that program:
[21, 9, 21, 25, 21, 23]
[17, 18, 24, 17, 19, 25]
[24, 19, 16, 22, 21, 18]
[23, 15, 20, 18, 21, 23]
[24, 19, 16, 18, 17, 26]
[23, 19, 14, 19, 22, 23]
[29, 23, 12, 22, 19, 15]
[23, 16, 18, 21, 21, 21]
[20, 16, 32, 14, 17, 21]
[23, 16, 22, 15, 21, 23]
*Usually* the numbers are in the vicinity of the statistical expectation, which in this case is twenty. But very often they are off by about plus or minus five, which is also expected with this many pulls. Occasionally they land significantly far away, the extremes being 9 in the first run on the low side and 32 in the ninth run on the high side. This is what you expect with genuinely random drop odds. For reference, this is what I believe your results were, based on your reported percentages above: [11, 13, 24, 25, 29, 18]. It actually looks remarkably close to run 7.
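The thread doesn't include the program itself, but a minimal sketch of such a simulation could look like the following (this assumes six equally likely classes and 120 crystals per run; it is one way to reproduce the experiment, not the original script):

import random

CLASSES = 6      # cosmic, tech, mutant, skill, science, mystic
CRYSTALS = 120   # crystals opened per run
RUNS = 10        # repeat the whole experiment ten times

for _ in range(RUNS):
    counts = [0] * CLASSES
    for _ in range(CRYSTALS):
        # each crystal awards one of the six classes with equal odds
        counts[random.randrange(CLASSES)] += 1
    print(counts)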
In fact, one test for *non*-randomness is when the numbers end up too even. An experiment often performed in college statistics classes asks the students to write down a hundred (or some number) of made-up random numbers or coin flips (H, T) on a sheet of paper. These sheets are then mixed together with one sheet of numbers that were actually generated randomly. The genuinely random sheet is almost always very easy to spot, because the human-generated numbers, which match people's expectations of what randomness looks like, are always far too homogeneous compared to genuinely random numbers.
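One standard way to quantify "too even" is a chi-squared statistic: with six classes and an expected count of 20 each, the statistic averages around 5 (the degrees of freedom) for genuinely random data, and a value near zero is the giveaway for human-made numbers. The thread doesn't actually run this test; the sketch below just illustrates the idea:

import random

def chi_squared(counts, expected=20):
    # sum over classes of (observed - expected)^2 / expected
    return sum((c - expected) ** 2 / expected for c in counts)

# what people *think* random looks like: far too homogeneous
too_even = [20, 20, 21, 19, 20, 20]
print(chi_squared(too_even))  # 0.1, suspiciously low

# a genuinely random run of 120 pulls
genuine = [0] * 6
for _ in range(120):
    genuine[random.randrange(6)] += 1
print(genuine, chi_squared(genuine))  # typically somewhere around 5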
Amen
Well, 120 is quite different from 10 in terms of margin of error; it is just that the margin is still plenty big at 120. It has been a while since I've done histogram analysis, but I think the approximate margin of error for this test is about 50%. In other words, you'd expect to see results of 20 plus or minus 10 in most of your runs. With 10 samples the error would be something more like 200%: in other words, one or two, plus or minus three or four. Wildly different.
Histograms are a little more complex, but for single-choice selections (in other words, if we focused on how many pulls were of one chosen class, like Cosmic) the margin of error converges much more slowly than people intuitively think: it is about one over the square root of the number of trials. In other words, the margin of error for any one class in 120 pulls is about eleven. Put another way, any result from 9 to 31 would be considered a bullseye.
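As a rough check on those figures (a minimal sketch; the 1/sqrt(n) rule of thumb above is conservative, and the exact binomial standard deviation comes out somewhat tighter):

import math

n, p = 120, 1 / 6                    # 120 pulls, six equally likely classes
mean = n * p                         # expected pulls of any one class: 20
heuristic = n * (1 / math.sqrt(n))   # the 1/sqrt(n) rule of thumb: ~11 pulls
sd = math.sqrt(n * p * (1 - p))      # binomial standard deviation: ~4.08
print(mean, heuristic, sd)
print(mean - 2 * sd, mean + 2 * sd)  # ~95% of runs land between ~12 and ~28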
@DNA3000 thanks for your experiment. So I was just unlucky to end up with one of the more extreme cases, with values right at the error margin. I understand. I will continue with this measurement and report back when I have a significantly larger sample.
Cheers.