**Mastery Loadouts**
Due to issues related to the release of Mastery Loadouts, the "free swap" period will be extended.
The new end date will be May 1st.
Comments
I've been unlucky with my 7-star roster when it comes to crystals I've earned in game, and quite lucky when it comes to outright buying 7-stars. I don't think Kabam 'rigs crystals' (it's simply unfeasible to decide which champs are 'bad' and then weight the odds in favour of pulling them), and I really don't see an advantage for them in doing this.
The only valid concern people raise is that the odds were sometimes displayed incorrectly, or the crystals were coded incorrectly. I believe this has happened in the past (such as crystals not containing the correct featured champion, and compensation got sent out fairly quickly).
A lot of people just don't understand statistics. I, for instance, went 400 paragon crystals without a 7-star. That doesn't prove the crystals are rigged just because I 'should have' gotten four 7-stars. It simply means I could be part of the really unlucky 1.79% (0.99^400) of people it happened to (not an insignificant number when you consider MCOC's playerbase).
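For anyone who wants to check that arithmetic, here's a quick Python sketch (the 1% 7-star rate is the poster's assumption, not an official figure):

```python
from math import comb

def prob_exactly(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_seven_star = 0.01  # assumed 1% 7-star rate (illustration only)
n_crystals = 400

# Chance of going 0-for-400 at a 1% rate: (0.99)^400
p_none = prob_exactly(0, n_crystals, p_seven_star)
print(f"P(zero 7-stars in 400 crystals): {p_none:.3f}")  # 0.018, i.e. ~1.8%

# Expected number of 7-stars in 400 openings
print(f"Expected 7-stars: {n_crystals * p_seven_star:.0f}")
```

So roughly 1 player in 56 who opens 400 crystals goes completely dry, purely by chance.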
TLDR: There's no 'proof' the crystals are rigged and I don't really see any good reason for Kabam to misrepresent the odds for them either. But equally Jax, why not show us the code that does govern the mechanics? That transparency could dispel a fair few (albeit unfounded) fears.
But punisher not being in the arena crystals kinda just cancels all the positives out so idk
The only problem is I don't know what an appropriate significance level would be for this (the standard is 0.05, which probably isn't a bad starting point), and we would need a decent sample size (maybe 1,000 paragon crystals?). Then if the p-value exceeded 0.05 we'd fail to reject the advertised drop rates (not proof they're correct, but decent evidence consistent with them), and that would probably dispel a lot of the myths and fears people have.
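One way to run that kind of test, sketched with an exact one-sided binomial p-value (the 1% rate and the 4-of-1,000 sample are made-up numbers for illustration):

```python
from math import comb

def binom_pvalue_lower(k, n, p):
    """One-sided p-value: probability of k or fewer successes in
    n trials if the advertised rate p is actually correct."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# Hypothetical sample: 1,000 crystals opened, four 7-stars seen,
# tested against an assumed advertised rate of 1%.
pval = binom_pvalue_lower(4, 1000, 0.01)
print(f"p-value: {pval:.4f}")

if pval < 0.05:
    print("reject: this few pulls is unlikely under the advertised rate")
else:
    print("fail to reject: data consistent with the advertised rate")
```

Note that even four 7-stars in a thousand crystals (against an expected ten) lands below 0.05, which shows how easily a single smallish sample can look "suspicious".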
One of the most common mistakes - so common it is even in the wikipedia page you decided to quote (yes, I checked) - is that a lot of people believe the more samples you collect from a random distribution, the closer the total number of samples that match a certain criteria should get to the expected total. There's an intuitive - but wrong - notion that they should get "closer and closer" to the statistical average values or expected distributions. This is, of course, false. Now, when you said "the accurate claim is that over sufficiently large batches of crystals the sample mean should converge a.s to the true mean" I assumed that was just clumsy wording, because "sample mean" when it comes to crystal openings doesn't exist. There is no such thing as the average crystal drop. My focus was on the notion of "convergence" which has both a colloquial and technical definition, and for distributions that definition doesn't really apply here.
Now, someone who actually knew what they were talking about might have said something like "no, what I meant was that the percentage of expected values approaches the expected percentage." Because while crystal openings don't have an average, they do have such a thing as a percentage of values that meet an expectation. Instead, you said
"I spoke only about the sample mean not some observed raw count of crystals which implicitly means that that proportion is all that matters (hence 1/n). If I were explaining it simply to someone I would even leverage that fact in that observed counts matter less as the sizes grow larger- thats kind of the point in a very measure theoretic naive way (although I have to stress that it really doesn’t capture the depth of the statement). I wouldn’t (and didnt) say that the accurate claim was that the observed count of crystals should converge to the expected count of crystals only, to quote myself: “ The accurate claim is that over sufficiently large batches of crystals the sample mean should converge a.s to the true mean (or in probability if you weaken the statement)”."
Now, you said this is how you would explain this "simply" to someone. I invite anyone to guess what you mean by "observed counts matter less as the sizes grow larger."
In any case, it is true that if you decide to completely refactor this discussion in terms of Bernoulli trials - which no one including you originally did - then you can talk about averages in that context because the crystal openings then reduce to binary values. You can average sequences of binary values. For those that are unaware, the idea behind Bernoulli trials is suppose there's some kind of experiment you can run, like say opening a Paragon crystal. And suppose that experiment has one of two possibilities, say drops a 6* or not. A Bernoulli process, or a set of Bernoulli trials, is essentially the act of conducting that experiment over and over. There are times when we decide to calculate statistics across Bernoulli trials, and we do that by assigning one result the value "1" and the other the value "0". We can then calculate things like averages.
But that's not the average of the sequence. Rather, it is the average of the Bernoulli encodings of that sequence as interpreted as a sequence of Bernoulli trials. There are times when this makes sense, particularly in certain complex situations. But no one, and I mean no one, no matter how hard core of a probability and statistics person, chooses to define "the percentage of 6* champs in a crystal opening" as "the average Bernoulli value of the sequence of crystal openings, given a Bernoulli value of 1 for drops that match the criteria of being 6* rarity." That would be like defining the number of drops in terms of set theory.
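As a concrete sketch of that Bernoulli encoding (the rarity names and drop rates below are made up for illustration):

```python
import random

random.seed(42)

# Assumed drop rates, purely for illustration
RARITIES = ["7*", "6*", "5*"]
WEIGHTS = [0.01, 0.24, 0.75]

# Simulate a long run of crystal openings
openings = random.choices(RARITIES, weights=WEIGHTS, k=10_000)

# Bernoulli encoding: 1 if the drop is a 6*, else 0
encoded = [1 if drop == "6*" else 0 for drop in openings]

# The mean of the encodings is exactly the observed 6* proportion
proportion = sum(encoded) / len(encoded)
print(f"Observed 6* proportion: {proportion:.3f}")  # close to 0.24
```

The "average" here is an average of the 0/1 encodings, not of the crystal drops themselves, which is precisely the distinction being argued about.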
But why bring all of this into the discussion in the first place? Why bring up the Law of Large Numbers, its weak variation, Bernoulli trials, plus the Gambler's Fallacy on top? Could it be because the Wikipedia page for "Law of Large Numbers" mentions all of these things within the first couple of pages? Let's see here. Ah:
"More formally, the LLN states that given a sample of independent and identically distributed values, the sample mean converges to the true mean."
That exact phrasing sounded familiar. Note the precise wording. Given a sample of values "the sample mean converges to the true mean." Someone who knew what they were talking about would not combine the two ideas like it is a triviality. They would not say "the accurate claim is that over sufficiently large batches of crystals the sample mean should converge a.s to the true mean." Not if they were trying to perform a technical nit-pick.
Someone who knew what they were talking about would have understood my original post in context. Someone who knew what they were talking about and wanted to nit-pick me would have known what the point of contention was. Someone who knew what they were talking about would not confuse crystal openings with Bernoulli refactorizations.
And someone who knew what they were talking about would not give themselves away by essentially leaking Wikipedia all over the place.
More importantly, suppose (and this is never going to happen, but let's just say for the purposes of discussion) everyone and their dog decides to do a p-value based test of crystal drop rates. You'd now have thousands of players all reporting different p-values associated with their tests. You see the problem? If we assume, say, that 0.001 is a very high confidence threshold, what happens if ten thousand people conduct the same test? You're now looking at a one-in-a-thousand chance of something being wrong, being tested by ten thousand people. What happens when ten players "prove" the crystals are broken with p-value 0.001?
If a hundred players test a million crystals each, p-value analysis would be interesting. If a million players test a hundred crystals each, p-value analysis would be less interesting. Note that the exact same number of crystals are being observed. This very, very lightly brushes past the issue of p-hacking. Definitely worth investigating, for people with any curiosity in this area.
Simple probability is difficult enough to explain. But arm people with p-value testing strategies and I think there's at least the potential for a meta-analysis dumpster fire.
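That multiple-testing worry is easy to simulate: hand thousands of simulated players perfectly fair crystals, have each run a significance test, and count how many still "detect" a problem. All numbers below are invented for illustration:

```python
import random
from math import comb

random.seed(0)

def pvalue_lower(k, n, p):
    """One-sided binomial p-value for seeing k or fewer successes."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

N_PLAYERS = 2_000   # hypothetical number of testers
N_CRYSTALS = 200    # crystals each tester opens
TRUE_RATE = 0.05    # the crystals really are fair at 5%

false_alarms = 0
for _ in range(N_PLAYERS):
    hits = sum(random.random() < TRUE_RATE for _ in range(N_CRYSTALS))
    if pvalue_lower(hits, N_CRYSTALS, TRUE_RATE) < 0.05:
        false_alarms += 1

# Even with perfectly fair crystals, a few percent of testers
# will "prove" the drop rate is too low at the 0.05 level.
print(f"{false_alarms} of {N_PLAYERS} testers cried foul")
```

Dozens of forum posts "proving" the rates are broken would be the expected outcome even if nothing were wrong.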
The one oddity that showed up in my analysis of crystal openings back in the day was consecutive correlation: the odds of a drop being identical to a previous drop. The odds of that happening appeared to be very slightly higher than I would have expected at the time. However, it was only slightly higher, and within the margin for error, and even if it was correlated as much as I was seeing in my data it would not be the sort of thing that players would be able to notice.
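A sketch of how one could measure that consecutive-repeat rate against the independent-drops baseline (the rarity names and weights here are invented, not the game's actual rates):

```python
import random

random.seed(7)

RARITIES = ["2*", "3*", "4*", "5*"]
WEIGHTS = [0.40, 0.35, 0.20, 0.05]  # made-up rates for illustration

drops = random.choices(RARITIES, weights=WEIGHTS, k=50_000)

# Observed rate at which a drop matches the drop right before it
repeats = sum(a == b for a, b in zip(drops, drops[1:]))
observed = repeats / (len(drops) - 1)

# For independent drops the expected repeat rate is sum(p_i^2)
expected = sum(w * w for w in WEIGHTS)

print(f"observed repeat rate {observed:.4f} vs expected {expected:.4f}")
```

A consistent, out-of-margin gap between the two numbers over a large sample would be the kind of correlation signal described above.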
Nobody really streams huge numbers of crystal openings any more, so sources of large uncontaminated data, along with my willingness to stare at thousands of crystals being opened one at a time, have both gone the way of the Dodo.
If the pseudo-random number generator is using several factors as a seed, e.g. time of day, geolocation (probably not this one), alliance_id, etc., it's theoretically possible the same numbers are coming up more often than not when crystals are opened at the same time or in batches. It doesn't change that it's random, but...
Unless the whole game is written in BASIC with global variables, this seems unlikely to me.
But let's say that somehow this is true: the code is taking all sorts of player sensitive information and using it to seed the RNG. If *all* of it was deterministic, that would be so broken we'd see the results of that. We'd see sequences of crystal drops repeat in a very noticeable way. So that's not realistically possible. Let's say instead that some of the information is that sort of thing, and the rest is some more reasonable source of entropy like the time, or a random pool. In that case, the most likely and reasonable way to take all that information and use it to seed the RNG would be to hash it all together. And if you hash deterministic low entropy information with high entropy random sources, what you get is still a reasonably random hash with reasonably random bits.
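That hash-everything-together approach can be sketched in a few lines (field names like player_id and alliance_id are hypothetical stand-ins, not anything from the actual game):

```python
import hashlib
import os
import time

def make_seed(player_id: str, alliance_id: str) -> int:
    """Hash low-entropy identifiers together with real entropy.

    Even if player_id and alliance_id are completely predictable,
    mixing in os.urandom output means the resulting seed bits are
    still effectively random.
    """
    h = hashlib.sha256()
    h.update(player_id.encode())
    h.update(alliance_id.encode())
    h.update(str(time.time_ns()).encode())  # clock adds a little entropy
    h.update(os.urandom(32))                # high-entropy system source
    return int.from_bytes(h.digest(), "big")

# Same identifiers, yet the seeds differ because of the entropy mixed in
s1 = make_seed("player123", "alliance456")
s2 = make_seed("player123", "alliance456")
print(s1 != s2)  # True
```

The point is that one good entropy input is enough: the hash output stays unpredictable no matter how boring the other inputs are.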
It isn't hard to make a broken RNG. I've seen them, and I even helped address one in another game. But that one was so broken it was pretty obviously broken. It is not easy to make an RNG that is broken, but not too broken. Not impossible, but not easy to just do by accident.
I've used an awakening gem at least 5x only to awaken said champ within my next couple crystals.....
Or how I opened 5.....yes 5 abyss nexus crystals chasing ONLY HERC.....0-50....One day I finally got him randomly....then he was in two of my next three after that.....there are so many astronomically impossible things that happen in this game ....I refuse to believe that there is just some random number generator picking these ...NO CHANCE. I won't use the word rigged. Uncomfortably suspicious is a better fit.
Had all the resources ready to R4 and ascend Bullseye.
You'd think I'd pull him at least once out of 28 crystals...
But no. Pulled everybody else.
Except Bullseye.
Actually make that 29 coz I opened another one a day later.
I have no words for it, other than I'm never saving for features ever again.
Ps. Bullseye was the only champ I wanted. Naturally 🤦♀️
Operation MK Ultra comes to mind, where the US government (CIA) famously attempted to inject subjects with LSD to see if the drug had the potential for mind control, coupled with other methods such as electroshock therapy. Other crazy-sounding ones, like research and testing into the so-called 'gay bomb' (an experimental explosive designed to alter pheromones upon detonation for use in the Vietnam war), also turned out to be true, though, unsurprisingly, it was a complete failure. So can you really blame people for believing in other conspiracy theories that seem positively tame by comparison?
"I invite anyone to guess what you mean by 'observed counts matter less as the sizes grow larger.'"
Observed count: The number of successes
Expected count: the success rate times n (the number of trials), i.e. what you EXPECT your count to be. Yes, I do expect people to follow MY wording, because when I use certain words it's because they are STANDARD. See any explanation of the chi-square test statistic, for example. You say things like "measured result" and "statistical average", which are NOT standard, and guess what: as I already illustrated, the precise wording matters a lot when you're phrasing certain claims. Now:
“There are times when we decide to calculate statistics across Bernoulli trials, and we do that by assigning one result the value "1" and the other the value "0". We can then calculate things like averages. But that's not the average of the sequence. Rather, it is the average of the Bernoulli encodings of that sequence as interpreted as a sequence of Bernoulli trials.”
ALL of this- and I do mean every single word- to say: “Yes, the exact thing you claimed is true. I will now proceed to dig my heels even further into the sand because walking back and agreeing is somehow appalling to me”. For the audience I'll list the things you say and compare it to what I said:
-You: “There are times when we decide to calculate statistics across Bernoulli trials, and we do that by assigning one result the value "1" and the other the value "0".
-Me: That's why I mentioned using Bernoulli trials for each candidate. A 1 for a successful pull of the rarity of interest and a 0 for a failure- any other rarity
- You: “We can then calculate things like averages”
- Me: “You can ABSOLUTELY have a mean if you phrase the opening in terms of successes and failures.”
I should accuse you of plagiarism. Now:
“That exact phrasing sounded familiar. Note the precise wording. Given a sample of values”
THIS is your hangup? This would border on humorous if it weren't so silly. The only reason I could think to even bring this up is if you somehow thought that we can't discuss this because, say, 6-star isn't a "value", or any other rarity for that matter. You can substitute a value, though, for the question of application, such as 6 for 6-stars, 5 for 5-stars, etc. You could then phrase it as the average of those values over your openings, and the LLN would claim that the sample mean approaches the weighted sum of the values (where the weights are the proposed drop rates). The Wikipedia article you linked has a similar example involving dice rolls. All this to say that I guess there isn't an "average" coin flip (to borrow the Wikipedia example), but that is not the same as saying you can't invoke the LLN when speaking about long-term behavior, which is what you are conflating, and you are just wrong. Wrong. Incorrect. Misinformed. Etc.
Now, why did I not opt to do it in the dice-roll way? Because the question I responded to was one concerning why they had observed so few 6-stars over their openings. THAT'S ALREADY FRAMED AS A BERNOULLI EXPERIMENT. They only had one outcome of interest, with all else being a failure. When I told them that they should be appealing to the LLN, sample mean and true mean, I was already discussing the exact formulation I've provided on two occasions. You borderline quoted it back to me, so I'll assume you completely acknowledge that what I said was correct.
Which is where you came in with that extrapolation about the observed count growing further while the proportions are what actually approach one another… Yes, duh. That's why we are using measures of proportion: the mean and sample mean. Funnily enough, the article you posted even mentions that exact point RIGHT UNDER THE BERNOULLI FRAMING PARAGRAPH: “The LLN only applies to the average of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the sum of n results gets close to the expected value times n as n increases.”
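The quoted point, that the average converges while the raw count does not, is easy to demonstrate numerically; the 1% rate below is an arbitrary stand-in:

```python
import random

random.seed(1)

P = 0.01  # assumed drop rate, for illustration
results = []
for n in (1_000, 100_000, 1_000_000):
    hits = sum(random.random() < P for _ in range(n))
    count_gap = abs(hits - P * n)   # raw count vs expected count
    prop_gap = abs(hits / n - P)    # observed proportion vs true rate
    results.append((n, count_gap, prop_gap))

for n, count_gap, prop_gap in results:
    print(f"n={n:>9}: |count - expected| = {count_gap:7.1f}, "
          f"|proportion - p| = {prop_gap:.6f}")
# The raw-count gap typically drifts wider (it scales like sqrt(n)),
# while the proportion gap shrinks toward zero.
```

That is exactly the distinction between the sample mean converging and the observed count "catching up" to the expected count, which it never promises to do.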
I didn't expound on that point, but my phrasing had ZERO misconceptions in it. You misread or misunderstood what I wrote, then incorrectly claimed that I was appealing to or claiming something which I wasn't (something you have a really bad habit of doing, I see). If you're so sure, quote the EXACT line back to me where I make any claim about the observed count approaching the expected count. I'll wait. What you WILL find, which you seem to keep conceding, is that the sample mean will converge to the true mean which, again, does not care about observed and expected counts.
The rest are just the ad hominem ramblings of someone with little of substance to contribute so I won’t respond to that.
“In any case, it is true that if you decide to completely refactor this discussion in terms of Bernoulli trials - which no one including you originally did”
THIS is why I say don't mistake a lack of understanding on your part for an error on mine. As I supplied above, there was ample reason to assume Bernoulli refactoring because the poster cares ONLY ABOUT ONE RARITY. You can bend over backwards trying to say "that's not what you meant", but unless you're the guy in my pfp (and you come off as a bit less astute) this is you just forcing a viewpoint down my throat to try and walk backwards blindly into some semblance of being correct. You aren't.
Could it be that the person I was responding to literally appealed to the Gambler's fallacy?? No! That couldn't possibly be it! Then it's impossible that I pointed that out to them and chose to supply them with what they COULD appeal to (the LLN), and that I chose to mention the WLLN because, if you're at all familiar with the literature on the subject, it's NATURAL to mention them in close proximity if you care about the specifics of the convergence properties- which I do. Gah! Got me again, I see!
Also I opened three paragon crystals and duped my only 7 star that needs a dupe--hawkeye
But I don't want to, having struck out 29 times I'll just stick to basics for the 7* shards.
This round I’ve pulled Xpool and Ikaris 😖
So I bought 10 normal ones, pulled Werewolf, Hulkling, CGR, Absman and Warlock…all 1-time pulls…no joke…got a Doom and Kingpin too, but they are both at max sig.
So, only normal ones for me going forward 🥳