CFree wrote: » DTMelodicMetal wrote: » https://sciencing.com/meaning-sample-size-5988804.html @DNA3000 is the expert on this stuff, but the above video's margin for error being around 2-2.25% is believable

Doesn’t make much sense to me given that this is the opening of crystals governed by RNG and not a poll. Can you address this distinction?
DTMelodicMetal wrote: » will-o-wisp wrote: » That doesn't prove anything ... Just bad luck.

https://sciencing.com/meaning-sample-size-5988804.html

@DNA3000 is the expert on this stuff, but the above video's margin for error being around 2-2.25% is believable
will-o-wisp wrote: » That doesn't prove anything ... Just bad luck.
Spicyslicer wrote: » Well my luck seems bad. Grats to you all.
DNA3000 wrote: » CFree wrote: » Doesn’t make much sense to me given that this is the opening of crystals governed by RNG and not a poll. Can you address this distinction?

When you do a poll, you are measuring a subset of some whole. If there are a million voters, there is an actual preference for those voters: some actual number of them would vote one way and the rest would vote the other way (for example). A poll attempts to determine what the whole would do by looking at a representative subset. But how do you know the subset represents the whole? Statistics can calculate the odds that one thousand people picked randomly will match the preferences of the whole. The higher the sample, the more likely the poll matches the whole. In theory, a poll that measured all one million people would be 100% accurate. A poll that measures only one guy is obviously going to have a strong chance of being wrong, if the preferences are split in any reasonable way.

When we open crystals to figure out the drop odds, we aren't polling. There is no such thing as "all the drops in the world" that we are looking at a sample of. Instead, we presume there is a certain chance for each thing to drop, and we are attempting to measure those odds by repeatedly rolling the dice, so to speak. In that case, the margin for error is a quantitative measure of how likely a particular set of odds would be to generate the drops you actually see.

When you do that kind of test, the margin for error is largely based on the number of drops, not the number of pulls. Consider a simple case where we have a crystal that has a 1% chance to drop a 4* and a 99% chance to drop something else. You open 100 crystals and you get one drop. Your friend opens 100 crystals and gets two. The difference is only one extra drop, but that's the difference between one out of 100 and one out of 50. And what if your friend got his second drop on pull number 98? In that case, had you both tested by opening 97 crystals, you both would have gotten only one drop. A measurement of one drop is just too sensitive to a single lucky or unlucky pull.

For more complex math reasons, the margin for error is about 1/SQRT(n), where n is the number of drops. That means in the video above, the margin for error for the measured 4* rate is much higher than the margin for error for the measured 2* rate, because that video sees more 2* champs than 4* champs. The 4* margin for error is actually pretty big, but it's still suggestive of being in the general area.

Last thing: margin for error doesn't mean the true value is *definitely* that close. It is just *probably* that close.
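A quick way to see the "one drop is too sensitive" point is to simulate it. Here is a minimal Python sketch, assuming a hypothetical crystal with the 1% 4* rate from the example above (the rate and player counts are illustrative, not the game's actual odds):

import random

# Hypothetical crystal: 1% chance of a 4*, as in the example above.
RATE = 0.01
PULLS = 100      # crystals opened per player
PLAYERS = 10000  # simulated players

# Count how many 4* drops each simulated player sees in 100 openings.
counts = [sum(1 for _ in range(PULLS) if random.random() < RATE)
          for _ in range(PLAYERS)]

mean = sum(counts) / PLAYERS
spread = (sum((c - mean) ** 2 for c in counts) / PLAYERS) ** 0.5
print(f"average drops: {mean:.2f}, typical spread: {spread:.2f}")
# Expect roughly "average drops: 1.00, typical spread: 1.00" --
# one lucky or unlucky pull doubles or zeroes a one-drop estimate.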
CFree wrote: » DNA3000 wrote: » When you do that kind of test, the margin for error is largely based on the number of drops, not the number of pulls. ...

So the margin for error of a 4* drop within a PHC based on the video is higher than the margin for error of a 2* drop? Any implication that the margin of error for all the video's results is around 2% is incorrect?
DNA3000 wrote: » CFree wrote: » So the margin for error of a 4* drop within a PHC based on the video is higher than the margin for error of a 2* drop? Any implication that the margin of error for all the video's results is around 2% is incorrect?

If my calculations are correct, the video shows 12 4* champions, 159 3* champions, and 1829 2* champions. The approximate margin for error is 28.9% for 4*, 7.9% for 3*, and 2.3% for 2* champions. I think if someone believes the margin for error for that video is about 2%, they incorrectly projected the margin for error based on the pulls (2000) rather than the drops (different for each type).

All the percents can get confusing, so it would be better to express the margin for error this way: the 4* result is 12 plus or minus about 3.5, the 3* result is 159 plus or minus 12.6, and the 2* result is 1829 plus or minus 42.8. That means this one test by itself demonstrates that the odds of pulling a 4* are most likely to be between 8.5 and 15.5 out of 2000, or 0.43% to 0.78%.

This is a simplified analysis. Because the drops for each rarity are not independent (they have to add up to 100%, after all) the true margin for error is not exactly this, but it is close enough for our purposes here.
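For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same 1/SQRT(n) approximation, using the drop counts read off the video; it reproduces the figures above up to rounding:

from math import sqrt

PULLS = 2000
drops = {"4*": 12, "3*": 159, "2*": 1829}  # counts read off the video

for rarity, n in drops.items():
    relative = 1 / sqrt(n)   # relative margin for error, about 1/SQRT(n)
    absolute = sqrt(n)       # same margin expressed in raw drop counts
    low = (n - absolute) / PULLS
    high = (n + absolute) / PULLS
    print(f"{rarity}: {n} +/- {absolute:.1f} drops "
          f"({relative:.1%} relative), rate between {low:.2%} and {high:.2%}")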
CFree wrote: » DNA3000 wrote: » All the percents can get confusing, so it would be better to express the margin for error this way ...

Like most polls, the results were spun to support a position, but your analysis makes sense to me. Thanks.
Brew_Swayne wrote: » I've pulled four 4* from PHC in the last week. Have opened maybe 30 in that period. I've also gone a couple months without pulling a single 4* from them. I also opened a batch of 50 at one time and got none. It's luck. Dumb friggin luck.
DoctorofEvil wrote: » I've pulled a LOT of 4 star champs from PHC - SW, Magik, Dr. Voodoo, X-23 and Hyperion. But it's somewhere between every 50 and every 100. PHC = class ISO and 3* shards. Remember that. NEVER buy them.
KyleM wrote: » To be fair, this doesn't prove much. I opened a few random PHCs here and there and got two 4*s, so this doesn't really prove anything except that he has bad luck. Opening just a few seems to work better than mass openings, but that's just opinion. Also, just to add: the four stars were Mephisto and Stark Spidey.