**UPDATE - iPAD STUCK FLICKERING SCREEN**
The 47.0.1 hotfix, which addresses the issue of freezing and flashing lights on loading screens when trying to enter a fight, along with other smaller issues, is now available to download through the App Store on iOS.
More information here.
If You Have Ever Wondered About MCOC's Damage Cap:
DawsMan
Member Posts: 2,169 ★★★★★
This post is for you.
I saw this post yesterday and was like, wait a second, I've seen that number before. Today Beroman posted a video about the universal MCOC damage cap.
Same number. That prompted me to do a bit of research, and as the screenshot suggests, 2,147,483,647 is the largest signed 32-bit (4-byte) integer in most programming languages. It is the highest number that can be stored in that type, not just displayed. There are exceptions if someone is hacking, or if the software uses a newer, wider type that can hold ridiculously large numbers. This is also why, when I played Harry Potter Years 5-7 on my Nintendo 3DS XL, I turned on all the multipliers but was never able to get more than 2.147 billion studs.
So, it turns out this number is just a product of an internal 'cap' or limit built into the language MCOC was coded in.
I used these sites as references: https://en.wikipedia.org/wiki/2,147,483,647
https://www.networkworld.com/article/3010974/whats-so-special-about-2147483648.html
Nervously twiddling my thumbs, waiting to see if DNA3000 has something, or a lot of things, to add...
34 Comments
https://forums.playcontestofchampions.com/en/discussion/186095/who-are-your-biggest-damage-dealers#latest
If you include negative numbers, then half of the possible values you can represent are “taken up” by negative numbers. If you exclude negative numbers, you could reach roughly double the number shown on the screen.
For example, if you had 21 possible values (including 0) and allowed negatives, you would have a range of -10 to +10. But if you excluded negative numbers, you would be able to show 0 to 20. Same number of values; one is signed and one is unsigned.
But back in the days of OG Lego Batman on the DS, they might not have been a thing; I can't remember.
For those wondering, there's also a gold cap: it is 2^53-1. (https://forums.playcontestofchampions.com/en/discussion/194014/the-gold-cap)
The damage cap is the maximum signed integer you can represent in a 32 bit integer. The gold cap is the maximum integer value you can represent in 64 bit standard floating point numbers. You see these kinds of limits all the time in most computer software that isn't explicitly written with infinite precision math libraries (which would be overkill and slower for something like MCOC to use).
This stuff is not as complicated as it might seem for those not familiar with binary notations and representations. You just need to recognize that computers have two fundamental problems when it comes to storing and manipulating numbers. The first one is one that might sound initially obvious, but has a less obvious catch. Computers represent numbers (and everything else) in binary. The computer stores ones and zeros only. Most people know that. Most people don't realize this means there is no direct way for computers to store minus signs or decimal points. Numbers are just ones and zeros.
So when computers store numbers, we need to appropriate some of those binary bits to represent the sign of the number, and sometimes where we are supposed to put the decimal point. For negative numbers, computers use a solution that is very analogous to a car odometer. If we try to roll an odometer backwards, we eventually reach zero. If we keep going, 000000 rolls back to 999999, then 999998, and so on. If we *define* 999999 to be "negative one" and 999998 to be "negative two", we can do negative numbers in an odometer that has no sign. Computers do this, just in binary.

And the way computers deal with fractions is similar to how calculators deal with numbers too big to fit on the display. If a calculator cannot fit 213476283954847 on the screen, it instead shows something like 2.13476 E14. The "E14" says "the decimal place is actually fourteen places to the right from where it shows on the screen." Computers do something similar, storing fractional numbers by storing all the digits, and separately storing where the (binary) decimal point should go.
Using these rules, you can figure out the biggest and smallest numbers you can store in those formats, just as you can tell the largest mileage a car odometer can show by counting its digits. For a car, if you see six digits, you know the largest number you can store is 999999. Another way of putting this is that the biggest number you can store is the biggest number that doesn't overflow. If you have six digits, overflow happens when you need seven digits. The smallest number that overflows is 10^6 = 1000000. The biggest number you can store is one less than that: 10^6 - 1. Exactly the same thing happens in binary. The biggest number you can store in 32 bits is 2^32 - 1. If one of those bits is stuck representing the sign, then you only have 31 bits for the number, and the max you can store is 2^31 - 1.
But if you want to show negative numbers (which is usually the default), you'd need to use one of those 32 bits as a marker for whether the value is positive or negative. So you can only use the remaining 31 bits to tell the computer how high the value is, meaning 2,147,483,647.
Just second-guessed myself on whether, now that we compile code for 64-bit processors, some of those defined variable types are actually dependent on which processor (32-bit vs 64-bit) you are compiling for. Or whether an int is always a fixed number of bits no matter the target processor (with long being double that amount).
Or, when compiling for 64-bit processors, does that same code (specifying int and long types) get treated as variables with twice as much storage instead of their previous meanings? In which case, a long in a 64-bit compiled program would have some very huge, wicked number available.
(edited, forgot that byte is the lowest at 8 bits, and short is actually the next up at 16 bits, etc.)
Six digit number. The maximum value is 999999. That's 10^6 -1. Why minus one? Because 10^6 is a seven digit number (1,000,000). 10^6 is the first number that doesn't fit. 10^6-1 is the biggest number that still fits.
What if you need space for a minus sign? Then you only have five spaces for actual digits, and one space reserved for the minus sign. So now the biggest number you can store is 10^5-1. That's 99999.
If the largest number we can fit in there is 10^6-1, how many total numbers are possible? Well, there are 10^6-1 numbers from one to 10^6-1 (in other words, there are exactly 999,999 numbers from 1 to 999,999, of course). You can also store zero as 000000. So the total number of possible values you can display is (10^6-1) + 1 = 10^6.
There are thus one million different numbers (10^6) you can display in a six digit display. 999,999 different numbers from one to 999,999, and zero.
Change all the tens to twos and the nines to ones, and this basically describes the situation in binary.
Another side bit of trivia: why can the "negative" numbers go one numerical spot higher in magnitude than the "positive" numbers can?
Answer: because zero is treated as basically a positive number (it occupies the first position on the positive side of the scale), while the negative side of the scale begins at -1, so it can top out at a magnitude one higher than what you can reach on the positive side.
ie (in simplistic 2-bit signed notation), you can have -2, -1, 0, 1.
Dawsman: This is the damage cap 'cause of a 32-bit integer
The first few posts : *Lego games*