I have a bit of trouble digesting this whole “feel” thing... if I can consistently produce the “bug”, how is it just a feel thing?

I think all it means is that they couldn’t definitively find the problem by feel alone. They mention that in the beta test they gave half the players a fix and half a control with no fix, and the results were inconclusive because players were asked to “feel” whether it was wrong: 61% of the players with the fix said it was better, and 63% without the fix said it was better. Even Android users reported the fix feeling better or worse. That shows it was a psychological change, not anything measurable. All it means is that this particular test didn’t work. It would be like a coffee company testing a new blend: the test group gets the new coffee, the control group gets the old one, and the control group says their coffee tastes better than before, even though it’s the same coffee. They’re not saying the bug is just a feel thing; the feel-based test was one way they tried to pin it down, and in the end they decided it wouldn’t work. They then built the robot, Sir Tapsalot, to give them something they could actually measure. So instead of asking you or me whether parry felt better or worse, they made a robot that can map exactly when the input window for parry is.

But how did they select the players? I know people who’ve been playing for 6 years who still get hit by Iron Man’s SP1. What good is their input to this process?

They’ll have been chosen randomly from those who signed up for the beta. Any statistical test uses random selection to make sure there’s no bias.

That’s not correct, though. A bias toward getting a more informed opinion is not wrong. It won’t represent everyone, but this isn’t a referendum on people’s rights; it’s a technical issue they are trying to pin down. With a timing window that small, they should have gone to the CCP, Summoner Showdown finalists, or tier 1 AW players.
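For what it's worth, the "inconclusive" 61% vs 63% split is easy to sanity-check with a standard two-proportion z-test. A minimal sketch (the group sizes here are assumed for illustration; the post doesn't state them):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 61% of the fix group vs 63% of the control said it felt better.
# Group sizes of 200 are hypothetical.
z, p = two_proportion_z(0.61, 200, 0.63, 200)
print(f"z = {z:.2f}, p = {p:.2f}")  # nowhere near significance
```

With groups of a few hundred players, a 2-point gap is well within noise, which is consistent with the conclusion that the feel survey measured nothing real.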
While the robot is a brilliant way to get at actual measurement, and to test fixes with a controlled, "fixed" input and response, it will definitely be impractical to test on even the few models of iOS devices, let alone the many manufacturers and models of Android phones and tablets. If I could suggest: you might be able to use something like the parry training ground to crowdsource data for a beta test. Have something where you have to parry, say, 10 medium hits from 5-6 different characters, and you should be able to gather a lot of data from hundreds or thousands of players doing the exact same test on the same characters. I'm not saying this would prevent a robot uprising... but maybe delay it a bit.

Unfortunately, it won't help. As we mentioned in the post, there is no way for us to measure this in software, because the issue resides between the Game Engine and the Hardware.

Absolutely, I get that you can't actually measure the input timing that way. What I was thinking was more a way to quantify whether the fixes worked for players. So if you repeated the "Does parry/dex feel better?" experiment, but instead of asking about feel you could measure that it took, say, 16 mediums to get 10 parries on average for those on the released version, versus, say, 12 mediums to get 10 parries on the hotfix beta, then no matter what players said parry felt like, you would have data showing that parry was working better with the hotfix. It just seemed like that type of data (parry success rate) could be gathered, since the training mode already recognizes when you've landed a successful parry. But I have no real idea whether that data is available under the hood or not.

Tapsalot eliminates the need for that kind of testing. Prior to Tapsalot, the devs had to guess at what changes might have caused a change in player feeling.
But now that this can be quantified, either the new build replicates the old timing sufficiently well or it doesn't. Keep in mind the task for Kabam is not to change the game until everyone's Parry skills improve; the task is to replicate the old behavior. If they do that and players are still missing Parries, that is a psychological problem in players' heads that the game client can't fix, and it can only be solved by players readjusting to the old-new-old normal.

Tapsalot will eliminate the need for testing on any device that can actually be tested with Tapsalot. But automatically extending results found on an iPhone 12 Max to devices that haven't been tested with Tapsalot would be decidedly poor scientific method. It would be like me saying my research using mice is automatically applicable to humans just because both are mammals. And expanding the robot army to test the numerous different Apple and Android phones and tablets would rapidly become cost- and time-prohibitive. So really, what I was suggesting was something like Phase I / Phase II trials. In Phase I, Tapsalot gives rapid data collection in a closed system with a relatively small number of devices. My theoretical Phase II trial would then release the potential fix to hundreds or thousands of players on many different devices, so you could see whether the fix works in the "wild" of a beta.

You'd have to find testers whose feedback meant something. In their limited beta test, humans were statistically incapable of correctly identifying when the problem changed or didn't.

Which is exactly why you would never use any sort of feeling or self-reported response for the data. This theoretical trial would only measure parry success rate, or the number of medium attacks needed to get 10 parries.
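The proposed metric, mediums needed to land 10 parries, is straightforward to sketch as a simulation. The per-attempt success rates below are made up purely to reproduce the hypothetical 16-vs-12 numbers from the suggestion above:

```python
import random
import statistics

def mediums_for_parries(success_rate, target=10, rng=random):
    """Count how many medium attacks it takes to land `target` parries,
    treating each attempt as an independent success with the given rate."""
    attempts = parries = 0
    while parries < target:
        attempts += 1
        if rng.random() < success_rate:
            parries += 1
    return attempts

random.seed(42)
# Hypothetical per-attempt success rates: live build vs hotfix beta.
live = [mediums_for_parries(0.62) for _ in range(1000)]
hotfix = [mediums_for_parries(0.83) for _ in range(1000)]
print(statistics.mean(live), statistics.mean(hotfix))  # roughly 16 vs 12
```

A per-attempt rate of 0.62 averages about 16 mediums per 10 parries and 0.83 averages about 12, matching the example; the open question in the thread is whether differences like that reflect the client or the humans.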
As long as each group includes a similar cross-section of the player base, the average across each group should represent the actual difference between the current and potentially fixed versions. Granted, any study using human subjects will be biased, since you can only include people who want to be included. But if you run an actual test of a measurable skill instead of asking how it feels, you should get a reasonable approximation of how the change works in a live version, with people on different devices and different levels of wifi or cellular data, which is a wider pool than could be covered with Tapsalot.

Unfortunately, that doesn't work. It might seem that Parry success rates are objective data, but that presumes humans are robots that execute Parries in a predictable way, so that any difference in the data must be due to a change in the game client. That's not how humans work. We should take it as given that when a player reports "Parry got worse", they aren't making a random guess; they are actually seeing their Parries fail. The subjective part isn't whether players correctly remember their Parries working or not; it's whether the players themselves are doing the same things consistently. To put it in baseball terms, we can't tell the difference between a player whose game client changed behavior and a player who only thinks the client changed and gets the yips. The beta test they conducted suggests this is not just a theoretical problem: it actually happened to players who were asked to test the game and who were presumably trying to avoid exactly that problem. The only way to eliminate this problem would be a double-blind test: change half the players' clients without telling them and see whether problem reports change.
But that would be difficult, and maybe even slightly unethical, because you could be making some players' experience worse without telling them or letting them opt into testing voluntarily.
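Mechanically, a blinded split like the one described is usually done with deterministic bucketing rather than an opt-in list, so assignment is stable across sessions and invisible to the player. A hypothetical sketch (the function and experiment names are made up):

```python
import hashlib

def assign_bucket(player_id: str, experiment: str = "parry-fix-blind") -> str:
    """Deterministically assign a player to 'control' or 'treatment' by
    hashing their ID together with the experiment name. The same player
    always lands in the same bucket, with no client-side opt-in."""
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).digest()
    return "treatment" if digest[0] % 2 == 0 else "control"

# Roughly half of any player population lands in each bucket.
buckets = [assign_bucket(f"player{i}") for i in range(10000)]
print(buckets.count("treatment") / len(buckets))
```

Whether silently changing half the clients would be acceptable is the ethical question raised above; the hashing itself is the easy part.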
@Kabam Miike A question regarding the single-player compensation: if it contains revives or potions, it won't be useful for those who've already completed the available content. I hope Kabam has considered this aspect when deciding what the compensation should be.
I have, and I did. I also said that the fact that they have dismissed an Android problem is a kick in the teeth.
Better yet, do you play the game, and are you on iOS or Android? If it's iOS, just stop. If it's Android, I don't believe you. If you don't even play the game... I'd give it a 25% chance.
Thanks, but I really don't care to look at any of your posts. You made an assumption about me based on cherry-picking a quote, and also claimed I didn't read the entire post to begin with. And now you're going to pretend to be affronted when countered with the same? Come on. I elicited the exact response I expected; thank you for validating.
? I'm seriously confused. You were the one who "diverted" it here. I'm not getting into a conversation about this. I read it all, I understood it all. When I said "you lost me", it was because the assumption Kabam was making was that there were no Android dex/parry issues. I personally know there are, as do half my alliance and half of my in-game friends. That was part and parcel of my post. Just stop.
@Camby01, if you think Kabam are saying Android players don't have any issue with parry and dex, you really do need to re-read the post. They're saying Android players don't have *this particular* parry and dex issue.
"Stop" post? Your interpretation of a Kabam mod post does not prove (yes, that's the way to spell it) anything. Kabam has to show good faith, and IMO, over the course of the last couple of years, they haven't. I will, at that point, be more diligent in trying to hold them accountable.
It's not lag, it's not fps drop; it's been a persistent issue for months, if not years.
Swipe back to dex? Nope; just block for a tenth of a second and take a combo in the face. I'm happy for you if you haven't experienced any issues, but by all means, don't pretend that you speak for everyone. Like others, I have a flagship phone; there should be no compatibility issues. Is it me or the game? Come on, put down the Kool-Aid.
Based on what? I and everyone else I know who plays on Android have had issues from the outset. It seems like Kabam wants to chalk everything up to fps or whatever. Not buying it.

They actually say that Android devices are not affected. I'm not sure what you are trying to get at.

I think that, based on the data they presented, doing a medium parry on one Apple device doesn't exactly set the world on fire for your argument. I have learned to deal with the half-to-full-second freeze mid-fight, the inconsistent parry, the inconsistent dex. Does that mean the game isn't broken because I've learned to adjust? Again, happy you're not experiencing the same, but don't try to diminish others for whatever reason.
Is it safe to say Android players have been a false alarm till now and they're not getting any more compensation? (I'm on Android too, btw)

Not necessarily. I can't speak for Kabam, but when troubleshooting these kinds of very subtle systemic issues, the first rule to keep in mind is: finding a problem is not the same thing as finding *the* problem.
Is it safe to say Android players have been a false alarm till now and they're not getting any more compensation? (I'm on Android too, btw)

I wouldn't say that. One of the reasons we wanted to make this post is so that we can draw a line on what is and isn't this particular issue, because some Players were confusing other issues (like lag and stuttering) for it as well. We want to make sure we can address that stuff separately and ensure people continue to report other issues without thinking they're one and the same.

So correct me if I'm reading it wrong, but you believe the parry issues caused by the engine update are iOS-specific, and the updates to the lagging and stuttering in 32.3 and 33.0 *should* fix most issues for Android players?

I think they mean for everyone, both Android and iOS.

Sure, but I'm talking specifically about parry and dex. The way it's written seems to imply that any parry or dex issue an Android player experienced comes from the stuttering and lagging that is going to be fixed; ergo, after 32.3 and 33.0, Android players won't experience the parry or dex bug, but Apple players will until it's fixed in early 2022.
So… we're looking at roughly 6 months (including the previous months) before this even gets fixed… optimistically, lol.
Now you've left me anxious. I have an Android device, and I can't tell whether there actually was an issue on the game's side or if it's just me getting old and losing reflexes. Though given that most of my problems look like opponents' actions being mistimed, and that many people reported the same thing at the same time, I'm inclined to go with the former. Anyway, good to see where we are with this. And frankly, with the last couple of weeks in general, the player base needed this kind of info.