Thought ChatGPT could help. Guess not...

Awesomep12 Member Posts: 1,489 ★★★★
edited September 25 in General Discussion
[screenshots of the ChatGPT conversation]

Comments

  • Awesomep12 Member Posts: 1,489 ★★★★
    Couldn't post it in the correct order. It's in reverse.
  • ahmynuts Member Posts: 7,734 ★★★★★
    It knows something we didn't, obviously
  • JLordVileJ Member Posts: 4,859 ★★★★★
    Nah, they let something slip, AI gonna change the sacred timeline y'all
  • Drago_von_Drago Member Posts: 979 ★★★★
    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    That’s what resonated with me. People ask these systems a question and assume the response is valid. In this case, we can see how wrong it is, but if you asked about a topic you knew nothing about, you could get misled very easily.
  • JLordVileJ Member Posts: 4,859 ★★★★★
    edited September 25
    Imagine ChatGPT told someone to grind 1k units and use them all on premium hero crystals.
    Also, why is it premium HERO crystals?
  • captain_rogers Member Posts: 10,073 ★★★★★
    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I mean, ChatGPT is really good for academics, and definitely good for coding if you at least have a basic idea about the language/framework you're working with. It can't answer niche topics like a specific mobile game (MCOC here), since these models train on the data users give them (I believe GPT-4 can browse the net, but the niche info is not available on the internet either).
  • kvirr Member Posts: 788 ★★★

    Gotta rely on your fellow human community members for advice instead, it looks like, haha

    Could you give me a 7* Titania (it's worth a shot at asking, right?)
  • Buttehrs Member Posts: 6,251 ★★★★★
    Living Tribunal as a playable champ in 2025?
  • Awesomep12 Member Posts: 1,489 ★★★★
    It was a fun experiment, and I learned that I won't be needing GPT for MCOC. Thank god I didn't learn that the hard way XD
  • DNA3000 Member, Guardian Posts: 19,841 Guardian
    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I would argue they do know things, but what they know is subtle. When you ask them a question, what they know is "how might a person answer the question: fill in the blank." What that means exactly is an unresolved debate.

    The real problem is that the data they are trained on does not, in general, contain the null result. In other words, if you were to ask a million people to answer the question "what is the capital of Nebraska" you'd get a lot of right answers, a bunch of wrong answers, and a bunch of people who would simply not answer because they don't know.

    LLMs cannot be trained on the null response. They never learn that the correct answer can ever be: I don't know.

    That's why they hallucinate. They are essentially mimicking the behavior of people who always answer something, even when they don't know or are wrong. So of course, when they don't know they make up an answer, because their training says there is always an answer.

    You could also argue that this is not the fault of the LLMs directly, but rather a design decision built into the way the system generates results. LLM designers invented the temperature parameter to tune the output to be less robotic, but no one has seriously tried to invent a parameter or set of parameters to quantify when the LLM "doesn't know." They are designed to be conversational, not cautious.
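    For the curious, here is a minimal sketch of what that temperature knob does at sampling time (illustrative Python; the candidate scores are made up, not pulled from any real model):

        import math
        import random

        def sample_with_temperature(logits, temperature=1.0):
            # Dividing the raw scores by the temperature sharpens the
            # distribution when temperature is low and flattens it when high.
            scaled = [score / temperature for score in logits]
            # Softmax, with max-subtraction for numerical stability.
            peak = max(scaled)
            exps = [math.exp(s - peak) for s in scaled]
            total = sum(exps)
            probs = [e / total for e in exps]
            # Draw one candidate index according to those probabilities.
            return random.choices(range(len(probs)), weights=probs, k=1)[0]

        # Hypothetical scores for three candidate next tokens.
        logits = [2.0, 1.0, 0.2]
        print(sample_with_temperature(logits, temperature=0.1))  # almost always 0
        print(sample_with_temperature(logits, temperature=2.0))  # much more varied

    At low temperature the model almost always emits its top-scoring candidate; at high temperature it spreads its picks around. Either way, it must pick something from the candidates it scored, which is the "always answers" behavior described above.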
  • JLordVileJ Member Posts: 4,859 ★★★★★
    DNA3000 said:

    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I would argue they do know things, but what they know is subtle. When you ask them a question, what they know is "how might a person answer the question: fill in the blank." What that means exactly is an unresolved debate.

    The real problem is that the data they are trained on does not, in general, contain the null result. In other words, if you were to ask a million people to answer the question "what is the capital of Nebraska" you'd get a lot of right answers, a bunch of wrong answers, and a bunch of people who would simply not answer because they don't know.

    LLMs cannot be trained on the null response. They never learn that the correct answer can ever be: I don't know.

    That's why they hallucinate. They are essentially mimicking the behavior of people who always answer something, even when they don't know or are wrong. So of course, when they don't know they make up an answer, because their training says there is always an answer.

    You could also argue that this is not the fault of the LLMs directly, but rather a design decision built into the way the system generates results. LLM designers invented the temperature parameter to tune the output to be less robotic, but no one has seriously tried to invent a parameter or set of parameters to quantify when the LLM "doesn't know." They are designed to be conversational, not cautious.
    Meanwhile, computer science teachers I know: ah yes, so a CPU is where all the electrical stuff is innit
  • DNA3000 Member, Guardian Posts: 19,841 Guardian

    DNA3000 said:

    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I would argue they do know things, but what they know is subtle. When you ask them a question, what they know is "how might a person answer the question: fill in the blank." What that means exactly is an unresolved debate.

    The real problem is that the data they are trained on does not, in general, contain the null result. In other words, if you were to ask a million people to answer the question "what is the capital of Nebraska" you'd get a lot of right answers, a bunch of wrong answers, and a bunch of people who would simply not answer because they don't know.

    LLMs cannot be trained on the null response. They never learn that the correct answer can ever be: I don't know.

    That's why they hallucinate. They are essentially mimicking the behavior of people who always answer something, even when they don't know or are wrong. So of course, when they don't know they make up an answer, because their training says there is always an answer.

    You could also argue that this is not the fault of the LLMs directly, but rather a design decision built into the way the system generates results. LLM designers invented the temperature parameter to tune the output to be less robotic, but no one has seriously tried to invent a parameter or set of parameters to quantify when the LLM "doesn't know." They are designed to be conversational, not cautious.
    Meanwhile, computer science teachers I know: ah yes, so a CPU is where all the electrical stuff is innit
    To be fair, LLMs are 800 pounds of math with two coats of computer science paint. If your job title is "computer science teacher," LLMs might as well be voodoo magic.
  • EdisonLaw Member Posts: 8,181 ★★★★★
    OP, can you ask GPT why the AI is such a **** in this game?
  • peixemacaco Member Posts: 3,460 ★★★★
    Why not ask on the forums?
    AI only has data up to some cutoff date.
    And as @DNA3000 said, sometimes it just answers at random...
  • Awesomep12 Member Posts: 1,489 ★★★★
    @peixemacaco it was an experiment
  • SummonerNR Member, Guardian Posts: 13,169 Guardian
    edited September 26
    Wonder if you asked specifically for a “Champ Released in 2024”, whether it would have given more precise (year-release) candidates.

    Instead of it interpreting the question as “in 2024, what is a good champ….” or “what character in 2024”, neither of which specifically says you only want one that was released in 2024.
  • Emilia90 Member Posts: 3,502 ★★★★★

    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I mean, ChatGPT is really good for academics, and definitely good for coding if you at least have a basic idea about the language/framework you're working with. It can't answer niche topics like a specific mobile game (MCOC here), since these models train on the data users give them (I believe GPT-4 can browse the net, but the niche info is not available on the internet either).
    I refuse to believe that people think GPT doesn't work for academics. It may not handle expert or complex questions, but it can do enough to help with a lot of stuff.
  • captain_rogers Member Posts: 10,073 ★★★★★
    Emilia90 said:

    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I mean, ChatGPT is really good for academics, and definitely good for coding if you at least have a basic idea about the language/framework you're working with. It can't answer niche topics like a specific mobile game (MCOC here), since these models train on the data users give them (I believe GPT-4 can browse the net, but the niche info is not available on the internet either).
    I refuse to believe that people think GPT doesn't work for academics. It may not handle expert or complex questions, but it can do enough to help with a lot of stuff.
    True. I wrote two of my project papers and studied for all of my undergraduate exams via ChatGPT. I am currently using ChatGPT to do most of my work in my internship. It all depends on how you interact with it and how smartly you prompt it to give the answers you want.
  • Awesomep12 Member Posts: 1,489 ★★★★

    Wonder if you asked specifically for a “Champ Released in 2024”, whether it would have given more precise (year-release) candidates.

    Instead of it interpreting the question as “in 2024, what is a good champ….” or “what character in 2024”, neither of which specifically says you only want one that was released in 2024.

    That's true
  • Sundance_2099 Member Posts: 3,488 ★★★★★
    edited September 26
    Like I said before, people out here are expecting something like JARVIS or KITT (eighties kids represent) and what they're getting is something like JARVIS or KITT after a lobotomy.

    A lot of this AI stuff is a load of garbage, and it's bad for the environment: it needs loads of power, which wastes resources, and it keeps computers running for longer, so they wear out quicker and make more waste when they're junked. I know some good has come of it, but this ChatGPT stuff is just absolute pony and trap.
  • Fit_Fun9329 Member Posts: 2,198 ★★★★★
    OP, any chance you are between 14 and 17 years old?
  • Grootman1294 Member Posts: 932 ★★★★
    Now that people are here, who IS the best counter to the almighty Drax, who is invisible to the naked eye?
  • Awesomep12 Member Posts: 1,489 ★★★★

    Like I said before, people out here are expecting something like JARVIS or KITT (eighties kids represent) and what they're getting is something like JARVIS or KITT after a lobotomy.

    A lot of this AI stuff is a load of garbage, and it's bad for the environment: it needs loads of power, which wastes resources, and it keeps computers running for longer, so they wear out quicker and make more waste when they're junked. I know some good has come of it, but this ChatGPT stuff is just absolute pony and trap.

    I wasn't even expecting BB-8. My bar was way low.
  • Awesomep12 Member Posts: 1,489 ★★★★

    OP, any chance you are between 14 and 17 years old?

    Why do you ask?
  • Awesomep12 Member Posts: 1,489 ★★★★
  • EdisonLaw Member Posts: 8,181 ★★★★★
    edited September 26
    Emilia90 said:

    Wicket329 said:

    I’ve said it before and I’ll say it again: Large Language Models (what has been passing for AI recently) are incredibly stupid. They do not know anything. All they can do is look to see which words frequently appear near each other and stick them into a programmed capacity for grammar to imitate thought.

    Obviously countering Drax isn’t a real problem you’re having, but just saying this for everybody’s benefit: DO NOT USE CHATGPT OR ANY OTHER AI SYSTEM TO ANSWER YOUR QUESTIONS.

    I mean, ChatGPT is really good for academics, and definitely good for coding if you at least have a basic idea about the language/framework you're working with. It can't answer niche topics like a specific mobile game (MCOC here), since these models train on the data users give them (I believe GPT-4 can browse the net, but the niche info is not available on the internet either).
    I refuse to believe that people think GPT doesn't work for academics. It may not handle expert or complex questions, but it can do enough to help with a lot of stuff.
    Agree. I use it all the time for my studies, especially for biology and chemistry.
  • JLordVileJ Member Posts: 4,859 ★★★★★
    I actually never use ChatGPT. In what way do y'all use it for academics? Exam questions? Personally, I just use websites that give past exams or past exam questions by topic and difficulty.