Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.

  • Sibbo@sopuli.xyz

    I didn’t read more than the abstract. It sounds like they are arguing that hallucinations are inevitable because the LLM cannot know everything. But wouldn’t it be enough for the LLM to know what it knows, and therefore know what it does not know?
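
    Operationally I imagine something like thresholding the model’s own confidence. A made-up sketch (the `answer_probs` input and the 0.8 cutoff are hypothetical, just to make the idea concrete):

    ```python
    # Hypothetical sketch of "know what it doesn't know": answer only when
    # the model's own confidence clears a threshold.
    def answer_or_abstain(answer_probs: dict[str, float], threshold: float = 0.8) -> str:
        best = max(answer_probs, key=answer_probs.get)
        if answer_probs[best] < threshold:
            return "I don't know"    # abstain instead of guessing
        return best

    print(answer_or_abstain({"Paris": 0.95, "Lyon": 0.05}))               # Paris
    print(answer_or_abstain({"Paris": 0.4, "Lyon": 0.35, "Nice": 0.25}))  # I don't know
    ```

    Though I guess the hard question is whether those confidence numbers track truth at all, rather than just how fluent the text sounds.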

    • LibertyLizard@slrpnk.net

      The issue is not that it doesn’t know everything, it’s that it doesn’t know anything. It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt, based on huge amounts of human text that it was trained on.
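
      As I understand it, the core loop is roughly this. A toy sketch, where `model` stands in for the trained network and isn’t any real library:

      ```python
      import random

      # Toy sketch of autoregressive generation: `model` maps the tokens
      # so far to a probability distribution over possible next tokens.
      def generate(model, tokens, max_new=50, stop=None):
          for _ in range(max_new):
              probs = model(tokens)                    # {token: probability}
              candidates = list(probs)
              weights = [probs[t] for t in candidates]
              nxt = random.choices(candidates, weights)[0]  # sample, don't "know"
              if nxt == stop:
                  break
              tokens.append(nxt)
          return tokens
      ```

      Nothing in that loop checks anything against reality; the output is just sampled from a distribution over plausible continuations.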

      Part of the issue is: how will you train the model to know which things in its training data are factual and which are not? An incredible amount of human curation already goes into just keeping the model from repeating offensive things, but the realm of facts is so, so much broader than that. I don’t see any way it could be done.

      But on the other hand I am only a casual observer of this technology and perhaps the experts will come up with a creative solution we can’t yet imagine.

      • jmp242@sopuli.xyz

        I think it’s very clear that this “stochastic parrot” idea is less and less accepted by researchers and philosophers, though maybe that’s only in the podcasts I listen to…

        It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt

        I think we need to be careful thinking we understand what human knowledge is, and careful about the connotations of the word “sense” there. If you mean GPT4 doesn’t have knowledge like humans have, the way a car doesn’t have motion like a human does, then I think we agree. But if you mean that GPT4 cannot reason and access and present information, that’s just false on the face of just using the tool, IMO.

        It’s also untrue that it’s predicting words; it’s using tokens, which are more like concepts than words, so I’d argue already closer to humans. To the extent it is just predicting stuff, it really calls into question the value of most of the school essays it writes so well now…

        • TheChurn@kbin.social

          A token is not a concept. A token is a word or word fragment that occurred often in free text and was assigned a number. Common words, prefixes, and suffixes make up the vast majority of tokens, and the rest are uncommon pairs of letters.

          The algorithm that generates tokens is essentially compression; there is no semantic meaning embedded in them.
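
          Concretely, most LLM tokenizers are built on byte-pair encoding (BPE) or close variants, and BPE is literally a compression trick: start from characters and repeatedly merge the most frequent adjacent pair. A toy sketch (the four-word corpus is made up):

          ```python
          from collections import Counter

          def merge_pair(word, a, b):
              out, i = [], 0
              while i < len(word):
                  if i + 1 < len(word) and (word[i], word[i + 1]) == (a, b):
                      out.append(a + b)    # fuse the pair into one new token
                      i += 2
                  else:
                      out.append(word[i])
                      i += 1
              return out

          def learn_merges(words, num_merges):
              vocab = [list(w) for w in words]        # start from single characters
              merges = []
              for _ in range(num_merges):
                  pairs = Counter()
                  for w in vocab:
                      pairs.update(zip(w, w[1:]))     # count adjacent pairs
                  if not pairs:
                      break
                  (a, b), _ = pairs.most_common(1)[0] # most frequent pair wins
                  merges.append(a + b)
                  vocab = [merge_pair(w, a, b) for w in vocab]
              return merges

          print(learn_merges(["low", "lower", "lowest", "slow"], 3))
          # -> ['lo', 'low', 'lowe']: frequent character runs become tokens
          ```

          Nothing in that loop looks at meaning; it only counts co-occurrences.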

          • jmp242@sopuli.xyz

            Yeah, that was a bad way to phrase it. I just meant that, from what I’ve heard, tokens are very much not word-by-word; sometimes one token is a couple of words, though maybe that was misinformation. And I was trying (and failing) to make an analogy to humans: a concept is a compression of what would otherwise be a bunch of words, though I kind of meant something more like a reference, I guess.

        • kciwsnurb@aussie.zone

          only in the podcasts I listen to

          Yes definitely. Many of my fellow NLP researchers would disagree with those researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).

          it’s using tokens, which are more like concepts than words

          You’re clearly not an expert so please stop spreading misinformation like this.

          • jmp242@sopuli.xyz

            Yes definitely. Many of my fellow NLP researchers would disagree with those researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).

            I’m not sure what you’re saying here: do you mean you do or don’t think LLMs are “stochastic parrots”?

            In any case, the reason I would care about philosophers’ opinions on LLMs is mostly that LLMs are already making “the masses” think they’re potentially sentient and/or would deserve personhood. What’s more concerning is that the academics who sort of define what thinking even is seem confused by LLMs, if you take the “stochastic parrot” POV. This eventually has real-world effects; it might take a decade or two, but these things spread.

            I think this is a crazy idea right now, but I also think that eventually we’ll need to have something like a TNG “Measure of a Man” trial about some AI, and I’d want to get that sort of thing right.

          • AVeryCleverName@lemmy.one

            researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).

            Philosophers may not represent an authority on the mechanics of LLMs, but in a discussion of the nature of consciousness (which is really what the stochastic parrot stuff is about), their opinion is as valid as anyone else’s, and they have one of the richer histories of conceptualizing it, long before more rigorous empirical disciplines could dream of doing so.

      • solanaceous@beehaw.org

        Sure, it’s hard to say whether a computer program can “know” anything or what that even means. But the paper isn’t arguing that. It assumes very little about how LLMs actually work, and it defines “hallucination” as “not giving the right answer”, with no option for the machine to answer “I don’t know”. Then the proof follows basically from the fact that the LLM-or-whatever can’t know everything.
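
        The shape of the argument is basically diagonalization. A toy version (my reading of it, not the paper’s actual formalism):

        ```python
        # For ANY fixed answering program, a "ground truth" can be defined
        # that disagrees with it on some input, so under the paper's
        # definition (wrong answer = hallucination, no "I don't know"
        # allowed) the program must hallucinate somewhere.
        def toy_llm(prompt: str) -> str:
            return "42"                  # any fixed, computable answerer

        def adversarial_truth(llm, prompt: str) -> str:
            # Define the right answer to be something the model did not say.
            return llm(prompt) + "!"     # differs from llm(prompt) by construction

        prompt = "some question"
        assert toy_llm(prompt) != adversarial_truth(toy_llm, prompt)
        ```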

        The result is not very surprising, and saying that it means hallucination is inevitable is an oversell. It’s possible that hallucinations, or at least wrong answers, are inevitable for different reasons though.