• CanadaPlus@lemmy.sdf.org
      5 days ago

      I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that’s not good enough, it’s easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you’re more interested in ignoring any empirical evidence, though.

      • Jtotheb@lemmy.world
        1 day ago

        That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

        • CanadaPlus@lemmy.sdf.org
          12 hours ago

          You can devise a task it couldn’t have seen in the training data, I mean.

> You don’t even have access to the “thinking” side of the LLM.

          Obviously, that goes for the natural intelligences too, so it’s not really a fair thing to require.