• csfirecracker@lemmyf.uk · 1 year ago

    This demonstrates, in a way a layman can understand, some of the shortcomings of LLMs as a whole, I think.

    • Nougat@kbin.social · 1 year ago

      I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they’re even dealing with “pieces of information” like that in the first place). They are also unable to tell whether a human questioner is being truthful, misleading, plainly lying, honestly mistaken, or nonsensical. Of course, which of those is the case matters in a conversation that ought to have its basis in fact.

  • treefrog@lemm.ee · 1 year ago

    So, just like real people, AIs hate telling people “I don’t know.”