• Cass.Forest@beehaw.org · 11 months ago

    There is, however, still the concept of the Chinese Room thought experiment, and I don’t think AI will topple that one for a while.

    For those who don’t know and don’t wish to browse off the site, the thought experiment posits a situation in which a man who does not understand Chinese sits in a room and is told to respond to sets of Chinese characters that come into the room. He has a little booklet of responses, all completely in Chinese, that he uses to send replies out of the room. The thought experiment asks whether the man himself, or even the system of the Chinese Room as a whole, can be said to understand Chinese.
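
    To make the setup concrete: the booklet is just a lookup table, so the whole room can be played by a few lines of code that map symbols to symbols with no understanding on either side. A toy sketch (the phrases below are placeholders, not a real phrasebook):

    ```python
    # A toy "Chinese Room": the operator maps incoming symbols to outgoing
    # symbols purely by lookup, with no understanding of either side.
    PHRASEBOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def operator(incoming: str) -> str:
        # Anything not in the booklet gets a stock "cannot answer" reply.
        return PHRASEBOOK.get(incoming, "（无法回答）")

    print(operator("你好吗？"))  # the room "answers" without understanding
    ```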

    With the Turing Test getting all of the media spotlight in AI, machine learning, and cognitive science, I think the Chinese Room should enter the conversation as the field looks toward artificial general intelligence (AGI).

    • jarfil@beehaw.org · 11 months ago

      The Chinese Room has already been surpassed by LLMs, which have been shown to contain neurons that activate in such high correlation with abstract concepts like “formal text” or “positive sentiment” that tweaking them is one of the options some LLM-based chatbots expose to the user.
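
      What a “neuron that correlates with a concept” can look like in practice, as a rough sketch: record activations for a batch of inputs, label each input for the concept (say, “formal text”), and score every neuron by its correlation with that label. The data below is synthetic and the setup is simplified; real interpretability work probes actual model activations:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-in for hidden activations: 200 inputs x 512 "neurons".
      activations = rng.normal(size=(200, 512))
      # Binary concept label per input, e.g. 1 = "formal text", 0 = not formal.
      concept = rng.integers(0, 2, size=200)
      # Plant one neuron (index 42) that genuinely tracks the concept.
      activations[:, 42] += 2.0 * concept

      # Pearson correlation of each neuron's activation with the concept label.
      a = (activations - activations.mean(axis=0)) / activations.std(axis=0)
      c = (concept - concept.mean()) / concept.std()
      corr = a.T @ c / len(concept)

      best = int(np.argmax(np.abs(corr)))
      print("most concept-aligned neuron:", best, "r =", round(float(corr[best]), 3))
      ```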

      Analysis of the activation space has also shown that LLMs cluster sequences of text representing similar concepts close to each other, which lets them produce reasonably accurate zero-shot responses to queries that were never in the training set (that “weren’t in the book” for the Chinese Room).
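
      A simple way to see the clustering claim, using sentence embeddings as a stand-in for LLM hidden states (this assumes the sentence-transformers package and its all-MiniLM-L6-v2 model, and is a sketch rather than the analyses referenced above):

      ```python
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")

      sentences = [
          "The weather is lovely and sunny today.",       # weather
          "It looks like it might rain this evening.",    # weather
          "The stock market dropped sharply on Monday.",  # finance
      ]
      embeddings = model.encode(sentences)

      # Pairwise cosine similarity: the two weather sentences should score
      # higher with each other than either does with the finance sentence.
      print(util.cos_sim(embeddings, embeddings))
      ```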

      • howrar@lemmy.ca · 11 months ago

        I don’t understand what you mean by “The Chinese Room has already been surpassed by LLMs”. It’s not a test that can be surpassed. It’s just a thought experiment.

        In any case, you do bring up a good point. Perhaps this understanding lies in the organization of the information. If you have a Chinese Room where all the query-response pairs are in arbitrary order, then maybe you wouldn’t consider that to be understanding. But if the data is organized so that similar queries and responses sit close to each other, and the person in the room doing the answering can make a mistake, such as accidentally copying out the entry next to the correct response, and the output still makes sense, then maybe we can consider that system to have a better claim to understanding.
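
        That “organized booklet” reading is essentially nearest-neighbour lookup in an embedding space: if similar queries sit next to each other, grabbing the neighbouring entry by mistake still gives an on-topic answer. A toy sketch with made-up 2-D “embeddings” (hypothetical data, not a real model):

        ```python
        import numpy as np

        # Toy "organized booklet": entries on the same topic have nearby vectors.
        entries = [
            ("How is the weather today?", "It is sunny and warm.", np.array([1.0, 0.1])),
            ("Will it rain tomorrow?", "Light rain is expected.", np.array([0.9, 0.2])),
            ("What is the capital of France?", "Paris.", np.array([0.1, 1.0])),
        ]

        def answer(query_vec, slip=0):
            # Return the response of the nearest entry; `slip` simulates the
            # operator accidentally copying out a neighbouring entry instead.
            dists = [np.linalg.norm(query_vec - vec) for _, _, vec in entries]
            order = np.argsort(dists)
            return entries[order[min(slip, len(order) - 1)]][1]

        query = np.array([0.98, 0.12])   # a weather-ish query embedding
        print(answer(query))             # nearest entry: about the weather
        print(answer(query, slip=1))     # the "mistake" is still about the weather
        ```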