• NaNin@lemmy.dbzer0.com
    5 days ago

    Do you think that our current iteration of A.I. can have these kinds of gains? Like, what if the extreme increase happens beyond our lifetimes? Or beyond the lifetime of our planet?

    • agamemnonymous@sh.itjust.works
      5 days ago

      I think we can’t know, but LLMs definitely feel like a notable acceleration. Exponential functions are also, well, exponential: the growth compounds, so the bigger it gets, the faster it grows. The exponential part is gonna come from meta-models coordinating multiple specialized models to complete complex tasks. Once we get a powerful meta-model, we’re off to the races: AI models developing AI models.
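      The compounding point can be made concrete with a toy sketch (purely illustrative, not anyone's model of AI progress): constant gains add up linearly, while gains proportional to the current value compound.

      ```python
      # Toy comparison: linear vs. compounding (exponential) growth.
      # Illustrative numbers only; the 10-per-step and doubling rates are arbitrary.
      linear, compounding = 0, 1
      for step in range(10):
          linear += 10        # constant gain each step
          compounding *= 2    # gain proportional to current size

      print(linear)       # 100
      print(compounding)  # 1024 -- already ahead after 10 steps, and pulling away
      ```

      The gap only widens from there: ten more steps leaves the linear process at 200 while the compounding one passes a million.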

      It could take 50 years, it could take 5, it could happen this Wednesday. We won’t know which development is going to be the one to tip us over the edge until it happens, and even then only in retrospect. But it could very well be soon.

    • leftzero@lemmynsfw.com
      5 days ago

      No, LLMs have always been an evident dead end when it comes to general AI.

      They’re hampering research in actual AI, and the fact that they’re being marketed as AI ensures that no one will invest in actual AI research for decades after the bubble bursts.

      We were on track for a technological singularity in our lifetimes, until those greedy bastards derailed us and murdered the future by poisoning the Internet with their slop for some short term profits.

      Now we’ll go extinct due to ignorance and global warming long before we have time to invent something smart enough to save us.

      But hey, at least for a little while their line did go up, and that’s all that matters, it seems.