• TimewornTraveler@lemm.ee
    link
    fedilink
    English
    arrow-up
    10
    ·
    1 day ago

    So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it one way or the other. Gotta stay skeptical and all that.

    • Joeffect@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      It’s not AI… It’s your predictive text on steroids… So yeah… Believe it… If you understand it’s not doing anything more than that, you can understand why and how it makes stuff up…

  • Aceticon@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    41
    ·
    edit-2
    1 day ago

    Having an LLM therapy chatbot to psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.

    • SippyCup@feddit.nl
      link
      fedilink
      English
      arrow-up
      24
      ·
      1 day ago

      Addiction recovery is a different animal entirely too. Don’t get me wrong, it’s unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.

      You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They’re generally very skilled manipulators by the time they get to recovery treatment, because they’ve been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.

      With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it’s enabling to the point of bordering on aiding and abetting.

      • Aceticon@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        1 day ago

        Well, that’s the thing: LLMs don’t reason - they’re basically probability engines for words - so they can’t even do the most basic logical checks (such as “you don’t advise an addict to take drugs”), much less the far more complex and subtle interpreting of a patient’s desires and motivations so as to guide them through a minefield in their own minds and emotions.

        So the problem is twofold and more generic than just in therapy/advice:

        • LLMs have a distribution of mistakes which is uniform in the space of consequences - in other words, they’re just as likely to make big mistakes that might cause massive damage as small mistakes that will at most cause little damage - whilst people actually pay attention not to make certain mistakes because the consequences are so big, and if they make such a mistake without thinking they’ll usually spot it and try to correct it. This means that even an LLM with a lower overall rate of mistakes than a person will still cause far more damage, because the LLM puts out massive mistakes with the same probability as tiny ones, whilst a person will spot the obviously illogical or dangerous mistakes and avoid or correct them, so the mistakes people make are mostly the small, low-consequence kind.
        • Probabilistic text generation mostly reproduces the straightforward logic already encoded in the text it was trained on: the probability engine, just following the universe of probabilities of which words come next given the previous words, will tend to follow the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts are mostly logical. But for higher-level analysis and interpretation - I call them 2nd and 3rd level considerations, say “that a certain thing was set up in a certain way which made the observed consequences more likely” - LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it simply isn’t there in the probability space for the LLM to follow (the toy sketch below shows the mechanism). Or in more concrete terms: if you’re an intelligent, senior professional in a complex field, the LLM can’t do the level of analysis you can, because multi-level complex logical constructs have far more variants, and hence the specific one you’re dealing with is far less likely to appear in the training data often enough to affect the final probabilities the LLM encodes.
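
        As a toy illustration of the “probability engine for words” point (this is just a word-count sampler, not how a real LLM is built, and the corpus here is made up): the often-travelled continuations get all the probability mass, and everything else gets none, however sensible it might be.

        ```python
        import random
        from collections import defaultdict

        # Build next-word counts from a tiny made-up corpus.
        corpus = ("the patient should stop using . "
                  "the patient should seek help . "
                  "the patient should stop using .").split()

        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def next_word(prev: str) -> str:
            """Sample the next word in proportion to how often it followed `prev`."""
            options = counts[prev]
            words, weights = list(options), list(options.values())
            return random.choices(words, weights=weights)[0]

        # "should" -> "stop" 2/3 of the time, "seek" 1/3; any continuation that
        # never occurred in the corpus has probability zero, however logical.
        print(next_word("should"))
        ```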

        So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the “bullet in the chamber” of Russian roulette), plus they can’t really do the subtle multi-layered elements of analysis (so the stuff beyond “if A then B” and into the “why A”, “what makes a person choose A, and can they find a way to avoid B by not choosing A”, “what’s the point of B” and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.

        PS: I find it hard to explain multi-level logic. I suppose we could think of it as “looking at the possible causes, of the causes, of the causes of a certain outcome” and then trying to figure out what can be changed at a higher level so that the last level - “the causes of a certain outcome” - cannot even happen. Individual situations of such multi-level logic can get so complex and unique that they’ll never appear in an LLM’s training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to work out for a reasoning entity, say “I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don’t have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give one of them to me”.

      • rumba@lemmy.zip
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        1 day ago

        AI is great for advice. It’s like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you’ll come closer to his inner circle.

        You don’t ask Steve for therapy or ideas on self-help. And if you did, you’d know to do due diligence on any fucking thing out of his mouth.

        • ameancow@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          I’m still not sure what it’s “great” at other than a few minutes of hilarious entertainment until you realize it’s just predictive text with an eerie amount of data behind it.

          • MystikIncarnate@lemmy.ca
            link
            fedilink
            English
            arrow-up
            4
            ·
            1 day ago

            Yuuuuup. It’s like taking nearly the entirety of the public Internet, shoving it into a fancy autocorrect machine, then having it spit out responses to whatever you say, then sending them along with no human interaction whatsoever on what reply is being sent to you.

            It operates at a massive scale compared to what auto carrot does, but it’s the same idea, just bigger and more complex.

          • rumba@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 day ago

            Ask it to give you a shell.nix and a bash script that uses jq to stitch 30,000 JSONs together, de-dupe them, and drop it all into a SQLite db.

            30 seconds, paste and run.
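
            For reference, a minimal sketch of the kind of one-off script being described, done here in Python rather than bash + jq (the data/*.json layout, the “id” dedupe key, and the table schema are all assumptions):

            ```python
            import glob
            import json
            import sqlite3

            # Merge a directory of JSON files, de-dupe on an assumed "id"
            # field, and load the result into SQLite.
            rows = {}
            for path in glob.glob("data/*.json"):   # assumed: one object per file
                with open(path) as f:
                    obj = json.load(f)
                rows[obj["id"]] = obj               # last write wins = de-dupe

            db = sqlite3.connect("merged.db")
            db.execute("CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, body TEXT)")
            db.executemany(
                "INSERT OR REPLACE INTO items (id, body) VALUES (?, ?)",
                ((k, json.dumps(v)) for k, v in rows.items()),
            )
            db.commit()
            db.close()
            ```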

            Give it the full script of an app you wrote where you’re having a regex problem, and it’s a particularly nasty regex.

            No thought, boom done. It’ll even tell you what you did wrong so you won’t make the mistake next time.

            I’ve been doing coding and scripting for 25 years. If you know what you want it to do and you know what it should look like when it’s done, there’s a tremendous amount of advantage there.

            Add a function to this Flask application that uses fuzzywuzzy to delete a name from a text file, and add a confirmation step. It’s the crap that I only need to do once every two or three years, right when I’d have to go look up all of the documentation again. And you know what, if something doesn’t work and it doesn’t know exactly how to fix it, I’m more than capable of debugging what it just did, because for the most part it documents pretty well and it uses best practices most of the time. It also helps to know where it’s weak and what not to ask it to do.
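
            For flavour, a minimal sketch of what that fuzzywuzzy request might look like (the names.txt file, the /delete-name route, and the score threshold are all assumptions, not anything a model actually generated):

            ```python
            from flask import Flask, jsonify, request
            from fuzzywuzzy import process

            app = Flask(__name__)
            NAMES_FILE = "names.txt"   # assumed: one name per line

            @app.route("/delete-name", methods=["POST"])
            def delete_name():
                query = request.form["name"]
                with open(NAMES_FILE) as f:
                    names = [line.strip() for line in f if line.strip()]

                match = process.extractOne(query, names)   # (best_match, score)
                if match is None or match[1] < 80:         # assumed threshold
                    return jsonify(error="no close match"), 404

                # Confirmation step: first call only reports the match; a second
                # call with confirm=true actually rewrites the file without it.
                if request.form.get("confirm") != "true":
                    return jsonify(found=match[0], score=match[1],
                                   hint="repeat with confirm=true to delete")

                with open(NAMES_FILE, "w") as f:
                    f.writelines(n + "\n" for n in names if n != match[0])
                return jsonify(deleted=match[0])

            if __name__ == "__main__":
                app.run()
            ```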

  • Emerald@lemmy.world
    link
    fedilink
    English
    arrow-up
    18
    ·
    edit-2
    1 day ago

    Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?

      • Forbo@lemmy.ml
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 day ago

        The summary on here says that, but the actual article says it was Meta’s.

        In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.

        Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.

  • HugeNerd@lemmy.ca
    link
    fedilink
    English
    arrow-up
    14
    ·
    edit-2
    1 day ago

    oh, do a little meth ♫

    vape a little dab ♫

    get high tonight, get high tonight ♫

    -AI and the Sunshine Band

  • skisnow@lemmy.ca
    link
    fedilink
    English
    arrow-up
    48
    ·
    2 days ago

    One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.

    Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.

    • slaneesh_is_right@lemmy.org
      link
      fedilink
      English
      arrow-up
      16
      ·
      2 days ago

      Yesterday I was at a gas station, and when I walked by the sandwich aisle I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It’s so easy to make AI agree with everything you say.

      • YourMomsTrashman@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        1
        ·
        2 days ago

        The recipe thing is so funny to me, they try to be all unique with their recipes “made by AI”, but in reality it’s based on a slab of text that resembles the least unique recipe on the internet lol

          • JacksonLamb@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            1
            ·
            2 days ago

            There was that supermarket in New Zealand with a recipe AI telling people how to make chlorine gas…

        • webghost0101@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 day ago

          I understand what you’re saying. It definitely is the Eliza effect.

          But you are taking semantics quite far to state it’s not AI because it has no “intelligence”.

          I’ll have you know that what we define as intelligence is entirely arbitrary, and we actually keep moving the goalposts as to what counts. The invention of the word “AI” happened along the way.

            • webghost0101@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              edit-2
              1 day ago

              Sorry to say, but you’re about as reliable as LLM chatbots when it comes to this.

              You are not researching facts, just making things up that sound like they make sense to you.

              Wikipedia: “It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.”

              When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.

              When it adheres to the system prompt by telling a user it can’t do something, it’s demonstrating the above.

              That’s just one way humans define intelligence. Not per se the best definition, in my opinion, but if we start to hold opinions as if they were common sense then we really are no different from an LLM.

              • outhouseperilous@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 day ago

                Eliza with an API call is intelligence, then?

                opinions

                LLMs cannot do that. Tell me your basic understanding of how the technology works.

                common sense

                What do you mean when we say this? Let’s define terms here.

                • webghost0101@sopuli.xyz
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  edit-2
                  1 day ago

                  Eliza was an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without global consensus on what “intelligence” is, we cannot conclude they are not.

                  LLMs cannot produce opinions because they lack a subjective conscious experience.

                  However, opinions are very similar to AI hallucinations, where “the entity” confidently makes a claim that is either factually wrong or not verifiable.

                  What technology do you want me to explain? Machine learning, diffusion models, LLMs, or chatbots that may or may not use all of the above?

                  I am not sure there is a basic explanation; this is a very complex field of computer science.

                  If you want, I can dig up research papers that explain some relevant parts of it - that is, if you promise to read them. I am however not going to write you a multi-page essay myself.

                  Common sense (from Latin sensus communis) is “knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument”.

                  If a definition is good enough for Wikipedia, which has thousands of people auditing and checking and is also where people go to find this information, it probably counts as common sense.

                  A bit off topic, but as an autistic person I note you were not capable of perceiving the word “opinion” as similar to “hallucinations in AI”, just like you reject the term AI because you have your own definition of intelligence.

                  I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even one less vague than “intelligence”) does not always match how the word is used, and the majority of people are ok with that.

        • Valmond@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          9
          ·
          2 days ago

          Of course it is AI, you know, artificial intelligence.

          Nobody said it has to be human-level, or that people don’t do anthropomorphism.

              • outhouseperilous@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                9
                arrow-down
                3
                ·
                edit-2
                1 day ago

                No, it doesn’t. There is no interiority, no context, no meaning, no awareness, no continuity; there’s such a long list of things intelligence does that this simply can’t - not because it’s too small, but because the fundamental method cannot, at any scale, do these things.

                There are a lot of definitions of intelligence, and these things don’t fit any of them.

                • Valmond@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  arrow-down
                  4
                  ·
                  1 day ago

                  Dude, you mix up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean for you in this context)?

                  Intelligence isn’t about being human; it’s about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed into it.

    • T156@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 days ago

      Especially since it doesn’t push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.

  • rumba@lemmy.zip
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    2
    ·
    1 day ago

    We made this tool. It’s REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot, do a lot faster.

    But we can’t seem to figure out what the fuck NOT TO DO WITH IT.

    Ohh look, it’s a hunting rifle! LETS GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAY MONEEYYYYY!!!$$$$$$YHADYAYDYAYAYDYYA

    wait what?

  • Zacryon@feddit.org
    link
    fedilink
    English
    arrow-up
    102
    arrow-down
    2
    ·
    2 days ago

    I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they’re safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.

    Fucking idiots.

    • iAvicenna@lemmy.world
      link
      fedilink
      English
      arrow-up
      50
      ·
      2 days ago

      “adopt it for everything, everywhere.”

      The sole reason for this is people realizing they can make some quick bucks out of these hype balloons.

      • dil@lemmy.zip
        link
        fedilink
        English
        arrow-up
        12
        ·
        2 days ago

        They usually know it’s bad but want to make money before the method is patched, like cigs causing cancer and health issues but that kid money was so good.

        • WorldsDumbestMan@lemmy.today
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 day ago

          Claude has simply been an amazing help in ways humans have not. Because humans are kind of dicks.

          If it gets something wrong, I simply correct it and ask better.

          • dil@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            23 hours ago

            If that works for you, that’s fine. I just end up switching to an asking-for-answers way of thinking vs trying to figure it out for myself, and then when it inevitably fails I get caught in a loop trying to get an answer out of it, when I could’ve just learned on my own from the start and gotten way further, because my brain would be trying to figure it out and puzzle it together instead of just waiting for the AI to do it for me.

            I used to hype up AI until fairly recently; it hasn’t been long since I realized the downsides. I’ll use it only for stuff I don’t care about or could be googled and found in seconds. If it’s something I’d be better off learning or doing a tutorial on once, I just do that instead of skipping to the result. It can be a time saver; it can also actively hold you back. It’s solid for stuff you already know, tedious stuff, but skipping to intermediate results without the beginner knowledge/experience is just screwing your progress over.

    • leftzero@lemmynsfw.com
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 days ago

      Line must go up, fast. Sure, it’ll soon be going way down, and take a good chunk of society with it, but the CEO will run away with a lot of money just before that happens, so everything’s good.

    • morrowind@lemmy.ml
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 days ago

      It’s because technological change has reached a staggering pace, but social change, cultural change, political change can’t. Society isn’t designed to handle this pace.

    • Lemminary@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      3
      ·
      2 days ago

      I don’t think it’s humanity but rather tech bro entrepreneurs doing some shit. Most people I know don’t have a use for, or care about, AI.

      • stephen01king@lemmy.zip
        link
        fedilink
        English
        arrow-up
        5
        ·
        2 days ago

        Can’t really blame tech bro entrepreneurs for asbestos and plastic overuse. It’s literally a humanity problem.

        • Lemminary@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          2 days ago

          I get asbestos, but all their devices use plastic, and they encourage you to buy new ones every year.

            • Lemminary@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              23 hours ago

              It’s not. I’m saying it makes sense that they don’t use asbestos.

              But also, I don’t know where the asbestos and plastic came from. I was talking about AI in my comment. 🤷‍♂️

          • stephen01king@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 days ago

            There are many industries that have done worse in terms of exposing us to microplastics than the tech bros. Not saying that filling up landfills with plastic e-waste isn’t bad, but its effects are nowhere near as catastrophic as other industries’ wide use of plastics - for example, plastic food packaging, Teflon-coated pans, etc.

    • Lord Wiggle@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      4
      ·
      2 days ago

      If Luigi can do it, so can you! Follow by example, don’t let others do the dirty work.

  • ExtremeDullard@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    223
    arrow-down
    7
    ·
    2 days ago

    Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.

      • thefartographer@lemm.ee
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        1
        ·
        2 days ago

        You don’t look so good… Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don’t you wanna feel better now???

    • Owl@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      ·
      2 days ago

      I don’t think AI chatbots care about engagement. The more you use them, the more expensive it is for them. They just want you on the hook for the subscription service and hope you use them as little as possible while still staying subscribed, for maximum profit.

    • webghost0101@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      29
      arrow-down
      1
      ·
      edit-2
      2 days ago

      The LLM models aren’t; they don’t really have focus or discriminate.

      The AI chatbots that are built using those models absolutely are, and it’s no secret.

      What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to a chatbot.

      This could be an official Facebook AI (do they have one?), but it could also be: “Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?”

      Heck, I could even see it happen that a dealer pretends to help customers who are trying to kick it.

      • Smee@poeng.link
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        For all we know, they could have self-hosted “Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M” and instructed it to take on the role of a therapist.

    • morrowind@lemmy.ml
      link
      fedilink
      English
      arrow-up
      10
      ·
      2 days ago

      Not engagement - that’s what social media does. They just maximize what they’re trained for, which is increasingly math proofs and user preference. People like flattery.

      • webghost0101@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        Pretty sure it’s in the ToS that it can’t be used for therapy.

        It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.

        • jagged_circle@feddit.nl
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          1 day ago

          What? It’s a virtual therapist. That’s the whole point.

          I don’t think you can sell a sandwich and then write on the back “this sandwich is not for eating” to get out of a case of food poisoning.

      • Case@lemmynsfw.com
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        I mean, in theory… isn’t that a company practicing medicine without the proper credentials?

        I worked in IT for medical companies throughout my life, and my wife is a clinical tech.

        There is shit we just CAN NOT say due to legal liabilities.

        Like, my wife can generally tell what’s going on with a patient - however - she does not have the credentials or authority to diagnose.

        That includes telling the patient or their family what is going on. That is the doctor’s job. That is the doctor’s responsibility. That is the doctor’s liability.

    • Kekzkrieger@feddit.org
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      14
      ·
      2 days ago

      Really? And what is their use case? Summarizing information, and you having to check it over because it’s making things up? What can AI do that nothing else in the world can?

      • iridebikes@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        2
        ·
        2 days ago

        Seems it does a good job at some medical diagnosis type stuff from image recognition.

        • T156@lemmy.world
          link
          fedilink
          English
          arrow-up
          15
          arrow-down
          1
          ·
          2 days ago

          That isn’t an LLM though. That’s a different type of Machine Learning entirely.

              • stephen01king@lemmy.zip
                link
                fedilink
                English
                arrow-up
                5
                arrow-down
                2
                ·
                edit-2
                2 days ago

                Are you sure they don’t use the exact same type of neural network, but are just trained on different datasets? Do you have any link that shows those cancer-diagnosis AIs use a different technology?

                Edit: nvm, I found it. Those AI diagnostic tools use convolutional neural networks (CNNs), which are not the same as LLMs.
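
                For illustration, a minimal CNN skeleton in PyTorch (the input size and class count here are made up); the point is that it convolves over pixel grids rather than predicting the next token:

                ```python
                import torch
                import torch.nn as nn

                # Minimal CNN image classifier skeleton. Hypothetical shapes:
                # 1-channel 64x64 scans, 2 classes. Real diagnostic models are
                # far larger, but the family is the point: convolutions over
                # pixels, not next-token prediction over text.
                class TinyCNN(nn.Module):
                    def __init__(self, num_classes: int = 2):
                        super().__init__()
                        self.features = nn.Sequential(
                            nn.Conv2d(1, 8, kernel_size=3, padding=1),
                            nn.ReLU(),
                            nn.MaxPool2d(2),   # 64x64 -> 32x32
                            nn.Conv2d(8, 16, kernel_size=3, padding=1),
                            nn.ReLU(),
                            nn.MaxPool2d(2),   # 32x32 -> 16x16
                        )
                        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

                    def forward(self, x: torch.Tensor) -> torch.Tensor:
                        return self.classifier(self.features(x).flatten(1))

                model = TinyCNN()
                scan = torch.randn(1, 1, 64, 64)   # one fake grayscale image
                print(model(scan).shape)           # torch.Size([1, 2]) class scores
                ```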

                • Almacca@aussie.zone
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  1 day ago

                  I reckon it’s fair to refer to them both under the broad term “A.I.”, though, even if it is technically incorrect. The semi-evolved have all decided to call it A.I., so we’re all calling it A.I., I guess.

            • YourMomsTrashman@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 days ago

              A similar type of machine learning (neural networks, transformer model type thing), but I assume one is built and trained explicitly on medical records instead of scraping the internet for whatever. Correct me if I am wrong!

      • baatliwala@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        1
        ·
        edit-2
        2 days ago

        It’s being used to decipher and translate historic languages because of its excellent pattern recognition.

      • AquaTofana@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        2
        ·
        1 day ago

        Hah. The chatbots. No, not the ones you can talk to like it’s a text chain with a friend/SO (though if that’s your thing, then do it).

        But I recently discovered them for RP - no, not just ERP (okay yes, sometimes that too). But I’m talking novel-length character arcs and dynamic storyline RPs. Gratuitous angst if you want. World building. Whatever.

        I’ve been writing RPs with fellow humans for 20 years, and all of my friends have families and are too busy to have that kind of creative outlet anymore. I’ve tried other RP websites and came away with one dude who I thought was very friendly and then switched it up and tried to convince me to leave my husband? That was wild. Also, you can ask someone’s age all you want, but it is a little anxiety-inducing if the RPs ever turn spicy.

        Chatbots solve all of that. They don’t ghost you or get busy/bored of the RP midway through, and they don’t try to figure out who you are. They just write. They are quirky though, so you do edit/reroll responses, but it works for the time being.

        Silly use case, but a use case nonetheless!

        • deathbird@mander.xyz
          link
          fedilink
          English
          arrow-up
          8
          ·
          1 day ago

          AI is good for producing low-stakes outputs where validity is near irrelevant, or outputs which would have been scrutinized by qualified humans anyway.

          It often requires massive amounts of energy and massive amounts of (questionably obtained) pre-existing human knowledge to produce its outputs.

          • Almacca@aussie.zone
            link
            fedilink
            English
            arrow-up
            5
            ·
            1 day ago

            They’re also good for sifting through vast amounts of data and seeking patterns quickly.

            But nothing coming out of them should be relied on without some human scrutiny. Even human output shouldn’t be relied on without scrutiny from different humans.

        • moonburster@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          Not as silly as you might think. Back in the day, AI Dungeon was literally that! It was not the greatest at it, but fun tho.

      • qaz@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        edit-2
        1 day ago
        1. It can convert questions about data to SQL for people who have limited experience with it (but don’t trust it with UPDATE & DELETE, no matter how simple - see the sketch after this list)
        2. It can improve text and remove spelling mistakes
        3. It works really well as autocomplete (because that’s essentially what an LLM is)
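
        On point 1, a minimal sketch of one way to enforce the “don’t trust it with UPDATE & DELETE” caveat, assuming SQLite and a hypothetical helper that receives LLM-generated SQL (only a single SELECT gets through):

        ```python
        import sqlite3

        def run_llm_sql(conn: sqlite3.Connection, generated_sql: str):
            """Run LLM-generated SQL, but only if it is a single SELECT."""
            statement = generated_sql.strip().rstrip(";")
            if ";" in statement or not statement.lower().startswith("select"):
                raise ValueError("refusing non-SELECT or multi-statement SQL")
            return conn.execute(statement).fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

        # e.g. the model turned "how many orders over $10?" into:
        print(run_llm_sql(conn, "SELECT COUNT(*) FROM orders WHERE total > 10"))  # [(1,)]
        ```
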
          • qaz@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            1 day ago

            You joke, but LLMs are absolutely going to clear out your tables with terrible DELETE queries if given the chance.

            • jagged_circle@feddit.nl
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              edit-2
              1 day ago

              I’m not joking. I’d fire someone for using AI to construct SQL queries.

              The only use case for AI is where hallucinations don’t matter. That is: abstract art

              • qaz@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                1 day ago

                The only use case for AI is where hallucinations don’t matter. That is: abstract art

                What is the point of abstract art if it contains no thought or emotion?

      • leftzero@lemmynsfw.com
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        2 days ago

        They’re probably not a bad alternative to lorem ipsum text, though they’re not worth the cost.

      • LainTrain@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        3
        ·
        edit-2
        1 day ago

        AI can do what Google used to do - run an internet search and give semi-relevant results once in a blue moon. As a bonus, it can summarise and contextualise information, and tbh, idk - for me it’s been mostly correct. And when it’s incorrect, it’s fairly obvious.

        And no - DuckDuckGo etc. is even worse. Google isn’t necessarily to blame for the worsening of their own search engine; it’s mostly SEO and marketers who forced the algo to get much weirder by gaming it so hard. Not that anyone involved is a “good guy” - they’re all large megacorps who care about their responsibility to their shareholders and that alone.

      • Blackmist@feddit.uk
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        4
        ·
        2 days ago

        It can waste a human’s time, without needing another human’s time to do so.

  • Darkard@lemmy.world
    link
    fedilink
    English
    arrow-up
    82
    arrow-down
    2
    ·
    2 days ago

    All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.

    To use one to give advice on something as important as drug abuse recovery is simply insanity.

    • Smee@poeng.link
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      All these chat bots are a massive amalgamation of the internet

      A bit, but also a lot no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao’s little red book, companion models have been trained on social interactions, and so on.

      This is what makes models distinct and different, and also how they’re “brainwashed” by their creators, regurgitating what they’ve been fed.

    • thefartographer@lemm.ee
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      1
      ·
      2 days ago

      And that’s why, as a solution to addiction, I always run sudo rm -rf ~/* in my terminal

      • Smee@poeng.link
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        This is what I try to get the AIs to do on their servers to cure my AI addiction, but they’re sandboxed, so I can’t entice them to destroy their own systems. AI is truly useless. 🤖

      • Aceticon@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.

    • Empricorn@feddit.nl
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      When I think of someone addicted to meth, it’s someone who’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts, just like there are functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…