ChatGPT has a style-over-substance trick that seems to dupe people into thinking it’s smart, researchers found: developers often prefer ChatGPT’s responses about code to those written by humans, despite the bot frequently being wrong.

  • joe@lemmy.world · ↑82 ↓2 · 1 year ago

    A caveat: this user analysis involved just 12 programmers, who were asked whether they preferred ChatGPT’s responses or human-written Stack Overflow answers to 2,000 randomly sampled questions.

    Nothing to see here.

    • abhibeckert@lemmy.world · ↑18 ↓1 · 1 year ago

      What they should have done is ask those same 12 programmers to post a common everyday question on Stack Overflow and then, while waiting for a response, ask ChatGPT the same question.

      I’d bet 50 bucks almost all of them would get an acceptable answer to their question out of ChatGPT 4 in far less time than it takes the moderators at Stack Overflow to delete the question. I can’t imagine any of the questions will actually be answered on SO.

      • joe@lemmy.world · ↑8 ↓3 · 1 year ago

        Right. The problem with SO is that you don’t actually get to ask any questions, so reason would suggest anything is at least as good as SO, even asking a house plant, or Siri, or whatever. Something that actually answers your question would obviously be a better option.

        Stack Overflow brought their irrelevance on themselves, I suspect.

    • HellAwaits@lemm.ee · ↑1 ↓5 · 1 year ago

      This is the one profession I don’t mind AI taking jobs away from. Maybe that’s a bit harsh, but I’m so sick and tired of the clickbait BS.

      • joe@lemmy.world · ↑2 ↓1 · 1 year ago

        This is the inevitable result of the decision to fund the internet at large via ads. And there would be (has been) tremendous friction from users when it comes to switching from ad-based to subscription, so we might just be stuck with it.

  • sj_zero@lotide.fbxl.net · ↑15 ↓3 · 1 year ago

    Anyone who has actually needed a correct answer to a question realized this a long time ago.

    The problem is that most people don’t bother checking the answers.

    • GenderNeutralBro@lemmy.sdf.org · ↑4 · 1 year ago

      If you need a correct answer, you’re doing it wrong!

      I’m joking of course, but there’s a seed of truth: I’ve found ChatGPT’s wrong or incomplete answers to be incredibly helpful as a starting point. Sometimes it will suggest a Python module I didn’t even know about that does half my work for me. Or sometimes it has a lot of nonsense but the one line I actually need is correct (or close enough for me to understand).

      Nobody should be copying code off Stack Overflow without understanding it, either.

    • sumofchemicals@lemmy.world · ↑2 ↓2 · 1 year ago

      This hasn’t been my experience. Yes, ChatGPT gets stuff wrong, and fairly regularly. But I can ask it my question directly, include sample code, and get an answer immediately. Anyone going to Stack Overflow has to either Google around and sift through answers for relevance, or post the question and wait for someone to respond.

      With either ChatGPT or Stack Overflow you have to check the answer to make sure it works; that’s how coding goes. But with ChatGPT I know whether it works pretty much immediately, with a fairly low investment of time and effort. And if it doesn’t, I just rephrase the question, or literally say “that doesn’t seem to work, now I’m getting this error: $error”.

      • sj_zero@lotide.fbxl.net · ↑1 · 1 year ago

        When it gets stuff wrong, though, it doesn’t just get stuff wrong, it makes stuff up entirely. I’ve seen it invent entire APIs, generate legal citations out of whole cloth, and cite entire laws that don’t exist. I’ve seen it very confidently tell me to run a command that clearly doesn’t work; if it did work, I wouldn’t have been asking the question in the first place.

        But I don’t think the real alternative to ChatGPT is even Stack Overflow; it’s an expert. Given the choice between the two, you’d want the expert every time.

        • sumofchemicals@lemmy.world · ↑1 · 1 year ago

          You’re right that it completely fabricates stuff. And even with that reality, it improves my productivity, because I can take multiple swings and still be faster than Googling (and sometimes I just can’t find an answer by Googling at all).

          Of course, you’ve got to know that’s how the tool works, and some people are hyping it and acting like it’s useful in all situations. And there are scenarios where I don’t know enough about the subject to ask the right question, or to realize how incorrect the answer is.

          I only commented because you said you can’t get the correct answer and that people don’t check the answer, both of which I know from my and my friends’ actual usage is not the case.

      • the_medium_kahuna@lemmy.world · ↑8 ↓2 · 1 year ago

        But the fact is that you need to check every time to be sure it isn’t the rare inaccuracy. Even if it could cite sources, how would you know it was interpreting the source’s statements accurately?

        IMO, it’s useful for outlining and getting ideas flowing, but beyond that high level, the utility falls off pretty quickly.

        • DreamButt@lemmy.world · ↑3 · 1 year ago

          Yeah, it’s great for exploring options. Anything purely textual is good enough to give you a general idea, and more often than not it will catch a mistake in its explanation if you ask for clarification. But actual code? Nah, it’s about 50/50 whether it gets it right the first time, and even then the style is never to my liking.

  • 1984@lemmy.today · ↑11 ↓1 · 1 year ago

    This was the first thing I’ve noticed on day one. The way it “speaks” is designed to sound like a polite authority in the field.

  • neptune@dmv.social · ↑11 ↓1 · 1 year ago

    When you underpay a bunch of gig workers to rate the outputs? Obviously it’s going to write in a manner that best BS’s a layperson.

    Would be too expensive to hire experts in every field to train the AI to actually do good work. Imagine paying software engineers 100k plus benefits to vote on its code outputs, or getting Miss Manners to comment on its etiquette suggestions.

  • BellaDonna@mujico.org · ↑6 ↓2 · 1 year ago

    It’s the same way people are convinced I’m way smarter than I actually am: it’s how I construct sentences and respond, the words I choose, not so much the substance and veracity of what I say.

  • givesomefucks@lemmy.world · ↑11 ↓9 · 1 year ago

    It’s like crypto, or really any other con job.

    It makes idiots feel smart.

    Make a mark feel like they’re smart, and they’ll become attached to the idea and defend it to their death. Because the alternative is they aren’t really smart and fell for a scam.

    When smart people try to explain that to the idiots, it just makes them defend the scam even harder.

    Try to tell people ChatGPT isn’t great, and they just ramble on about nonsensical stuff they don’t even understand themselves, then claim anyone who disagrees just isn’t smart enough to get it.

    It’s a great business plan if you have zero morals, which is why the method never really goes away, just moves to another product.

    • FredericChopin_@feddit.uk · ↑13 ↓1 · 1 year ago

      “ChatGPT is great” depends on what context you’re talking about. Use it for generating mock data and I’d say it’s pretty great.

      Getting help with my code as a professional software developer is pretty decent and much better than Google or Stack.

      Getting it to tell me about Physics, not great in my situation as I don’t know enough about physics to know where it’s wrong.

      Point being, it can be a great TOOL to aid you in your specialist field of work.

      People think it’ll do your job for you, but it’s more akin to a calculator. It’s a tool to help people.

      • CherenkovBlue@iusearchlinux.fyi · ↑2 · 1 year ago

        I find it an excellent tool to help me write. Staring at a blank page is one of the hardest hurdles to overcome. By asking ChatGPT questions, I start organizing my thoughts about what I want to write, and it gives me instant words on the page to start manipulating. I’m a subject matter expert on these topics and therefore screen what it gives me for correctness. It’s surprisingly good, though it has hallucinated some things. On balance, I find it very helpful.

    • sumofchemicals@lemmy.world · ↑6 · 1 year ago

      I have seen someone type “tell me how make a million dollar business” into ChatGPT. Of course that’s not going to work. But LLMs have immediate, obvious value that crypto does not, and I think making the comparison reveals a lack of experience with the useful applications. I’m using ChatGPT nearly every day as a tool to help with coding. It’s not a replacement for a person, but it is like giving a person a forklift.

  • daellat@lemmy.world · ↑2 ↓2 · edited · 1 year ago

    It’s certainly gotten worse, as we’ve probably all seen in the news. When GPT-4 came to the API, it was impressive at times. One caveat always remained: don’t blindly trust it, but that goes for Stack Overflow replies too.

    Ohh cool, a downvote and smug reply. Go back to reddit or something.

    Lol https://mastodon.social/@rodhilton/110894818243613681

    • abhibeckert@lemmy.world · ↑3 ↓1 · edited · 1 year ago

      I’ve seen that in the news, but I haven’t experienced it at all. In fact, I’m getting far better results now than I ever did before, though I suspect that’s mostly on me: experience using almost any tool will improve the output.