• Snowman44@lemmy.world · 1 year ago

      ChatGPT just went Mr. Incredible on you.

      I’d like to tell you that the captcha says overlooks and inquiry, but I can’t. I’m sorry ma’am. I know you’re upset. I’d like to help you, but I can’t.

  • profdc9@lemmy.world · 1 year ago

    Everyone knows that the real purpose of CAPTCHA tests is to train computers to replace us.

    • hex@programming.dev · 1 year ago

      This but unironically… The purpose literally is to train computers to get better at recognising things.
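
A toy sketch of the idea (all names here are made up, and this is not Google's actual implementation): pair an image whose label is already known with one whose label is unknown. The known image gates access, while the crowd's answers for the unknown image are collected as training labels, which is roughly the two-word reCAPTCHA scheme.

```python
from collections import Counter

known = {"img_001": "STOP"}   # image with a ground-truth label
unknown_votes = Counter()     # crowd answers for the unlabeled image

def submit(known_answer: str, unknown_answer: str) -> bool:
    """Pass the user if they solve the known image; record their
    answer for the unknown image as a candidate training label."""
    if known_answer.strip().upper() != known["img_001"]:
        return False
    unknown_votes[unknown_answer.strip().upper()] += 1
    return True

def consensus_label(min_votes: int = 3):
    """Return a label for the unknown image once enough users agree."""
    if not unknown_votes:
        return None
    label, count = unknown_votes.most_common(1)[0]
    return label if count >= min_votes else None
```

The point of the pairing is that the site never needs to know the second image's answer up front; agreement among humans who already proved themselves on the first image stands in for ground truth.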

    • Draconic NEO@lemmy.world · 1 year ago

      And also to frustrate people who use anonymization techniques, including the Tor network, into turning off their protections so they can be more easily fingerprinted.

    • over_clox@lemmy.world · 1 year ago

      The funniest part of that is that the people designing the AI systems seem to be completely oblivious to the fact that they’re slowly but surely trying to eliminate their own species. ☹️

      • sheogorath@lemmy.world · 1 year ago

        Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

  • Overzeetop@kbin.social · 1 year ago

    There is considerable overlap between the smartest AI and the dumbest humans. The concerns over bears and trash cans in US National Parks was ahead of its time.

  • Phen@lemmy.eco.br · 1 year ago

    Curious how this study suggesting we need a new way to prevent bots came out just a few days after Google started taking shit for proposing something that, among other things, would do just that.

  • superkret@feddit.de · 1 year ago

    - online study
    - not peer reviewed
    - “published” on arxiv (which is a public document server, not a journal)
    - study and authors not named or linked in the article

    tl/dr: “Someone uploaded a pdf and we’re writing about it.”

    • barsoap@lemm.ee · 1 year ago

      I suppose it’s this paper. The most prolific author seems to be Gene Tsudik, h-index of 103. Yeah, that’s not “someone”. Also, the paper has been accepted at USENIX Security 2023, which is actually ongoing right now.

      Also, CS doesn’t really do academia like other sciences, sitting somewhere at the intersection of maths, engineering, and tinkering. Shit’s definitely not invalid just because it hasn’t been submitted to a journal; this could’ve been a blog post, but there are academics involved, so publish or perish applies.

      Or, differently put: if you want to review it, bloody hell, do it; it’s open access. A quick skim tells me “way more thorough than I care to read for the quite less than extraordinary claim”.

    • CookieJarObserver@sh.itjust.works · 1 year ago

      I mean, it’s pretty obvious that nowadays AI is absolutely capable of doing that, and some people are just blind or fat-finger the keyboard.

    • Zeth0s@lemmy.world · 1 year ago

      You are overrating peer review. It’s basically a tool to help editors understand whether a paper “sells”, to improve readability, and to discard clear garbage.

      If the methodology is not extremely flawed, peer review almost never impacts the quality of the results, as reviewers do not redo the work. From a “trustworthiness” point of view, peer review is comparable to a biased RNG. Google the actual reproducibility rates of published experiments and peer-review biases for more details.

      Preprints are fine, just less polished.

        • Zeth0s@lemmy.world · 1 year ago

          Unfortunately not. https://www.nature.com/articles/533452a

          Most peer-reviewed papers are not reproducible. Peer review has the primary purpose of telling the editor how sellable a paper is in a small community they only superficially know, and of making it more attractive to that community by suggesting rephrasing of paragraphs, additional references, and additional supporting experiments to clarify unclear points.

          But it doesn’t guarantee the methodology is not flawed. Editors choose reviewers very superficially, reviews are mainly driven by biases, and reviewers cannot judge the quality of the research because they do not reproduce it.

          The honesty of researchers is what guarantees the quality of a paper.

          • C4d@lemmy.world · 1 year ago

            Yes. A senior colleague sometimes tongue-in-cheek referred to it as Pee Review.

            • Zeth0s@lemmy.world · 1 year ago

              The downvotes on my comments show that not many people here have ever done research or know the editorial system of scientific journals :D

              • C4d@lemmy.world · 1 year ago

                There is some variation across disciplines; I do think that in general the process does catch a lot of frank rubbish (and discourages submission of obvious rubbish), but from time to time I do come across inherently flawed work in so-called “high impact factor” and allegedly “prestigious” journals.

                In the end, even after peer review, you need to have a good understanding of the field and to have developed and applied your critical appraisal skills.

                • barsoap@lemm.ee · 1 year ago

                  And TBF, just getting onto arxiv also means you jumped a bullshit hurdle: roughly speaking, you need to hold a position in academia, or someone there needs to vouch for the publication. At the same time, getting something published there isn’t exactly prestigious, so there’s no real incentive to game the system; as such, the bar is quite low but consistent.

                • Zeth0s@lemmy.world · 1 year ago

                  Absolutely. One needs to know what one is reading. That’s why preprints are fine.

                  High-impact-factor journals are full of works that are purposely wrong, made because the authors want the results that readers are looking for (that is the easiest way to get published in a high-impact-factor journal).

                  https://www.timeshighereducation.com/news/papers-high-impact-journals-have-more-statistical-errors

                  It’s the game. The reader must know how to navigate the game, both for peer-reviewed papers and preprints.

  • tacosplease@lemmy.world · 1 year ago

    Just encountered a captcha yesterday that I had to refresh several times and then listen to the audio playback. The letters were so obscured by a black grid that it was impossible to read them.

  • Rhaedas@kbin.social · 1 year ago

    So just keep the existing tests and change the passing ones to not get access. Checkmate robots.

    Just kidding, I welcome our robot overlords…I’ll act as your captcha gateway.

  • sprl@lemm.ee · 1 year ago

    Today I had to do 15 different captcha tests one after the other, and they still wouldn’t validate me.

  • NathanielThomas@lemmy.world · 1 year ago

    I failed a captcha repeatedly until I discovered you can listen to a description and then enter it. Visually, I could not figure out what I was looking at.

  • dan1101@lemm.ee · 1 year ago

    So is it time to get rid of them then? Usually when I encounter one of those “click the motorcycles” I just go read something else.

    • T156@lemmy.world · 1 year ago

      It’s a double-edged sword. Just because it doesn’t work perfectly doesn’t mean it doesn’t work.

      To a spammer, building something with the ability to break a captcha is more expensive than something that cannot, whether in terms of development time, or resource demands.

      We saw with a few Lemmy instances that they’re still good at protecting instances from bots and bot signups. Removing captchas entirely means erasing that barrier of entry that keeps a lot of bots out, and might cause more problems than it fixes.
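
As a toy illustration of the barrier (this is not Lemmy's actual implementation, and the names are made up): even a trivial challenge-response gate on signup forces a bot author to fetch the page, parse the challenge, and echo it back, instead of just blindly POSTing a registration form.

```python
import random
import string

_pending: dict[str, str] = {}  # challenge id -> expected answer

def issue_challenge() -> tuple[str, str]:
    """Create a (challenge_id, answer) pair for the signup page.
    In a real deployment the answer would be rendered as a
    distorted image rather than sent as plain text."""
    cid = "".join(random.choices(string.ascii_lowercase, k=8))
    answer = "".join(random.choices(string.digits, k=6))
    _pending[cid] = answer
    return cid, answer

def verify(cid: str, answer: str) -> bool:
    """Single-use check: the challenge is consumed whether or not
    the answer matches, so it can't be replayed."""
    expected = _pending.pop(cid, None)
    return expected is not None and answer == expected
```

The cost asymmetry is the whole point: the server does almost no work per challenge, while each bot must carry enough machinery to solve it, which is exactly the "more expensive to break than to ignore" trade-off described above.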

      • IAm_A_Complete_Idiot@sh.itjust.works · 1 year ago

        The problem is this assumes that everyone has to build their own captcha solver. It’s definitely a bare-minimum barrier to entry, but it’s really not a sustainable solution to begin with.

    • panCatQ@lib.lgbt · 1 year ago

      They were never a test to evade bots to begin with; most captchas were used to train machine learning algorithms, i.e. to train the bots! Because it was manual labour, Google got it done for free using this bullshit captcha thingy! We sort of trained bots to read obscure text, and kinda did the labour for corps for free!

    • brsrklf@compuverse.uk · 1 year ago

      “Please complete the next 200 captchas so we can have a reasonably accurate estimate of your success rate”