• Scrubbles@poptalk.scrubbles.tech
    22 hours ago

    That’s the nuance of AI that anyone who has done actual work with ML has known for decades now. ML is amazing. It’s not perfect. It’s actually pretty far from perfect. So you should never, ever use it as the sole check, but it can be great as a double check.

    Such as with cancer. AI can be a wonderful choice for detecting a melanoma, if used correctly. Such as:

    • a doctor has already cleared a mole, but you want to know whether it warrants a second opinion from another doctor. You could require the model to report, say, 80% confidence that the first doctor is correct and the mole is fine.

    • if you do not have immediate access to a doctor, it can be a fine preliminary check, again only to a certain confidence. Say you are worried but cannot easily see a doctor: a patient could snap a photo, and a very high confidence rating would suggest it is probably fine, with a disclaimer that it is just an AI model and that if the mole changes, or you are still worried, you should get it checked.
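    The double-check pattern above boils down to mapping a model’s confidence to an action rather than a diagnosis. A minimal sketch of that idea (the function name and thresholds are illustrative, not from any real product):

    ```python
    # Hypothetical confidence-threshold triage: the model's output only
    # ever recommends an action, never replaces the doctor.

    def triage(benign_confidence: float) -> str:
        """Map the model's 'this is benign' confidence to a recommendation."""
        if benign_confidence >= 0.95:
            # Very high confidence: probably fine, but still only a double check.
            return "probably fine - recheck if it changes or you are worried"
        if benign_confidence >= 0.80:
            # Agrees with the first doctor, but a second opinion is cheap.
            return "likely fine - consider a second opinion"
        # Anything less confident goes straight to a human.
        return "see a doctor"
    ```

    The point of the structure is that there is no branch where the model alone says “you’re healthy, do nothing.”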

    Unfortunately, all of that nuance, the fact that it is all just probabilities, is completely lost on the creators of these AI tools, and the risks are not actually passed on to the users, so blind trust is the number one problem.

    We see it here with police too. “It said it’s them.” No, it only said, to a specific confidence, that it might be them. That’s a very different thing. You should never use it to find someone, only to verify someone.

    I actually really like how airport security implemented it, because it’s actually using it well. Here’s an ID with a photo of a person. Compare it to the photo taken there in person, and it should verify to a very high confidence that they are the same person. If in doubt, there’s a human there to verify it as well. That’s good ML usage.
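    The airport case is a 1:1 verification: one live photo against one claimed identity, with a human fallback below the threshold. A rough sketch under the assumption of some face-embedding model (the embeddings and the 0.9 threshold are made-up illustrations):

    ```python
    # Illustrative 1:1 face verification: compare two embedding vectors.
    # How the embeddings are produced is out of scope and assumed.
    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        """Standard cosine similarity between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm

    def verify(id_photo_emb: list[float], live_photo_emb: list[float],
               threshold: float = 0.9) -> bool:
        """1:1 check: does the live photo match this one ID photo?
        Anything below the threshold falls back to the human officer;
        it is never an automatic reject or an identification."""
        return cosine_similarity(id_photo_emb, live_photo_emb) >= threshold
    ```

    Contrast that with 1:N identification, scanning a database for the best match, which is exactly the “use it to find someone” misuse described above.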