• Ulrich@feddit.org

    you’d have to store and process hours and hours of audio data that didn’t tell us much

    I mean, that could be solved with something as simple as a local transcription service…

    • WalnutLum@lemmy.ml

      And do what? Sentiment analysis on the conversation you were having?

      Remember semantically aware models are still fairly new and even they lack the context for a particular field of text. That’s something even the new fancy LLMs struggle with.

      Unnecessary when there are far better targeted models trained on years of data that people willingly send as part of everyday smartphone use.

      • Ulrich@feddit.org

        Sentiment analysis on the conversation you were having?

        Among other things, sure. More simply, keyword analysis.
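A keyword pass over transcribed audio really is that simple. Here's a minimal hypothetical sketch (the keyword list and transcript text are invented for illustration, not taken from any real ad system):

```python
# Hypothetical sketch: naive keyword spotting over a transcript.
# AD_KEYWORDS and the sample transcript are made up for illustration.
import re
from collections import Counter

AD_KEYWORDS = {"vacation", "mortgage", "diet", "car"}

def keyword_hits(transcript: str) -> Counter:
    """Count how often ad-relevant keywords appear in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return Counter(w for w in words if w in AD_KEYWORDS)

hits = keyword_hits("We talked about a vacation and maybe a new car. Vacation plans!")
print(hits)  # Counter({'vacation': 2, 'car': 1})
```

Something like this needs no semantic understanding at all, which is the point: even crude matching would extract targeting signal from raw transcripts.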

        Remember semantically aware models are still fairly new and even they lack the context for a particular field of text.

        All of these “models” are useless garbage, but that doesn’t stop companies from cramming them absolutely everywhere they can.

        Unnecessary

        None of what they do is “necessary”. They could just ask you what your relevant interests are and you could tell them, but they do it anyway. They go to great lengths to get their hands on any seemingly insignificant scrap of data.