Just a guy shilling for gun ownership, tech privacy, and trans rights.

I’m open for chats on Mastodon: https://hachyderm.io/

my blog: thinkstoomuch.net

My email: [email protected]

Always looking for penpals!

  • 5 Posts
  • 78 Comments
Joined 2 years ago
Cake day: December 21, 2023

  • Fully agree. I tried to make the SC work and wrote off a lot of it as “I’m just not used to it”, but it really is asking a lot. In its defence, it was a first-run product. The fact that it’s still as usable and as weird as it is remains impressive to me. But it’s better as a piece of gaming history than as a good product. It was just a good try.

    I also agree that the Steam Deck controls are actually good. I want an SC2 that’s just a Steam Deck without the screen or the computer.

    So I guess the opposite of the steam brick.

    I’d gladly pay $100 for a Steam Deck-like control scheme for my desktop. Rechargeable batteries and a Linux-first design would be awesome. I don’t mind just using cables all the time, but I would like better wireless options for Linux gamepads (though to be fair, I haven’t tried connecting a wireless controller to a Linux box in five years).





  • For simple productivity like Copilot or text generation like ChatGPT:

    It absolutely is doable on a local GPU.

    Source: I do it.

    Sure, I can’t do auto-running simulations to find new drugs or do protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That’s honestly all I want.

    Also, massive compute projects like the @home projects are good?

    Local LLMs run fine on a five-year-old GPU, a 3060 with 12 gigs. I’m getting performance on par with models run in the cloud. I’m upgrading to a 5060 Ti just because I want to play with image gen.
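    To give a sense of what “runs fine” means in practice, something like this is the whole setup (a sketch using llama-cpp-python with a ~4-bit quantized GGUF model; the model file, context size, and prompt are placeholder examples, not my exact stack):

```python
# Sketch of a local LLM running entirely on a 12 GB GPU.
# Model path, context size, and prompt are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # any ~4-bit 7-8B quant fits in 12 GB
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=4096,       # modest context window to stay inside VRAM
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what `rsync -avz` does."}]
)
print(reply["choices"][0]["message"]["content"])
```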




  • nagaram@startrek.website to Lefty Memes@lemmy.dbzer0.com · Open Source washing · 1 month ago

    Which is funny since that does solve a lot of the problems.

    If it’s completely open source at least.

    Like, open-source data sets and models that can be run locally mean it’s not trained on stolen data and it’s not spying on people for more data.

    And if it runs locally on a GPU, it’s no worse for the environment than gaming. Really, the big problem with data center compute is the infrastructure needed to move all that data around.
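    Rough numbers, just to make the gaming comparison concrete (the power draw and generation time below are assumed round figures for illustration, not measurements):

```python
# Back-of-the-envelope energy comparison; every number is an assumed round figure.
gpu_power_watts = 200        # assumed desktop GPU draw under load
gaming_hours = 1.0           # one hour of gaming at full load
seconds_per_llm_reply = 20   # assumed time a local model spends generating one answer

gaming_wh = gpu_power_watts * gaming_hours                   # 200 Wh
reply_wh = gpu_power_watts * (seconds_per_llm_reply / 3600)  # ~1.1 Wh

print(f"1 h of gaming ~ {gaming_wh:.0f} Wh; one local LLM reply ~ {reply_wh:.1f} Wh")
```

    Under those assumptions, even a few dozen local queries a day adds up to a small fraction of a single gaming session, at least for inference on one desktop GPU.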



  • I am a fan of LLMs and what they can do, and as such have a server specifically for running AI models. However, I’ve been reading “Atlas of AI” by Kate Crawford and you’re right. So much of the data that they’re trained on is inherently harmful or was taken without consent. Even in the more ethical data sets it’s probably not great considering the sheer quantity of data needed to make even a simple LLM.

    I still like using it for simple code generation (this is just a hobby for me, so vibe coding isn’t a problem in my scenario) and corporate tone policing. And I tell people nonstop that it’s worthless outside of those use cases, and maybe as a search engine, but I recommend Wikipedia as a better starting point almost every time.