• 18 Posts
  • 2.4K Comments
Joined 2 years ago
Cake day: March 22nd, 2024







  • Maybe there’s confusing crossover?

    I’m of the opinion that Starfield, in particular, is unreasonably tolerated even though (from what I played) it’s a dreadful, archaic, boring and sluggish game. I’m of the opinion that FO76 released in a particularly bad state, and that Todd behaved in a smiley “tech bro” kind of way immediately after its release. And I will pound BGS all day over that.

    On the other hand, yeah, I’m all for devs re-releasing games. It gives them visibility! BGS does it so much it’s kind of a meme, but it’s not bad.


    So, BGS deserves some skepticism. But not over Skyrim, really.





  • Friend, I’m going to be blunt: I think you may have spent time creating this with help from an LLM, and it told you too much of what you want to hear because that’s what they’re literally trained to do.

    As an example: “relativistic coherence”? Computational cycles and SHA-512 checksums and bit flips and prime instances? You are mixing modern technical terms with highly speculative, theoretical concepts in a way that… just isn’t really compatible.

    And the text, from what I can parse, is similar. It mixes a lot of contemporary “anthropic” concepts (money, the 24 hour day, and so on), terms that loosely apply to text LLMs, and a few highly speculative concepts that may or may not even apply to the future.


    If you are concerned about AI safety, I think you should split your attention between contemporary, concrete systems we have now and the more abstract, philosophical research that’s been going on even before the LLM craze started. Not mix them together.

    Look into what local LLM tweakers are doing. With, for instance, alignment datasets, experiments on “raw” pretrains, or more cutting-edge abliteration tools like: https://github.com/p-e-w/heretic

    In other words, look at the concrete, and how actual safety systems can be applied now. Outlines like yours are interesting, but they can’t actually be applied or enforced.

    And on the philosophical side, basically ignore any institute or effort started after 2021, when the “Tech Bro” hype, and then the late-2022 release of ChatGPT (built on GPT-3.5), muddied the waters. But there was plenty of safety research going on before then. There are already many documents/ideas similar to what you’re getting at in your outlines: https://en.wikipedia.org/wiki/AI_safety




  • Not going to get into logistical analysis (I am behind on that). Nor will I dispute the hypocrisy of focusing only on a “white war.” That’s fair.

    But I firmly believe the justification for Russia’s action is total baloney. I can, and absolutely will, write it off.

    To put it another way: even if Mexico were provably 100% Nazi, and they worshipped China and drug cartels and whatever boogeyman we have like gods, I would still be ashamed if my country, the US, invaded them the way Russia invaded Ukraine. It’s beyond preposterous to think they pose a military threat to the US, or that it’s our job to purify them, much less to breathlessly excuse such an invasion as (say) Russia’s fault.

    That’s what I mean by “Tankies.”