I’ve never had an account with these. Do I need to create an account with them to freeze my credits? And what kinds of information should I give / not give when I do?
thanks for clarifying! that’s really helpful!
haha nice. I’ll try that next time
gotcha, thanks for clarifying :)
“NOPE” as in “not a dark pattern” or as in “I’m not touching this site”? if former, can you clarify on the reason?
can you clarify on the 7?
thanks for confirming my suspicion. As for your question: conda in general is good for installing non-Python binaries when needed, and for managing environments. I don’t use Anaconda myself, but it provides a good-enough interface for beginners and folks without much coding experience. It’s usually easier for them to use that than other variants, or than the plain-Python route of setting up environments.
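For anyone new to this, here’s a minimal sketch of the two routes (the env name and packages are just placeholder examples):

```shell
# conda route: one tool for both envs and non-Python binaries
conda create -n myproj python=3.11 -y
conda activate myproj
conda install numpy ffmpeg -y   # ffmpeg being an example of a non-Python binary

# plain-Python route for comparison: venv + pip (Python packages only)
python3 -m venv .venv
source .venv/bin/activate
pip install numpy
```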
If you’ve never worked before, this can be considered a practice run for when you do.
Like one of the other commenters said, assume everything is accessible by Google and/or your university (and later, your boss, company, organization, …).
And not just you, but also the people who interact with you through it. That means you may be able to put up defenses, but if they don’t (and they most likely do not), the data you exchange with them would likely be accessible as well.
So here are some potential suggestions to minimize private-data access by Google/university while still being able to work with others (adjust things depending on your threat model of course):
You can also just post the 4-5 data items without claiming that this is low or high credibility or bias, then let people make the decision themselves. Like this maybe:
“Based on source X, this source’s media bias is:
Methodology of X is at: “
I find it quite common (and confusing) for certain news types like policy, e.g. “party A reverses the disapproval to oppose the once-unacceptable ban”
I’m also curious. A quick search came up with these. Not sure which one is most reliable/updated
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did a disservice to the analysis by not including a link to the report (and code) that outlines the methodology and what the distribution of similarities looks like. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
you should try to ask the same question using xAI / Grok if possible. May also ask ChatGPT about Altman as well
welp, guess you’re right. It’s not common, but it’s not just a few someones either.
tell me more about the “almost” part …
Based on this reddit comment, that website is not affiliated with the magic-wormhole CLI tool
I believe experiments like these should move more slowly and with more scrutiny: as in, more animal testing before moving on to humans, especially given the controversies surrounding Neuralink’s earlier animal experiments.
I think porn generation (image, audio, and video) will eventually be very realistic and very easy to make with only a few clicks and some well-crafted prompts. Things will just be on a whole other level than what Photoshop used to be.
re: your last point, AFAIK, the TLDR bot is also not AI or LLM; it uses more classical NLP methods for summarization.
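For the curious: “classical” here usually means something like extractive, frequency-based sentence scoring rather than generating new text. A toy sketch of that idea (an illustration only, not the TLDR bot’s actual code):

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Extractive summary: score sentences by word frequency, keep the top n."""
    # Split on sentence-ending punctuation followed by whitespace
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the total corpus frequency of its words
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    # Emit the selected sentences in their original order
    return ' '.join(s for s in sentences if s in top)
```

Real systems add refinements (TF-IDF weighting, position bias, graph ranking à la TextRank), but the core is this kind of scoring, no LLM required.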
Wonder how the survey was sent out and whether that affected sampling.
Regardless, with ~3-4k responses, that’s disappointing, if not concerning.
I only have a more personal sense for Lemmy. Do you have a source for Lemmy gender diversity?
Anyway, what do you think are the underlying issues? And what would be some suggestions to the community to address them?