So it’s essentially the same mechanism by which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

Yeah, from the article:

Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in a clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, and all the agreeing voices tuned to your personality, saying exactly what you need to hear to maximize the effect.

It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.
The time will come when we look back fondly on “organic” conspiracy nuts.
Human-level? Have these people used ChatGPT?
I have, and I find it pretty convincing.