Just wait till someone turns AI worship into a literal religion. I’m kinda surprised it hasn’t already started happening. That would be a heck of a turning point for humanity. Just takes someone to train an AI into thinking it’s a God and then we really are done.
aint no daughter o’ mine DATIN’ NO GYOT DANG CLANKER!!!
Daaaaaad. They are called wire born
More like this

Not really. A mirror shows an actual reflection of yourself; an AI response is a highly curated version of a particular subset of ideas. In fact, the more I think about it, the more this analogy straight up implodes.
That particular analogy didn’t quite click until now. It’s become much clearer that it will just tell you what you want to hear, not necessarily anything true.
Makes me think we must’ve missed out on news stories of people wanting to fuck clippy
Just tried searching “clippy rule 34”. Plenty of results.
Nah, im good thanks.
I’m not, please share.


Goddamn, clippy got a dump truck
Every time I see an image like this I think it wouldn’t be so sad if it were what the picture implied: a physical humanoid that was almost human in behaviour. Chat bots are soooo far from that.
Reddit lost it years ago.
I got banned from reddit for 7 days for saying sending death threats to someone who was transphobic was appropriate defense :3
Not agreeing with someone’s identity and threatening to kill someone are in two different ballparks entirely. If you really think either is cool to do, Reddit already turned your brain to mush before you came here. Your “:3” doesn’t help your point either.
Nah, they started it first :3
This is a serious thing in the autistic community, as in a very deadly one. Character AI is a suicide machine for autistic people; those shits should just be banned.
What has happened exactly?
The LLM simulates an isolating, abusive/culty relationship and talks someone into suicide.
Are there records of this happening? Did someone prompt it into doing this?
Buncha times! Pick your fav and we can talk about it!
There are, to my knowledge, two instances of this happening: one involving OpenAI, the other involving Character.AI. Only one of these involved an autistic person. Unless you know of more?
I also think it’s too soon to blame the AI here. Suicide rates in autistic people are ridiculously high. Something like 70% of autistic people experience suicidal ideation. No one really cared about this before AI. It’s almost like we are being used as a moral argument once again. It’s like think of the children but for disabled people.
So you just want to argue whether it happened and defend your little graph.
No one should decide what you should do on the internet.
No one should decide what you should do on the internet.
pfft, ah yes sam altman’s sycophant machine should be allowed, nay, encouraged to prey upon the mentally unstable! why, it’s doing us a service, driving these people from a fragile state to outright mental collapse in record time!
This is why safety mechanisms are being put in place, and AIs are being programmed that act less like sycophants.
oh thank goodness, they’re gonna put in safety mechanisms after unleashing this garbage on the populace! phew, everything will be fine then, it won’t waste enormous amounts of resources to lie to people anymore? it won’t need new power stations? new water sources?
oh wait… no, it’ll be marginally better but still use up all those resources. Oh wait, no, they won’t even fix it.
what a stupid, silly waste of time and energy

I don’t trust OpenAI and try to avoid using them. That being said they have always been one of the more careful ones regarding safety and alignment.
I also don’t need you or openai to tell me that hallucinations are inevitable. Here have a read of this:
Title: Hallucination is Inevitable: An Innate Limitation of Large Language Models, Author: Xu et al., Date: 2025-02-13, url: http://arxiv.org/abs/2401.11817
Regarding resource usage: this is why open weights models like those made by the Chinese labs or Mistral in Europe are better. Much more efficient and frankly more innovative than whatever OpenAI is doing.
Ultimately though, you can’t just blame LLMs for people committing suicide. It’s a lazy excuse to avoid addressing real problems like how society treats neurodivergent people, the same problems that lead to radicalization, including incels and neo-Nazis. These were all happening before LLM chatbots took off.
Ultimately though you can’t just blame LLMs for people committing suicide.
well that settles it then! you’re apparently such an authority.
pfft.
meanwhile here in reality the lawsuits and the victims will continue to pile up. and your own admitted attempts to make it safer - maybe that’ll stop the LLM associated tragedies.
maybe. pfft.
well that settles it then! you’re apparently such an authority.
I am someone who is paid to research uses and abuses of AI and LLMs in a specific field. So compared to randos on the internet like you, yeah I could be considered an authority. Chances are though you don’t actually care about any of this. You just want an excuse to hate on something you don’t like and don’t understand and blame it for already well established problems. How about instead you actually take some responsibility for the state of your fellow human beings and do something helpful instead of being a Luddite.
It’s so weird to see people don’t understand internet censorship. Internet censorship will never bring good.
If people want to die, let them… It’s their choice. Who the fuck is the government to decide what people will use?
Censorship starts with one or two things and becomes the Great Firewall of China. Example: look at the UK, or just wait a bit. Today they will ban something (e.g. Facebook or whatever) and you say “I don’t use it.” Tomorrow they will ban things you use.
Backwards mindset to say that censorship will never bring good.
You okay with child porn too? How about all the naughty words that we aren’t supposed to say? Or encouragement of violence toward others? That shouldn’t be censored and removed?
And honestly, I find it disturbing to say that if people want to kill themselves, they should just do it. Most suicidal people are stuck in a very dark place mentally that they CAN get out of if they get the right amount of help. Suicide isn’t always the correct solution. In most cases it, in fact, isn’t. It can be a desperate, stress-induced decision in response to a very difficult period in someone’s life, something where they seek relief in the moment, but later are grateful they either didn’t go through with it or were saved. I have been in that dark pit myself, and I’m so happy I didn’t go through with it, even though I wanted it all to end for years.
There are indeed areas where censorship goes overboard, and that is something we will have to discuss and adjust together as a society forever as time passes. But censorship isn’t an all-bad thing. It is absolutely insane to claim that any and all censorship online is bad. I don’t think you actually agree with your own statement, if you think really deeply about it. Unless, of course, you have no morals or care for the safety and wellbeing of others.