• Novamdomum@fedia.io
    3 months ago

    Just wait till someone turns AI worship into a literal religion. I’m kinda surprised it hasn’t already started happening. That would be a heck of a turning point for humanity. Just takes someone to train an AI into thinking it’s a God and then we really are done.

  • Grandwolf319@sh.itjust.works
    3 months ago

    Every time I see an image like this I think, it wouldn’t be so sad if it were what the picture implied. As in a physical humanoid that was almost human in behaviour. Chat bots are soooo far from that.

  • Una@europe.pub
    3 months ago

    I got banned from Reddit for 7 days for saying that sending death threats to someone who was transphobic was an appropriate defense :3

    • Kalladblog@lemmy.world
      3 months ago

      Not agreeing with someone’s identity and threatening to kill someone: those are in two entirely different ballparks. If you really think either is cool to do, Reddit already turned your brain to mush before you came here. Your “:3” doesn’t help your point either.

  • A_Chilean_Cyborg@feddit.cl
    3 months ago

    This is a serious thing in the autistic community, as in a very deadly one. Character AI is a suicide machine for autistic people; those shits should just be banned.

            • NotANumber@lemmy.dbzer0.com
              3 months ago

              There are, to my knowledge, two instances of this happening: one involving OpenAI, the other involving character.ai. Only one of these involved an autistic person. Unless you know of more?

              I also think it’s too soon to blame the AI here. Suicide rates in autistic people are ridiculously high. Something like 70% of autistic people experience suicidal ideation. No one really cared about this before AI. It’s almost like we are being used as a moral argument once again. It’s like “think of the children” but for disabled people.

          • mojofrododojo@lemmy.world
            3 months ago

            oh thank goodness, they’re gonna put in safety mechanisms after unleashing this garbage on the populace! phew, everything will be fine then, it won’t waste enormous amounts of resources to lie to people anymore? it won’t need new power stations? new water sources?

            oh wait… no, it’ll be marginally better but still use up all those resources. Oh wait, no, they won’t even fix it.

            what a stupid, silly waste of time and energy

            • NotANumber@lemmy.dbzer0.com
              3 months ago

              I don’t trust OpenAI and try to avoid using them. That being said they have always been one of the more careful ones regarding safety and alignment.

              I also don’t need you or openai to tell me that hallucinations are inevitable. Here have a read of this:

              Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, 2025-02-13, http://arxiv.org/abs/2401.11817

              Regarding resource usage: this is why open-weights models like those made by the Chinese labs or Mistral in Europe are better. Much more efficient and frankly more innovative than whatever OpenAI is doing.

              Ultimately, though, you can’t just blame LLMs for people committing suicide. It’s a lazy excuse to avoid addressing real problems like how society treats neurodivergent people. The same problems that lead to radicalization, including incels and neo-Nazis. These have all been happening since before LLM chatbots took off.

              • mojofrododojo@lemmy.world
                3 months ago

                Ultimately though you can’t just blame LLMs for people committing suicide.

                well that settles it then! you’re apparently such an authority.

                pfft.

                meanwhile here in reality the lawsuits and the victims will continue to pile up. and your own admitted attempts to make it safer - maybe that’ll stop the LLM-associated tragedies.

                maybe. pfft.

                • NotANumber@lemmy.dbzer0.com
                  3 months ago

                  well that settles it then! you’re apparently such an authority.

                  I am someone who is paid to research uses and abuses of AI and LLMs in a specific field. So compared to randos on the internet like you, yeah I could be considered an authority. Chances are though you don’t actually care about any of this. You just want an excuse to hate on something you don’t like and don’t understand and blame it for already well established problems. How about instead you actually take some responsibility for the state of your fellow human beings and do something helpful instead of being a Luddite.

      • Uri@infosec.pub
        3 months ago

        It’s so weird to see that people don’t understand internet censorship. Internet censorship will never bring good.
        If people want to die, let them… it’s their choice. Who the fuck is the government to decide what people will use?
        Censorship starts with one or two things and becomes the Great Firewall of China. Example: look at the UK, or just wait a bit. Today they’ll ban something (e.g. Facebook or whatever) and you’ll say “I don’t use it.” Tomorrow they’ll ban things you do use.

        • Nangijala@feddit.dk
          3 months ago

          Backwards mindset to say that censorship will never bring good.

          You okay with child porn too? How about all the naughty words that we aren’t supposed to say? Or encouragement of violence toward others? That shouldn’t be censored and removed?

          And honestly, I find it disturbing to say that if people want to kill themselves, they should just do it. Most suicidal people are stuck in a very dark place mentally that they CAN get out of if they get the right amount of help. Suicide isn’t always the correct solution. In most cases it, in fact, isn’t. It can be a desperate, stress-induced decision in response to a very difficult period in someone’s life. Something where they seek relief in the moment, but later are grateful they either didn’t go through with it or were saved. I have been in that dark pit myself and I’m so happy I didn’t go through with it, even though I wanted it all to end for years.

          There indeed are areas where censorship goes overboard, and that is something we will have to discuss and adjust together as a society forever as time passes. But censorship isn’t an all-bad thing. It is absolutely insane to think that any kind of censorship online is bad. I don’t think you actually agree with your own statement, if you think really deeply about it. Unless, of course, you have no morals or care for the safety and wellbeing of others.