• huginn@feddit.it · 8 points · 10 months ago

    Friendly reminder that your predictive text, while very compelling, is not alive.

    It’s not a mind.

    • _NoName_@lemmy.ml · 3 points · 9 months ago

      For centuries we’ve been reading stories about people who don’t exist, and that’s stopped no one from sympathizing with them - and now there’s a chance you could have an open conversation with them. I don’t think most people will care, so long as their NPC interactions end up compelling.

      Like, I think a lot of us assume we care about the authors who write the character dialogue, but most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC’s personality isn’t manufactured.

      Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

    • MxM111@kbin.social · 1 point · 10 months ago

      While it is not alive, whether it is a mind is not clear-cut. It could be called a kind of mind, one different from a human’s.

      • huginn@feddit.it · 0 points · 10 months ago

        Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, and it has been mathematically proven not to show any form of emergent behavior.

        • MxM111@kbin.social · 1 point · 10 months ago

          I do not think that it is a “linear” progression. An ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
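
          A minimal sketch of the nonlinearity point (toy numpy code, nothing model-specific): stack two layers with no activation and they collapse into a single linear map; put a nonlinearity between them and they don’t.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          W1 = rng.standard_normal((4, 3))
          W2 = rng.standard_normal((2, 4))
          x = rng.standard_normal(3)

          # Two stacked layers with no activation are just one linear map:
          assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

          # A nonlinearity between the layers breaks that collapse - this is
          # what makes an ANN more than a "linear progression":
          y = W2 @ np.tanh(W1 @ x)
          ```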

        • Kogasa@programming.dev · 1 point · 10 months ago

          No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

          • huginn@feddit.it · 1 point · edited · 10 months ago

            Here’s a paper arguing exactly this:

            1. Claimed emergent properties are illusory, an artifact of the metrics used to measure them
            2. Progress with model size is smooth and predictable

            https://arxiv.org/abs/2304.15004

            The field changes fast; I understand it is hard to keep up.
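
            To make the measurement point concrete, here’s a toy sketch of the paper’s core argument (all numbers invented; only the shape matters). The same smoothly improving per-token accuracy looks gradual under a smooth metric but appears to “emerge” suddenly under an all-or-nothing exact-match metric:

            ```python
            import numpy as np

            sizes = np.logspace(7, 11, 9)                    # 10M .. 100B params (made up)
            per_token = 1 - 0.5 * (sizes / 1e7) ** -0.3      # smooth, predictable gain

            seq_len = 30
            exact_match = per_token ** seq_len               # all 30 tokens must be right

            for n, p, em in zip(sizes, per_token, exact_match):
                print(f"{n:14,.0f} params | per-token {p:.3f} | exact-match {em:.4f}")
            ```

            Per-token accuracy climbs smoothly the whole way, while exact-match sits near zero and then shoots up - exactly the kind of curve that gets plotted as a sudden “emergent” ability.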

    • Poggervania@kbin.social · 2 points, 1 down · 10 months ago

      Cyberpunk 2077 sorta explores this a bit.

      There’s a vending machine with a personality that talks to people walking by. The quest chain basically has you chatting with it and even giving it advice about a person he has a crush on. You eventually become friends with the machine.

      Just when it seems the machine is an AI developing genuine sentience, it turns out it simply has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

      So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.