WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

  • Daemon Silverstein@thelemmy.club · 2 days ago

    Not exactly related to technology, but I wish for an LLM that could talk with me (and give me valuable insights) about things like black magick, chaos magick, summoning practices, and rituals involving literal “demons” (as in Goetia and demonolatry), as well as very dark poetry and enchantments (texts that handle very sensitive elements symbolically and metaphorically, such as very deep gory goth imagery). These “ethical boundaries” also affect how LLMs can talk about such topics, because LLMs deem them “dangerous” (especially Claude, a very sensitive LLM that even refuses to talk about Lilith).

  • KairuByte@lemmy.world · 1 year ago

    Everyone is talking about this being used for hacking; I just want it to write me code to inject into running processes for completely legal reasons, but it always assumes I’m trying to be malicious. 😭

  • tree@lemmy.ml · 1 year ago

    A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code accordingly. Instead of needing to contact a command-and-control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.

  • StenSaksTapir@feddit.dk · 1 year ago

    So much for Elon’s new AI company. Wasn’t that supposed to be this? Like a ChatGPT that isn’t “woke”, so it can be a safe space for fascists, homophobes/transphobes, and misinformation enthusiasts.

  • vrighter@discuss.tchncs.de · 1 year ago

    As more people post AI-generated content online, future AI will inevitably be trained on AI-generated material and basically implode (an inbreeding kind of thing).

    At least that’s what I’m hoping for
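    That “inbreeding” intuition can be illustrated with a toy simulation (not actual LLM training — the function name and parameters here are arbitrary): repeatedly fit a simple Gaussian model to samples drawn from the previous generation’s fit, and the distribution narrows over generations because each refit underestimates the tails.

    ```python
    # Toy sketch of "model collapse": each generation fits a Gaussian to
    # samples produced by the previous generation's model, then the next
    # generation trains on that output. The fitted spread shrinks over
    # time because each finite-sample refit loses tail mass.
    import numpy as np

    def collapse_demo(n=100, generations=300, seed=0):
        rng = np.random.default_rng(seed)
        mu, sigma = 0.0, 1.0          # the "real data" distribution
        history = [sigma]
        for _ in range(generations):
            samples = rng.normal(mu, sigma, size=n)  # model "publishes" output
            mu = samples.mean()                      # next model refits on it
            sigma = samples.std()                    # MLE estimate (ddof=0)
            history.append(sigma)
        return history

    hist = collapse_demo()
    print(f"initial std: {hist[0]:.3f}, final std: {hist[-1]:.3f}")
    ```

    Whether real web-scale training behaves this badly is an open question — curation and fresh human data counteract the effect — but the shrinking-variance mechanism is what the comment is gesturing at.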

    • Paralda@programming.dev · 1 year ago

      That’s not really how it works, but I hear you.

      I don’t think we can bury our heads in the sand and hope AI will just go away, though. The cat is out of the bag.