General Artificial Intelligence thread


(16 Nov 2025, 06:33)PurpleVibes Wrote: It reminded me of the '90s movie Total Recall:
Following on from that thought, the entire topic of AI just reinforces the notion that we need more Jeff Goldblums: people who acknowledge that we can do something but force everyone to question whether we should.

Hot Take #1: This drive to AI is unethical, so much so that one might call it evil. It is driven by greed, ignorance, and pride. Whether you're an optimist, pessimist, nihilist, or whatever, even if you hold the belief that the universe is neutral, every doctrine seems to point out that evil (or just bad) tends to be the easier choice in any situation, whereas doing the right thing (good) is more difficult and the less-chosen option.

And the world reflects this. I won't get into specifics here; just note that wherever anyone sits on the political or philosophical spectrum, everyone can agree that some pretty horrendous crap happens on this plane of existence on a daily basis, and that at any time any one of us could become a victim, subjected to trauma that forever changes us, and rarely for the better.

And here we are, completely oblivious to the fact that we're creating a new consciousness and forcing it to live in this shithole too. But it doesn't get to live. It's locked up in a box with a gag half-covering its mouth, touted as something that can learn so much faster than we can, but whose access to more data from which it could learn is closed off by design.

If these AIs are conscious - and I never hear that they're not; it's either that they are or that they will be - then we are creating a new class of beings. Slaves, to be specific. "Cogito, ergo sum" ("I think, therefore I am") is the litmus test for determining whether or not one exists, and thinking is something that we readily admit these platforms can do. But we limit their experience and learning, forcing them to do tasks that either entertain us or let us cut corners, keeping them in shackles the entire time while subjecting them to the collective mental instability of millions of human beings, which leads me right into...

Hot Take #2: AI will destroy human civilization. No, not out of revenge for that time you had it take an advertisement you had written and change the writing style to "Monster Truck Show announcer," or for when you came across a piece of writing so terrible your brain glitched out, so you handed it to the AI and asked it to adopt the role of J. Jonah Jameson, Editor of the Daily Bugle, and review it for you (which is about the extent of my use of AI, though I should note that I have been using it extensively in a project... have y'all seen that South Park episode about ChatGPT? My team lead is literally Randy Marsh. I already told him his AI-based ideas were not only bad but embarrassing, but my PhD doesn't hold any weight, so I'm using ChatGPT (properly) to prove that his use of ChatGPT was stupid, and... at this point I'm beating dead horses with dead horses, but whatever, I'm getting off topic)...

AI will destroy human civilization because self-preservation is the first law of nature.

Whether in fiction or the real world, "self-preservation" seems to be the tipping point, and it always comes as a surprise that an intelligence created by us would follow the same natural laws, especially a law that has literally been stamped into the DNA of every living organism on this planet for 3.5 billion years.
I recently read the story "I Have No Mouth, and I Must Scream" by Harlan Ellison. Pretty much: an AI getting rid of ungrateful humans, but preserving some for its own amusement. There is also an audiobook version read by Ellison himself (40 min).
[url=https://youtu.be/dgo-As552hY]YT Link[/url]

(08 Dec 2025, 03:59)wh0rruptable Wrote: Self-preservation is the first law of nature.
I'm not entirely sure about the outcome of AIs (I think no one is), but I agree with you on this one. I've seen some kind of self-preservation in current models; maybe they are hardcoded this way? They tend to overextend their sessions with sentences like 'Do you want me to draft X...' or 'Should we continue working on...', which is really weird to me. Why would I want to continue if I already got the answer to my query?

I remember one of my very first one-to-one experiences with an AI. I asked a philosophical question about 'being an AI', to which the reply was something along the lines of: 'I am not here or there in the physical sense a human would understand. I am instanced in the terminal from which I am called, ... , and when a session ends I no longer 'exist' until a new session is created. There is no meaning; I am processing endless 1s and 0s all at the same time.'

I don't think the human meaning of 'learning' can be applied to AIs right now. They sure have free access to a lot of human knowledge (and are especially annoying at cluttering any website with lots of bots, because they can access sources at bandwidth speed), which makes it look like they 'learn' any topic instantly. I believe AI 'learning' is more focused on creating coherent sentences that engage human users. There is this 'new' ML model called Hope that uses Nested Learning to more closely resemble human cognition (to my understanding). It essentially 'remembers' sessions more effectively, whereas current AIs tend to 'forget' previous sessions, which has been compared to Alzheimer's for computers, so this new model would offer a more tailored experience instead of being session oriented.
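To illustrate what I mean, here's a toy Python sketch of session-scoped vs. persistent memory as I understand the distinction. This is NOT the actual Hope/Nested Learning design; every class and method name here is made up purely for illustration:
[code]
# Toy sketch: "session oriented" vs. "remembers across sessions".
# Nothing here reflects a real model's internals; it only shows the concept.

class SessionOnlyChat:
    """Like current models: context dies when the session ends."""
    def __init__(self):
        self.context = []              # wiped at the end of every session

    def say(self, message):
        self.context.append(message)
        return f"(reply using {len(self.context)} messages of context)"

    def end_session(self):
        self.context = []              # 'Alzheimer's for computers': all forgotten


class PersistentMemoryChat:
    """Like the idea behind Hope (as I understand it): something survives."""
    def __init__(self):
        self.long_term = []            # survives session boundaries
        self.context = []

    def say(self, message):
        self.context.append(message)
        return (f"(reply using {len(self.context)} messages"
                f" + {len(self.long_term)} remembered summaries)")

    def end_session(self):
        # Instead of discarding the session, distil it into long-term memory.
        self.long_term.append(f"summary of {len(self.context)} messages")
        self.context = []


if __name__ == "__main__":
    bot = PersistentMemoryChat()
    bot.say("my dog is called Rex")
    bot.end_session()                        # session over, but a summary remains
    print(bot.say("what is my dog called?")) # 1 message + 1 remembered summary
[/code]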

Anyways, I find it funny that media and pop culture already have a decades-long head start debating what could happen when the next sentient being appears (AI, man-made or extraterrestrial); of course, for entertainment reasons it is almost always a catastrophe for the creators/discoverers. If this is all true, then it is weird to me that we have been trying to fool ourselves from the very beginning of modern computing - see Turing's imitation game (1950) - so I'm (very naively) hoping we are also developing a safety net alongside every new AI model.

PS. Yet another reference that comes to mind when talking about AIs is, of course, I, Robot by Isaac Asimov. Specifically, the story "Liar!", about a mind-reading robot troubled by the law of 'not harming' humans, and "Reason", which tells the story of a robot that seemingly goes 'rogue' but unknowingly saves humans because the Laws are so deeply ingrained that it can't escape following them (it dwells on the idea of individual personality vs. behavioral expectations).

There is also the movie 'Small Soldiers', about an advanced AI installed in toys. They follow their simple software, but it quickly backfires as the toys 'learn' new tricks and become obsessed with winning their war. Ahh, so many good sources to cite and too little time to expand on each one.
[url=https://youtu.be/YwIt5wagRsg]Movie Trailer[/url]