Should we be afraid of Artificial Intelligence?

Mailman
Terminator movies aside, I have always had a rather deep-rooted mistrust of AI. I just don’t feel that it’s healthy for computers to be “too smart”. Some will say they will never be more than machines following their programming. But is that true? Is it possible for a supercomputer, or a network of supercomputers, to gain so much knowledge that it begins to consider itself “self aware”? We program them to think like us, so is it such a stretch that one could actually begin to think of itself as a living thing? In the story below, someone performed an experiment that, in my opinion, is incredibly reckless. But the frightening thing, to me, is that the computer then set about the task of researching atomic weapons and how to procure them. 😳
Have we gone too far?

“Someone Asked an Autonomous AI to 'Destroy Humanity': This Is What Happened

ChaosGPT has been prompted to "establish global dominance" and "attain immortality." This video shows exactly the steps it's taking to do so.”

https://apple.news/ArnDy8oOEQfaUK9M5L6cLxQ

 
It's an interesting question. AI is in its infancy. How we guide it, or allow it to evolve, now will have an impact on its future and on whether "Skynet" remains a Hollywood fantasy or becomes a dark reality.
For those of you not following AI or dismissing its future implications, our (US) military is already testing AI in air combat. Don't know about you guys, but I think that's both awesome and terrifying at the same time.
Alex Hollings, Sandboxx News:


 
Excerpt from Popular Mechanics:

https://apple.news/A34BZXDIwShm2C4fn_sB3hw

“A Chernobyl for AI May Be Imminent, Scientist Says”
AI expert Stuart Russell reiterates the need for a pause in AI expansion before humanity loses control.
Leaders are calling on AI creators to ensure the safety of AI systems before releasing them to the public.

Russell, a computer science professor at the University of California, Berkeley, has spent decades as a leader in the AI field. He's also joined other prominent figures, like Elon Musk and Steve Wozniak, in signing an open letter calling for a pause on development of powerful AI systems—defined as anything more potent than OpenAI's GPT-4.

AI labs continue an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”
 
From the book DUNE, by Frank Herbert (and the "Orange Catholic Bible") : "Thou shalt not create machines in the likeness of man's mind."
I'm not sure of the definition of AI.
You were well out on the ledge when you started googling every darned question you have.
You've been counting on the gatekeepers of information (AI) to make choices in your best interest....
All they're doing now is cloaking the output to mimic human discourse?
 
The race by big tech firms to advance and secure AI dominance is happening far too quickly and without serious consideration for safety. As we have witnessed endlessly throughout the tech boom, “can we” always outweighs “should we”. It is easy to regard technological advancement as innocuous; cell phones were wonderful until they weren’t, and by then it was too late. I sit atop my motorcycle at a four-way intersection waiting for the light to change. I scan the surroundings and, as expected, it never changes much: almost every individual, young and old, gazes into a cell phone screen, eyes darting up to the light every 15 seconds or so if they want to avoid a blaring horn. AI is too young, and its ability to sculpt so much of our existence is being greatly underestimated and understated. If we want to avoid the mistakes of the past, we should navigate its growth with a bit more patience and respect.
 