Terminator movies aside, I have always had a rather deep-rooted mistrust of AI. I just don't feel it's healthy for computers to be "too smart." Some will say they will never be more than machines that follow their programming. But is that true? Is it possible for a supercomputer, or maybe a network of supercomputers, to gain so much knowledge that it begins to think of itself as self-aware? We program them to think like us, so is it such a stretch that one could actually begin to think of itself as a living thing? In the story below, someone performed an experiment that, in my opinion, is incredibly reckless. But the frightening thing, to me, is that the computer then set about the task of researching atomic weapons and how to procure them.
Have we gone too far?
“Someone Asked an Autonomous AI to 'Destroy Humanity': This Is What Happened
ChaosGPT has been prompted to "establish global dominance" and "attain immortality." This video shows exactly the steps it's taking to do so.”
https://apple.news/ArnDy8oOEQfaUK9M5L6cLxQ