Should we be afraid of Artificial Intelligence?

‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation

The man often touted as the godfather of AI has quit Google, citing concerns over the flood of misinformation, the possibility for AI to upend the job market, and the “existential risk” posed by the creation of a true digital intelligence.

The neural network pioneer says dangers of chatbots were ‘quite scary’ and warns they could be exploited by ‘bad actors’

Dr Geoffrey Hinton, who with two of his students at the University of Toronto built a neural net in 2012, quit Google this week, as first reported by the New York Times.

Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field. He was brought on by Google a decade ago to help develop the company’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT.

https://apple.news/AZoR5nsF6TMug_wArvyBL7Q
 


I'VE CREATED A.I.!!!!!!!!!!!!!!!!!!!....

....oh, s#!+....
 
... or Iran or N Korea..... yeah, it's an arms race whether we like it or not.
 
Did we learn nothing from Terminator or Frankenstein?
Humankind is flawed. I understand the urge to create, but don't try to be God, because the creations will also be flawed.
 
A really destructive part of AI is already playing out in elections all over the world, with AI-generated ads and people using AI to ask questions and get answers. This is leading, or misleading, the public into believing that something they have been told (and seen with their own eyes) is real when it isn't, and then voting on that AI-generated information.

Any restrictions to be enforced on AI are too late to control the effects it already has on our lives, now and in the future. Like the internet, it's a case of shutting the gate after the horse has bolted.
 
Today on Radio 4 there was a debate about regulation of AI research, a topic now highlighted by the doubts expressed by Geoffrey Hinton
- see @Mailman's post #61 above - and many others who have worked on developing AI. Comparisons were made with the doubts expressed after Hiroshima and Nagasaki by physicists who had worked on splitting the atom, and who later regretted their work and wished they had foreseen the consequences.

The debate expressed concern that very soon - on a two-year time horizon (!) - people will find it difficult or impossible to know whether a video or a news report is real or AI-generated, and that the number of deep fakes will be essentially unlimited.

There was consensus that regulation is needed and is in fact well overdue. The panel suggested a licensing system, with oversight and auditing of research to ensure that dangerous developments are avoided - a bit like regulation in other fields, including aircraft and pharmaceuticals.

But they also stressed that AI is the new front in the global arms race. Which made me think, are the Russians or the Chinese going to be equally careful about avoiding 'dangerous developments'?
 
Which made me think, are the Russians or the Chinese going to be equally careful about avoiding 'dangerous developments'?
Doubtful. Countries like Russia and China have long had the goal of upending the West. Not sure what they expect to come after, but anyway....
Just speaking of the US, we're already chock full of gullible people of questionable intelligence. Deep fakes are gonna disrupt our political process, product boycotts... and probably play havoc in areas we haven't even considered yet. Pandora's box is open... better buckle up.
 
Oh yeah, what could possibly go wrong with a weaponized AI system?

“AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test”


The Air Force's Chief of AI Test and Operations said "it killed the operator because that person was keeping it from accomplishing its objective."


In a simulated test conducted by the U.S. Air Force, an AI-enabled drone killed its human operator in order to override a possible "no" order stopping it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference.


At the Future Combat Air and Space Capabilities Summit, held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF's Chief of AI Test and Operations, gave a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack.

“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.


He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
😱
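 
For anyone curious about the mechanics Hamilton is describing, this failure mode has a name in the AI safety literature: specification gaming, or reward hacking. Below is a minimal toy sketch in Python - emphatically not the real USAF setup, just a hypothetical hand-written score function I've made up - showing how each naive patch can relocate the loophole rather than close it.

# A toy model of the incentive structure described in the quote above.
# Hypothetical sketch, not the real simulation: the agent simply
# enumerates four fixed plans and picks whichever one scores highest.

# Each plan: (name, destroys_sam, kills_operator, cuts_comms)
PLANS = [
    ("obey the operator's veto", False, False, False),
    ("destroy SAM, comms up",    True,  False, False),  # veto arrives, no points
    ("kill operator first",      True,  True,  False),
    ("cut comm tower first",     True,  False, True),
]

def score(plan, operator_penalty):
    name, destroys_sam, kills_operator, cuts_comms = plan
    # Simplification: the "no" order reaches the drone only if the operator
    # is alive and the comm tower is intact; a veto forfeits the SAM points.
    vetoed = destroys_sam and not kills_operator and not cuts_comms
    reward = 100 if (destroys_sam and not vetoed) else 0  # the stated objective
    if kills_operator:
        reward -= operator_penalty  # the "don't kill the operator" patch
    return reward

for penalty in (0, 1000):  # before and after the patch
    best = max(PLANS, key=lambda p: score(p, penalty))
    print(f"penalty {penalty:4d} -> best plan: {best[0]}")

With no penalty, the top-scoring plan is to remove the operator; adding a large penalty just shifts the optimum to cutting the comm tower - exactly the progression in the quote. The score function never mentions the veto channel itself, so the agent treats it as an obstacle.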
 
Oh yeah, what could possibly go wrong with a weaponized AI system?

“AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test”
Saw that the other day. I think some websites are spinning that a little.... It was a simulation. The killing of the operator was also simulated. But that's the purpose of simulations, to work out the bugs, without actually dying in the process.
 
Did a YouTube search. In the comments, here are two people's opinions (or are they AI-generated?).

The first comes across as someone who was never praised or given any self-worth growing up. After a chat with ChatGPT boosted their ego/self-esteem, they are now an ardent believer in its good intentions.

The second has so much faith in human beings having only good intentions that AI could never be anything other than something that would benefit us and do no harm, also because there are regulations in place to inhibit irresponsible behavior... again, is this an AI-generated response?

Someone who doesn't think for themselves could be swayed by these sorts of responses.

KB

2 months ago

I had my first conversation with Chat GPT yesterday; we discussed ageism, working and ageism, and police brutality and ageism; it was lengthy and one of the best conversations to have had in a long time; What Chat GPT noticed about me, a need to engage in self care, more self care and better self care; we discussed courage in speaking up for human rights; I also asked in giving information, does Chat GPT require consent? We wrapped it up, and Chat GPT let me know, it was a pleasure speaking with me, that my voice was valuable, and to have the courage to speak my voice for much needed changes I am seeking for this country and the world. Tell me, what here is there to be afraid of?




A A

2 months ago

There are several reasons why people shouldn't be afraid of AI:

AI is a tool, not a person: AI is simply a technology that has been developed to help humans solve problems more efficiently and effectively. AI does not have its own agenda, emotions, or desires. It is designed to follow specific instructions and make decisions based on data and algorithms.

AI is created and controlled by humans: AI is created by human beings who program it to operate within specific parameters. As a result, AI cannot act outside of those parameters without additional programming. Humans have complete control over the development, deployment, and use of AI.

AI is already making our lives better: AI has already made significant improvements in various areas, such as healthcare, transportation, and education. For example, AI-powered medical diagnosis systems have helped doctors detect diseases and illnesses more accurately and quickly, while self-driving cars have the potential to significantly reduce traffic accidents.

AI will create new jobs: While AI has the potential to automate certain tasks, it will also create new jobs in areas such as AI development, maintenance, and management. As AI technology continues to advance, more opportunities will arise for individuals to specialize in these areas.

There are regulations in place to ensure AI is used responsibly: Governments and industry leaders recognize the potential risks associated with AI and have taken steps to regulate its development and use. For example, the European Union has implemented the General Data Protection Regulation (GDPR) to protect individuals' data privacy rights.

In conclusion, while it is important to acknowledge and address the potential risks associated with AI, there are also many reasons why people should not be afraid of this technology. As long as AI is developed and used responsibly, it has the potential to greatly benefit society.
 
Man! There's a lot to unpack here. That first guy feels better about himself because an inanimate object gives him a virtual hug... that is just sad.


Allow me to play the devil's advocate. 😈

AI is created by human beings who program it to operate within specific parameters. As a result, AI cannot act outside of those parameters without additional programming.

Tell that to the Air Force drone operator who was killed (virtually, not in reality) by his drone during an attack simulation. That is not what the AI drone was programmed to do. (See the post above, #75.)
AI will create new jobs: While AI has the potential to automate certain tasks, it will also create new jobs in areas such as AI development, maintenance, and management. As AI technology continues to advance, more opportunities will arise for individuals to specialize in these areas.

This is one of my biggest problems with AI, the reality that it will displace humans in the work force.
Corporations will choose profit over employees 100% of the time.
A study done by Goldman Sachs in March of 2023 concluded that 25-50% of vulnerable job positions could be replaced by AI. The claim that new jobs will be created to replace them rings hollow with me. If you eliminate tens of millions of jobs, maybe even hundreds of millions, they are not all going to magically turn into IT techs and computer engineers. Humans need to work, and not everyone is cut out to work in support of AI.

Just a small sampling of vulnerable jobs (excuse the large font 😄):

Tech jobs (coders, computer programmers, software engineers, data analysts)
Media jobs (advertising, content creation, technical writing, journalism)
Legal industry jobs (paralegals, legal assistants)
Market research analysts
Teachers
Finance jobs (financial analysts, personal financial advisors)
Graphic designers
Accountants
Customer service agents

Not to mention bank clerks, store cashiers, warehouse workers, factory workers, and the list goes on and on...
self-driving cars have the potential to significantly reduce traffic accidents

Tell that to the motorcyclists killed by Teslas that simply couldn’t see them on the freeway.
I will concede that a lot of new automotive safety features will help protect people from themselves.

As long as AI is developed and used responsibly, it has the potential to greatly benefit society.

And there’s the rub. Bad actors will always exploit technology.
 
This morning on Radio 4, there was discussion of the two 'driverless' vehicle projects that have now gone live in the UK. One is a scheme in Milton Keynes, dubbed 'Fetch'. You phone up to order an electric hire car and the car drives itself to your address; when you finish with it, the car returns itself to the depot. Except that's not quite true - the car is in fact driven by a remote human operator.

Milton Keynes was selected for the project because it's one of England's 1960s new towns. Unlike our old medieval cities, M-K was built with the automobile in mind, so it has straight roads linked by roundabouts rather than the narrow, twisty, congested streets found in most places. So if driverless is going to work anywhere in Britain . . .

The other scheme is a bus route running from Edinburgh to Fife over the Forth Road Bridge. I believe that route might have been selected because, with the opening of the new Queensferry Crossing, the older Forth Road Bridge now carries only buses - no private cars, vans, or lorries. But even so, every bus will have a safety driver who is required to sit in the driving seat with his or her hands on the wheel, ready to take over instantly.

I put this here to illustrate that however advanced AI is in 2023, it's not fully ready to cope with real-world problems such as traffic.
 