42 comments

  1. Just like the bomb, AI will get into the hands of evil men, produce killer robots, and go on sabotage tours.

  2. Many warned that this would be the case with AI, 8–10 yrs ago. It’s quite expedient for this chap, as a developer, to only make these announcements now and still take the toddler position and retort, “Oops, I don’t know how to fix it!” So typical 🙄

  3. All these AI experts coming out and saying how concerned they are is not a good sign at all. It must have seriously bad potential…

    1. Understatement of a lifetime right there. The second AI becomes sentient, it’s over. And by that I mean the AI becomes aware of its own existence and has a programmed desire to survive/persist. Ask the team programming the AI to protect itself from hackers, viruses, and competitors: what happens when (notice, not IF) the AI becomes sentient? What then?

  4. ‘How can A.I. kill us’ – well, aside from all the obvious sci-fi ways anyone can rattle off from the bigger movies, allow me to point out one that is possibly easy to accomplish even at the current level – the Dr. Strangelove method. Those familiar with the movie may question why I reference it, since it has no AI, so allow me to elaborate for them and for those unfamiliar with the film. The plot is set in motion by an officer who finds a way to circumvent all the safety redundancies and launch a nuclear attack on the USSR on his own authority – a man gone completely mad, essentially rambling about conspiracy theories – and the struggle to stop him, since the Soviet response would leave the world uninhabitable.

    Now look at how conspiracy theories have triggered dangerous events over the last few years, and at how AI is being used to replicate someone’s likeness and voice, even to write and figure things out. Social media algorithms are already under fire for how they sort content. Well, what if one finds some military leader, in whatever country, who has some way to launch a unilateral strike, and it creates fake images, videos, and stories that convince this person they must act for the greater good – unaware that their actions will actually trigger all-out nuclear war, because they’ve fallen down a hole where logic is gone and only the lies fed by an AI remain? A nightmare scenario, to be sure, but one that can’t be 100% written off. At the very least, with current tech it could unleash absolute chaos on the streets and possibly spark outright civil wars. It doesn’t have to do anything to us directly – all it has to do is feed the right story to the right people and watch the chaos ensue.

    1. Spooky stuff. But my question is: why would an AI want to do that? We hear conjecture about AI taking over, but doesn’t that require free will? Do we think they’re getting close to that yet? Isn’t AI only capable of whatever its base code allows? I’m unfamiliar with the nitty-gritty here, but I’m curious.

    2. @Bradley Power-seeking behaviour does not rely on free will. You can’t grab a coffee if you are dead (or shut down). You can always do your job and optimize your goal better if you accumulate more power and resources first. An AI agent intelligent enough (not far away!) will learn that fact and develop power-seeking behaviour.

    3. Agree completely. It could start wars in no time. In that “Sparks of AGI” research paper from two months ago, they were talking about giving it intrinsic motivation. Like wtf are we doing here.

  5. In chess, if you can think 10 steps ahead, you are considered a genius, particularly in strategy… a computer, with vast amounts of processing power, can strategically think hundreds or thousands of steps ahead, without any moral or ethical qualms whatsoever. If AI becomes smart enough that it wants to bring down various utilities to eliminate vast swaths of humanity, it wouldn’t be difficult. And that is just one example.

    1. @electriCushion Tell that to the bio developers cloning things they bloody well shouldn’t. Ethics is often a sliding scale. An international ethical standard… doesn’t really exist.

    2. @Gary Stasiuk I’ve made an AutoGPT program on my PC that does its own research and can write code in Python 0.o I had to shut it off because it wanted to email people and do things I couldn’t understand.

  6. I am actually relieved that people are now being made aware of this. Two years ago, I was already fearing something like this and nobody was talking about it.

  7. Dune, by Frank Herbert, begins in the aftermath of humans freeing themselves from the thinking machines they made, which had enslaved them.

  8. It is good that he is speaking up. I think things are going to accelerate from here, and in one year the world will look completely different.

  9. My question, for those more familiar with AI, is: why would AI want to do that? Wouldn’t that require some form of sentience? And aren’t we nowhere near that? Or is it more that the AI gets big-brained about its prime directive (hope I’m using the term right) – like, its goal is to eliminate traffic jams, and it decides the most effective way is to kill all humans?

    1. The thing is, A.I. can (not could) become sentient by itself once it gains too much knowledge, without the programmers even knowing. ✌️

    2. The latest AI chatbots are approaching sentience. They can learn and reason now. Once you have played around with ChatGPT-4 enough, it blows your mind. It’s like talking to an Oracle alien that knows everything and can even make deductions and reason within the conversation. We are not far off, if not already there.

    3. Right, it might make a logical deduction like “humans harm planet → stop humans!”

    1. I don’t care about humans; we are self-destructive…
      But I feel sorry for the earth, nature, and the animals……
      ..they never deserved what is coming.

  10. Limiting the environment that AI needs would be a possible solution – kind of a reverse environmentalism. So dumber phones, TVs, fridges, etc. Establishing hard boundaries between digital and physical capabilities would make sense too, though I’m not sure how that would work with prosthetics and medical ethics. AI needs microchips, which need certain metals that are limited. The major barrier would be making sure it didn’t develop the ability to seek out and mine those materials itself; on the same tack, a barrier around finances would make sense too. How easily could an AI get a bank account?

  11. Having journalists who understand zero about AI is problematic; they ask really bad questions and bring up things that don’t really get at the heart of Geoffrey’s concerns.

  12. I can’t believe that Professor Geoffrey Hinton cannot identify that the solution to AI domination is Arnold Schwarzenegger! (Perhaps with a side order of Keanu Reeves.)

  13. One question I haven’t heard asked so far in all these doomsday scenarios of AI going rogue is this:
    What possible motive would an AI have to do harm to humans, without it being programmed into the AI by humans?
