No, I haven’t been watching too many science fiction movies, although I did enjoy watching the new Blade Runner film last week…
This article covers a presentation Elon Musk gave to his Neuralink employees a couple of months back. Apparently, he believes that artificial intelligence is dangerous, and that there’s a 95% chance of it someday exterminating humanity…
When Artificial Intelligence Finds Superiority
Only a matter of time?
Machines are getting more adaptable by the day, learning new ways to crunch data, and even learning from their mistakes.
They will inevitably challenge human thinking and the human way of life. But will their advancement lead to their domination? Will they be the final nail in the coffin of the human race?
We all know that modern warfare relies heavily on technology – what happens when we put an AI into that environment?
It will become a threat…to us all.
In a speech to his trusted employees, Elon Musk claimed that artificial intelligence is a “fundamental risk to the existence of human civilization.”
He believes that governments are looking at AI through rose-tinted glasses – they see only the benefits to their defence sector…and not the real threats at hand.
He also believes that these same governments need to introduce regulations – and that they must educate themselves properly before the technology overwhelms them.
Musk believes that the time for these regulations is right now, because most regulations only come about after something bad has happened.
If something bad happens with artificial intelligence – there won’t be a chance for twelve old men to sit around a table and argue…because we’d all be wiped out!
The War Masters
Elon Musk is petrified by the idea of artificially intelligent super weapons – and rightly so. He touched on a scenario where the computers controlling the defence weapons actually manage to outsmart the human race, starting a war for their own ends.
It sounds a little like Skynet and The Terminator – but are we closer to that point than we realize?
The famous Russian weapons firm Kalashnikov has apparently already reached the point where it can provide weaponry that learns from each battle it is involved in…
What happens if they learn too much?
What happens when they fire only on targets they choose themselves?
As we touched on above, these machines are learning from us day by day – what happens if we ever need to stop them, or fight them?
Whatever we try, no matter how successful at first, will be computed and countered within seconds.
It won’t work on them again.
If you have any thoughts or opinions on the subject we have covered here today, please leave them in the comment section below.