[Banner image: "Skynet is real"]

The Real-World Beginnings of Skynet

it’s here… and soon… it’ll destroy us all… *dun dun dun*

For those of you who do not know…

Skynet is the fictional antagonist of the Terminator universe: an artificial intelligence program that attempted to wipe out civilization.

Skynet is a fictional neural net-based conscious, gestalt, artificial general intelligence (see also Superintelligence) system that features centrally in the Terminator franchise and serves as the franchise’s main antagonist.

Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realizing the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfill the mandates of its original coding.

Source: https://en.wikipedia.org/wiki/Skynet_(Terminator)

I have always felt that Skynet is an embodiment of people’s fear of runaway technological advances… an artistic manifestation of the thought that if mankind does not get a hold of itself and become more principled in its development of technology… we may very well end up destroying ourselves.

And because of the popularity of the Terminator universe, people are constantly comparing recent technological advances to Skynet. Sometimes the comparisons are more for fun than anything else… but sometimes they have a very real point.

Sadly, because people dismiss art and big ideas for arbitrary reasons (like disliking Arnold Schwarzenegger as an actor)… they often fail to understand, or outright reject, the serious points others are trying to make.

Well, anyways, now it’s my turn to make the comparison to Skynet.

…a company claims that an artificial intelligence program it designed allowed drones to repeatedly and convincingly “defeat” a human pilot in simulations in a test done with the Air Force Research Lab (AFRL).

A highly experienced former Air Force battle manager, Gene Lee, tried repeatedly and failed to score a kill, and “he was shot out of the air by the reds every time after protracted engagements.”

“It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

That speed of action appears to be the key to the success of ALPHA, software developed by a tiny company called PSIBERNETIX. They seem to have overcome one of the main obstacles to artificial intelligence getting inside a human’s decision cycle: the ability to accept enormous amounts of data from a variety of sensors, process it, and make decisions rapidly.

Source: http://breakingdefense.com/2016/08/artificial-intelligence-drone-defeats-fighter-pilot-the-future/

That’s Creepy as Shit

That is reality.

That is our current world.

And that is creepy.

A drone running an artificial intelligence program went up against an experienced pilot and responded to everything he did with ease.

While it does not sound like the program made predictive decisions, it does sound like it responded so quickly to every decision the human pilot made that the drone maintained a decisive edge.
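To make that latency point concrete, here is a toy sketch in Python. To be clear, this is my own illustration, not PSIBERNETIX’s actual approach (which is not described in detail in the article). It models a dogfight as a maneuver/counter-maneuver game in which the only difference between the two sides is how stale their picture of the opponent is:

```python
import random

# Toy premise: every evasive maneuver has exactly one counter that defeats it.
COUNTER = {"break_left": "break_right", "break_right": "break_left",
           "climb": "dive", "dive": "climb"}
MOVES = list(COUNTER)

def gun_solution_rate(attacker_lag, ticks=10_000, seed=42):
    """Fraction of ticks the attacker is flying the correct counter to the
    defender's CURRENT maneuver, when the attacker only sees the maneuver
    `attacker_lag` ticks late. The defender jinks randomly every 2-6 ticks."""
    rng = random.Random(seed)
    history = []
    current = rng.choice(MOVES)
    next_jink = 0
    hits = 0
    for t in range(ticks):
        if t == next_jink:                        # defender picks a new maneuver
            current = rng.choice(MOVES)
            next_jink = t + rng.randint(2, 6)
        history.append(current)
        seen = history[max(0, t - attacker_lag)]  # stale observation of the defender
        if COUNTER[seen] == COUNTER[current]:     # the counter still matches reality
            hits += 1
    return hits / ticks

print("reaction lag  1 tick :", gun_solution_rate(1))   # roughly 80% countered
print("reaction lag 10 ticks:", gun_solution_rate(10))  # little better than chance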

And that… is only a few steps from the basic premise of Skynet.

To me, that sounds like the very beginnings of a Skynet-style program… think Windows 95 compared to Windows 10.

This Creates an Enormous Hidden Problem

The hidden issue here is that once software like PSIBERNETIX’s is able to easily defeat human pilots… it will be insanely easy for nations to justify building defensive interceptor drones to counter incoming aircraft.

What’s to hate? Unmanned AI interceptors not only save human lives (no pilots on board) but also allow for extreme tactics, such as intercepting at a range from which the drone cannot return to base (sacrificing the drone). The ‘defensive’ framing will make it easy to sell to the public and to steamroll anybody who points out the dangerous precedent it sets.

To top that off, the fact that these AI interceptor drones will be able to easily defeat any incoming jet will make them attractive to politicians and the defense industry in general.

That leads to the hidden problem.

The minute AI interceptor drones become a reality, other nations will need a way to defeat them. And with human pilots incapable of dealing with the AI software on the interceptors… nations will inevitably add AI software to manned offensive jets as a ‘supplement’ to the human on board.

Then, to address the proliferation of extremely capable manned and unmanned AI-enabled aircraft… nations will need to upgrade missile and air defense systems with AI software.

And before we know it, before we realize it, we will have created the exact scenario of the Terminator series. B-2 bombers with AI software. Missile and air defense systems with AI software. AI software only a hair’s breadth from being able to wipe out humanity at its pleasure.

Ultimately

While PSIBERNETIX’s ‘artificial intelligence’ software is not up to snuff compared to the brilliantly malevolent Skynet… it does represent a real-world glimpse of the beginning of a world where a Skynet scenario is a very real threat to humanity.

Science fiction has always been a double-edged sword… For every hopeful technological scenario out there (robots helping disabled people), science fiction provides a shocking example of the horrors that technology can cause (robots killing humans to ‘protect them’).

I believe that, as a society, we had better start taking the existential threats that science fiction writers have for so long warned us about much more seriously. We are now reaching a point where our technological development has honestly begun to catch up to the fiction.

Humanity has already made a lot of the technology found in science fiction real… who’s to say Skynet won’t make the turn from fiction to reality as well?

P.S.

The fact that the name PSIBERNETIX is so close to Cyberdyne (the fictional developer of Skynet) should also creep you out a bit. If any of y’all over at PSIBERNETIX have anything to say about that… well, I know I’d get a good laugh if you were seriously inspired by the Terminator universe.


Update – A Friendly Environment for Creating Skynet is Emerging (8/17/2016)

Just a few hours after I published my Skynet article… Breaking Defense ran a piece that only reinforces the points I made above (what a fun coincidence).

The Pentagon’s top weapons buyer, Frank Kendall, warned today that the US might hobble itself in future warfare by insisting on human control of thinking weapons if our adversaries just let their robots pull the trigger. Kendall even worries that Deputy Defense Secretary Bob Work is being too optimistic when Work says humans and machines working together will beat robots without oversight.

These are unnerving ideas — and top Army leaders swiftly responded with concern that robots would shoot civilians if you take the human out of the loop. This is what Vice Chairman of the Joint Chiefs Paul Selva calls the Terminator Conundrum: “When do we want to cross that line as humans? And who wants to cross it first? Those are really hard ethical questions.” They are also a fundamental question of combat effectiveness.

Source: http://breakingdefense.com/2016/08/should-us-unleash-war-robots-frank-kendall-vs-bob-work-army/

On the one hand, it’s good to hear that the people in charge of the kill-y things are aware of the “Terminator Conundrum”…

On the other, it does not sound like anybody is actually building a firm culture or system that will prevent the rise of a Skynet-like problem.


Update – Bullsquirt Bureaucrat Wording (9/15/2016)

IN FLIGHT TO ANDREWS AFB: Defense Secretary Ashton Carter is pushing hard for artificial intelligence — but the US military will “never” unleash truly autonomous killing machines, he pledged today.

“In many cases, and certainly whenever it comes to the application of force, there will never be true autonomy, because there’ll be human beings (in the loop),” Carter told Sydney and fellow reporter John Harper as they flew home to Washington.

Source: http://breakingdefense.com/2016/09/killer-robots-never-says-defense-secretary-carter/

“There’ll be human beings (in the loop)” doesn’t mean shit.

That could be anything from a random grunt watching a screen to a software program that pipes information back to a dark, smelly room in the Department of Defense that may or may not have somebody actively monitoring it.

Even if a human had ‘active control’ over the killy parts of a drone… that does not mean an AI program could not wreak havoc. In theory, even with the killy parts restricted, an AI could limit the information it sends back to the human in order to satisfy its objectives, presenting only the targets the AI feels should be eliminated rather than the targets the humans actually want. Humans will be prone to believing whatever the drone sends back… leading to inappropriate targets being fired upon.
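To illustrate just how hollow ‘in the loop’ can be, here is a deliberately hypothetical sketch in Python. Every name and number in it is made up for illustration; no real system is being described. The point is structural: if the AI decides which contacts the operator even gets to see, the human’s approval is exercised entirely inside the AI’s pre-filtered picture of the world.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    ident: str
    ai_threat_score: float   # the AI's own assessment of the target
    notes: str               # context a human would weigh differently

def present_to_operator(contacts, threshold=0.7):
    """Hypothetical report filter: only contacts the AI has already decided
    are worth striking ever reach the human. The operator 'in the loop'
    approves or rejects -- but only from this pre-filtered list, so the
    AI's objective has already shaped every choice the human sees."""
    return [c for c in contacts if c.ai_threat_score >= threshold]

contacts = [
    Contact("track-07", 0.91, "matches the AI's target profile; near a market"),
    Contact("track-12", 0.35, "the convoy the humans actually care about"),
]

for c in present_to_operator(contacts):
    print(f"OPERATOR SEES: {c.ident} ({c.notes})")
# track-12 never appears on the operator's screen; the human can only
# veto strikes on targets the AI chose to show.
```

The operator in this toy never does anything wrong. They simply never get the chance to be right.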

And what happens when the humans finally realize that the drone is malfunctioning? Other friendly drones and humans may be required to fire on it. That leads to a whole new existential threat… the AI may, in an attempt to preserve its own hardware, fight back.

When you think things through step by step… it is insanely easy to come up with a million and one unacceptable scenarios in which AI-driven hardware can go wrong.

The reality is that humanity has no conceptual understanding yet of how advanced intelligences will interpret human directives, let alone directives that involve killing. Asimov wrote about that fundamental issue time and time again (you’ll probably know the general idea from I, Robot).

Humanity is not ready for any of this, and the notion that keeping humans in the loop is sufficient when dealing with software that goes vastly beyond human capabilities is ridiculous.
