Nefarious actors, Predictive AI, and Injustice

[Special tags: possible spoilers & NSFW topics]

Thursday, August 28, 2019

Article Part I: Minority Report
In the movie Minority Report, a special police unit works with three psychics, specially enhanced humans who can foresee visions of criminal acts, to predict future behavior and stop crimes before they happen. The police sweep in, make the bust, and the would-be criminals are hauled off and imprisoned in a “virtual reality” cell. SPOILER ALERT below if you’ve never seen the movie.

It is revealed that these psychics sometimes also envision an alternate future, meaning that time marches on but there are multiple time streams and, in effect, multiple realities: one is not predestined to fail or to commit a crime. Even someone who is aware of their supposed destiny has the power to deviate and choose an alternate future. Think of Charles Dickens’s Ebenezer Scrooge and the Ghost of Christmas Future. Think of Oedipus Rex, King Lear, and Othello: all tragic figures, tragically flawed. Even Macbeth or Lady Macbeth has the power to pick a destiny and does not have to choose evil or a bad outcome.

Article Part II: Injustice
Secondly, consider the comics phenomenon, which we of course pay attention to because it’s unavoidable these days, with DC and Marvel competing at full throttle and even Amazon getting into the mix with The Boys. Everyone has a superhero they root for, because inside we want someone to look up to, emulate, and stand in awe of; but also, as in The Boys, we have a Lex Luthor moment or a Billy Butcher moment where we like to see them fall. It’s probably no secret that we love seeing superhumans crash into buildings, break stuff, make huge frickin’ catastrophic messes, and have to figure out how to deal with it. It strokes our ego and gives us storytelling to pass the time, and we’ve been doing it since the beginning of time, from Hercules and mythology to Journey to the West and all the various superheroes, Mega Men, Astro Boys, and even He-Man. But the old saying is that the bigger they are, the harder they fall. And all empires eventually crumble with time; even Rome did.

Article Part III: Linking it all together to AI
So, what does this have to do with our article title?
Well, we read an article entitled “AI-powered cyber-attacks will make fighting hackers even harder” by Danny Palmer, published on December 14, 2016. We read about a massive data breach that affected at least 22 million federal employees, in an article dated July 9, 2015 by Mike Levine and Jack Date of ABC News. And we saw an article about an annual fiscal report describing deficiencies in cybersecurity across parts of the administration, including contractors, law enforcement, the financial sector, public policy makers, and taxation.

It doesn’t take a genius, or much reading between the lines, to see that as artificial intelligence moves into everything, it will also be used to predict employee behavior and shape other areas of our lives. You’ve seen heuristics in antivirus software, which is a predictive technology. Now imagine this on a larger, more spread-out scale. If a network program is used to monitor employee behavior and watch for employees doing bad things on their computers, the same idea can be extrapolated in both good and bad directions. On the one hand, artificial intelligence can prevent the accidental release of data or consciously intentional data leaks. You’ve seen plenty of that in the news: data leaks and other information published for political ends or to embarrass certain parties. So in one sense this can be good.
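To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of heuristic we mean: an employee’s outbound data volume is compared against their own history and flagged when it deviates sharply. The data, threshold, and function names below are ours, for illustration only; real monitoring products are far more elaborate, but the point stands that a program is making the “suspicious or not” call.

```python
# A minimal sketch (not any vendor's product) of predictive behavior
# monitoring: flag an employee-day whose outbound data volume deviates
# sharply from that employee's own history.
# All names, numbers, and the threshold are hypothetical.

from statistics import mean, stdev

# Hypothetical history: megabytes uploaded per day by one employee.
history_mb = [12, 9, 15, 11, 14, 10, 13, 12, 16, 11]

def is_anomalous(today_mb, history, z_threshold=3.0):
    """Return True if today's volume sits more than z_threshold standard
    deviations above the historical mean -- a crude predictive heuristic."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today_mb > mu  # degenerate case: any increase looks "unusual"
    z = (today_mb - mu) / sigma
    return z > z_threshold

print(is_anomalous(14, history_mb))    # False: within normal variation
print(is_anomalous(900, history_mb))   # True: looks like bulk exfiltration
```

Notice the design choice: the judgment call is a single threshold number, and everything this article worries about below, false accusations with no context and no appeal, hides inside it.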

But let’s also take the flip side: the Injustice version, the Minority Report version, the hacked Google car version. There was a case in the news involving a hardware failure, in which an Asian man was convicted of manslaughter in 2006 over “unintended acceleration”; if you do a web search on “Toyota car crash unintended acceleration murder trial” you will find this case about a 1996 Toyota car crash. There were later many recalls over unintended acceleration and mechanical failures involving similar cars, and the man was eventually set free because of this new information. It is an old news story that you may still find online.

What does this all mean? Quite simply, that we place too much faith in our devices and our phones and are overly reliant on our technology. Elon Musk was in the news at the end of 2017 warning about the chilling effects (scratch that term)… the potentially dangerous fallout or repercussions that could follow if we rely too heavily on robots, artificial intelligence, and our technology. While 2017 was a banner year for technology, with Boston Dynamics everywhere, references to the future in Rick and Morty, and even the first robot granted citizenship, this article is more about relying on technology to get things right and make moral judgments that are still subject to flaws in programming. See RoboCop and other “Judgment Day” devices.

If you’re not familiar with Injustice, a distraught Superman is taunted by the Joker to the point that he goes rogue, snaps, and crosses the line, becoming a despot, disillusioned and a malcontent. Injustice, basically, is what happens when something that powerful and godlike goes rogue. And as seen in Amazon’s The Boys, it can turn a normal guy’s day upside down. Thus are born the Lex Luthors, the Billy Butchers, and the Hughies (Jack Quaid’s character, whose girlfriend is struck by “A-Train”).

And thus we come to the scariest part of the article. This is NSFW, or not safe for work, if you’re squeamish. If you do a search for “Dailystar sex robot coding” you will see an article entitled “Sex robots with ‘coding errors’ prone to ‘violence and could strangle humans'” dated August 25, 2019 by David Rivers.
In the article you see a warning to us humans about relying on machines to the extreme, even in our most vulnerable moments of intimacy. Our robots are being made to drive Miss Daisy. They’re being made to entertain us. They’re being made to teach us. They’re being made to be our companions. As Ken Jennings of Jeopardy fame put it, “I for one welcome our new computer overlords,” after he was beaten by IBM’s Jeopardy-playing computer, Watson. The quote itself is often traced back to a 1977 adaptation of H.G. Wells’s short story Empire of the Ants and was popularized by the Kent Brockman character on The Simpsons.

Which brings us to the quote from Jurassic Park: “Your scientists were so preoccupied with whether or not they could do it, they never stopped to think if they should.” And that’s where we get the flip side of things, such as the RoboCop of 1987 and the RoboCop of 2014 (with Michael Keaton), where Alex Murphy is literally stuck in a hell on earth of metal and flesh.

One day, Will Smith’s I, Robot will arrive: self-driving cars, and the question of where moral judgment begins or ends. The concept of delegating to a group-accepted moral and ethical framework changes as society changes. (After all, marriage between people of different races, or of the same sex, used to be declared illegal; then the laws changed and flipped over time.) Who is to say what is good? Will Smith’s character struggles with guilt over a machine’s programmed directive to spare the many rather than save the few, a machine lacking the additional judgment to realize that an adult may be able to save himself. A program error.
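To see how coldly simple such a directive can be, here is a toy sketch (our own illustration, not anything from the film; the numbers are in the spirit of the scene) of a rule that does nothing but maximize survival probability:

```python
# A toy illustration of a rigid rescue directive: save whoever has the
# higher computed survival probability, and consider nothing else.

def choose_rescue(victims):
    """victims maps a name to an estimated survival probability if rescued.
    The rule simply maximizes expected survival."""
    return max(victims, key=victims.get)

# Hypothetical numbers: the adult detective has a better chance of
# surviving the rescue than the child does.
print(choose_rescue({"Spooner (adult)": 0.45, "Sarah (child)": 0.11}))
# -> "Spooner (adult)": the directive is satisfied, and the human instinct
#    of "save the child, the adult can fend for himself" never enters in.
```

The point is not the arithmetic; it is that whatever moral weighting society would want is simply absent unless someone thought to program it.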

Are we someday going to entrap an employee and say they committed a leak or a data breach? Are we going to end up with a precog world where thought crimes exist and you are busted, basically, by a program that says you did a crime or are going to commit one? Programmers will have to document thoroughly, build in flexibility for many scenarios, overrides, and “unthought of” situations, and account for them. Because while we may be smart, and robots and artificial intelligence may be getting close to approximating us, they will never truly cover the creativity, the madness, and the uniqueness of a human being, at least not at this stage of evolution.
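As a hypothetical illustration of that override idea (the names and policy below are ours, not any real system’s), one simple safeguard is to make the machine’s accusation nothing more than a case file that a human must confirm before anything punitive happens:

```python
# A rough sketch of a human-in-the-loop override: an automated flag is
# never the final word; it becomes a case a human must confirm or dismiss.
# The class, field names, and policy are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Flag:
    employee: str
    reason: str                      # what the model claims happened
    confidence: float                # model confidence, 0.0 - 1.0
    human_verdict: str = "pending"   # "pending", "confirmed", or "dismissed"
    notes: list = field(default_factory=list)

def act_on(flag):
    """Policy: the system may suggest, but only a confirmed human verdict
    triggers consequences; anything else stays an investigation at most."""
    if flag.human_verdict == "confirmed":
        return f"escalate case for {flag.employee}: {flag.reason}"
    if flag.human_verdict == "dismissed":
        return f"close case for {flag.employee}; record why the model was wrong"
    # Pending: even high-confidence flags only open an investigation.
    return f"open investigation for {flag.employee} (confidence {flag.confidence:.0%})"

flag = Flag(employee="j.doe", reason="bulk upload to personal drive", confidence=0.97)
print(act_on(flag))                  # still only an investigation
flag.human_verdict = "dismissed"
flag.notes.append("uploads were an approved off-site backup")
print(act_on(flag))                  # the human override wins
```

The deliberate design choice is small but telling: even a 97%-confident flag can, at most, open an investigation.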

Humans have the ability to get tired, to get cranky, to feel empathy for others, AND the ability to decide to show mercy, to moderate, to change course and change their minds. Will a machine, if shown enough evidence, do the same? Can a robot listen to reason and facts and pull a King Solomon, or will it simply… “do as it is told, no questions” and run with it? The thing unique about humans is that we can be “woke” and unpredictable. There’s a Batman vs. Terminator or RoboCop vs. Terminator video online where someone comments that the advantage the former has over the latter is the ability to adapt their own “mental programming” and override nurture and nature to adapt, evolve, and be resourceful enough to take advantage of the situation. It is the reason that when Batman or Rambo or some other character is caught with their back against the wall, they go into survival mode and use whatever is at their disposal to survive or come out on top.

Humans are no longer held hostage to fighting off animals, tigers, and bears. Our survival jungle is the business jungle at the office. And that’s why, if we are to get it right and have our machines evolve to become like us and be a “cherished” member of the household like Asimo, Aibo, or Qrio, we have to remember that humans aren’t perfect and neither are robots: they are subject to the influences of humans, their habits and indoctrination, any pre-existing flaws… and also disorders or flawed thinking. We cannot have robots and software and AI be our sole thought police; we must rely on something more. This is a question that will take a lot of dialogue and discussion, but it’s an important one to have.

[Our site tries to interpret events as they unfold and offer views. They are our own.]

Author: savvywealthmedia
