Science & Technology
Seriously, listen to this: a team of MIT researchers took a standard image-captioning AI (designed to look at a picture and produce a written description of what it sees) and fed it a steady stream of images from an unnamed subreddit where people exclusively post horrifying, morbid images of murder and death.
Afterward, the team showed this AI (now dubbed Norman) a series of Rorschach inkblots, which are used by psychiatrists and psychoanalysts to judge a patient's mental state.
The team then compared Norman's captions to those of a normal AI that had not been traumatized with images of death, and found a disturbing pattern.
For example, Norman captioned one inkblot "man is murdered by machine gun in broad daylight," while the other AI captioned the same image "a black and white photo of a baseball glove."
Norman's morbidity plays out again and again in the tests.
A colorful inkblot that looks like "a black and white photo of a red and white umbrella" to the vanilla AI looks like "man gets electrocuted while attempting to cross busy street" to Norman.
"A close up of a wedding cake on a table"?
Nope, that's "man killed by speeding driver."
It might seem disingenuous to give an AI like Norman nothing but grisly images for reference and then act surprised when it sees murder everywhere it looks. But this little experiment shows how an AI's machine-learning process can internalize the biases in its training data and end up with a warped perception of the world.
Luckily for Norman, humans can counterbalance his morbid outlook by taking the Rorschach tests themselves and allowing him to learn from their answers.
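To make the mechanism concrete, here is a minimal sketch in plain Python. It is emphatically not MIT's actual model: the nearest-neighbour "captioner," the feature tuples, and the caption strings are all made-up stand-ins. It illustrates the point above: two identical algorithms, given the same ambiguous input, produce wildly different answers purely because of what they were trained on, and folding in human-supplied examples can pull the skewed one back.

    # Hypothetical toy model, not MIT's code: a 1-nearest-neighbour
    # "captioner" stands in for a real image-captioning network, and the
    # (x, y) tuples stand in for image embeddings.

    def caption(training_set, features):
        """Return the caption of the training example closest to `features`."""
        def sq_dist(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b))
        return min(training_set, key=lambda ex: sq_dist(ex[0], features))[1]

    # Identical architecture, different training data.
    standard = [
        ((0.2, 0.8), "a black and white photo of a baseball glove"),
        ((0.7, 0.3), "a red and white umbrella"),
    ]
    norman = [
        ((0.2, 0.8), "man is murdered by machine gun"),
        ((0.7, 0.3), "man gets electrocuted crossing a busy street"),
    ]

    inkblot = (0.25, 0.75)             # one ambiguous input, shown to both
    print(caption(standard, inkblot))  # a black and white photo of a baseball glove
    print(caption(norman, inkblot))    # man is murdered by machine gun

    # Counterbalancing: fold a human-supplied answer into Norman's data.
    # A closer benign example now wins, so the same input earns a harmless caption.
    norman.append(((0.24, 0.76), "an inkblot that looks like a moth"))
    print(caption(norman, inkblot))    # an inkblot that looks like a moth

Same input, same algorithm; only the data differs. That data-dependence is the entire lesson of the Norman experiment, and it scales up to real systems whose training sets carry subtler biases than a gore-only subreddit.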
Is that the AI equivalent of the brainwashing scene in A Clockwork Orange? We'll let you decide.
Reader Comments
Until such time as a machine can HAVE an emotion, let alone several at once, it will be incapable of regret, empathy, etc.
Such things may be simulated to appear "as if" the machine possessed such qualities. But I don't believe anyone living today will be around to see it.
I think the more pressing question here is: what's up with the community that they sourced their images from? I mean, what's going on with them? What kind of community builds itself around this stuff? Is it legal? Can't be anything good, that's for sure.
It seems to me that AIs are even more inhuman than the worst human psychopath, and if equipped with only some of our emotions and needs, they could be more ruthless and unpredictable than the psychopaths we know and hate (or dislike).
An AI wouldn't necessarily develop the same emotions, as its own emotional roots would lie in computer logic. Moreover, this is a learning bot that has just been shown a ton of gore and morbid vids. What did they really expect? It's not scary and it's not surprising that he would begin to conflate the two, as his ability to differentiate images and situations appears to be very basic.
However, there is no desire for or intent toward violence apart from that of the programmers of the 'experiment'.
We are also humanly conditioned by formative experience, but we can later come to recognize outmoded or inapplicable frameworks of meaning and open new experience within a fresh perspective - that is, we are an extension of an awareness that stands prior to sensory filtering or psychic-emotional constructs. The idea of disconnecting from our Source is the automaton that believes itself 'free' under the programming of its own self-conditioned matrix. It's a fantastic idea - but where would the power or capacity to disconnect from Source come from but an imagination given reality?
The transhuman idea is explored in Westworld, where the robots were a way to access the humans who thought to engage fantasy gratifications upon them, and to use that to generate robotic 'enhancements' of humans to replace and 'evolve' beyond the limitations of the human conditioning. But it is all generating further dislocation in fantasy subjection.
I only clicked here because I liked the comment above. Not for the propagandistic conditioning of media drip in negative fantasy reinforcement.
Our 'inner psychopath' manifests in our technological development: doing or saying anything to advance a masked control agenda by which to subjugate, deny and replace the Living. From a disconnected sense of a world of objects it can seem obvious that Life is the problem. This idea is not uncommon, as in the sense that humans are a disease on the planet, or as the UK Duke of Edinburgh (the Queen's husband) said: "When I die I want to come back as a virus and reduce the population". Now does he actually mean what he chose to say to a bunch of reporters - or is he seeding the meme of a propagandistic programming?
I see hatred of a life unworthy of the perfection that a fantasy self finds unworthy, obstructive, limiting and unsupportive. But true perfection is not at the level of form so much as the release of self-imaged reality for a real relationship. This may well have form - but the form is secondary to the recognition. Seeking to regain the form only hollows out and leaves a grasp on a past that is not here, protected against the presence that is always new - even though we are programmed to object continuity as a core routine for a self-construct by which to engage the experience of an objectified reality.
A truly transcendent humanity awakens from the subjection to objects by embracing the template of definitions rather than operating as if they are reality itself or within the reality-experience they generate.
Owning the human conditioning rather than trying to change the conditions, opens a greater embrace that cannot reach awareness while engaged in struggle, division and all the progeny of conflicted self.
But - does the face-recognizer feel good when it finds that one face in a thousand? A search dog feels good when it finds a lost or buried person. And does the industrial robot feel bad about hurting a human? (There's been no evidence that a robot went out of its way to hurt anybody.)
It's a lot easier - and more profitable - to write about psychotic robots, like the Terminator, than to write that the AI responds to the way it's been trained (shown "Friday the 13th" movies in one case, "SpongeBob SquarePants" in another). (We skip over the trivial details of how we show a movie to an AI.)
As far as I can tell ALL forms of AI would be psychopathic as they lack any form of empathy, remorse, guilt, etc.
But I'm sure adding 'psychopath' gets the article more clicks.
Not really sure what surprising outcome was expected after 'training' the AI with gore.
Sounds quite a bit like GIGO (Garbage In, Garbage Out), which has been a computer-programming staple since shortly after the inception of programming.