When most people think about the potential risks of artificial intelligence and machine learning, their minds jump straight to "The Terminator" - a dystopian vision once articulated by Elon Musk, in which robots march down suburban streets, gunning down every human in their path.
But in reality, while AI does have the potential to sow chaos and discord, the way this is likely to happen is far more pedestrian, and far less cinematic, than a real-life "Skynet". The more immediate risk comes from AI systems that can fabricate images and videos - known in the industry as "deepfakes" - that are indistinguishable from the real thing.
Who could forget this video of President Obama? The speech never happened - it was produced with AI software - yet it is almost indistinguishable from genuine footage.
Well, in the latest glimpse of AI's capabilities in the not-so-distant future, a columnist at TechCrunch highlighted a study presented at a prominent industry conference back in 2017. In it, researchers explained how a Generative Adversarial Network - a type of machine-learning system in which two neural networks are trained against each other - quietly defied the intentions of its programmers. Instructed to convert aerial photographs into their corresponding street maps and back again, the agent learned to hide details of the original photographs inside the maps it generated as a nearly imperceptible signal, effectively "cheating" by reconstructing the photos from information no human reviewer could see.