Apparently, this system is currently being installed in Boston and has already been implemented in Chicago and Washington. If you live in one of these cities, I'd bet you've never heard of AISight, and more importantly, I'd bet there has been little to no public debate.
The most disturbing part of this platform is that the artificial intelligence itself defines what counts as "normal" behavior, and anything that falls outside of that narrow band can be flagged for "pre-crime" potential. Ultimately, if these systems are allowed to proliferate, they will condition humans to behave like zombie automatons, fearful that anything interesting or creative might be viewed as criminal.
The NYPD recently engaged in exactly this sort of behavior when it unlawfully arrested a street artist. Now imagine if a computer could do that work without any human involvement.
The entire sad incident was caught on video.
For more details on AISight, we turn to ITProPortal:
Imagine a major city completely covered by a video surveillance system designed to monitor every move of its citizens. Now imagine that the system is run by a fast-learning machine intelligence that's designed to spot crimes before they even happen. No, this isn't the dystopian dream of a cyber-punk science fiction author, or the writers of the TV show "Person of Interest". This is Boston, on the US East Coast, and it could soon be many more cities around the world.

Just what the world needs.
Behavioral Recognition Systems, Inc. (BRS Labs) is a software development company based out of a nondescript office block in Houston, Texas, with the motto: "New World. New security."
BRS Labs' AISight is different because it doesn't rely on a human programmer to tell it what behaviour is suspicious. It learns that all by itself.
The system enables a machine to monitor its environment and build up a detailed profile of what can be considered "normal" behaviour. The AI can then determine what kind of behaviour is abnormal, without human pre-programming.
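To make that claim a little more concrete: stripped of the marketing, "learns what is normal all by itself" is essentially unsupervised anomaly detection. The toy sketch below is mine, not BRS Labs'; the feature (people crossing a camera's view per minute), the thresholds, and everything else in it are invented purely for illustration.

```python
import math

class NormalBehaviorModel:
    """Toy anomaly detector: learns the typical range of one activity
    reading (say, people crossing a camera's view per minute) and flags
    readings far outside it. Purely illustrative -- this is not BRS Labs'
    actual algorithm, just the general idea of 'learn normal, flag the rest'."""

    def __init__(self, threshold_sigmas=3.0):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0                      # running sum of squared deviations (Welford)
        self.threshold_sigmas = threshold_sigmas

    def observe(self, value):
        """Fold one observation into the picture of 'normal'."""
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)

    def is_anomalous(self, value):
        """True only if the reading is far from everything seen so far."""
        if self.count < 30:                # too little history to judge
            return False
        std = math.sqrt(self.m2 / (self.count - 1))
        return abs(value - self.mean) > self.threshold_sigmas * max(std, 1e-9)


model = NormalBehaviorModel()
for reading in [12, 14, 11, 13, 15, 12, 14, 13] * 5:   # weeks of "normal" traffic
    model.observe(reading)

print(model.is_anomalous(13))   # False -- looks like everything before
print(model.is_anomalous(90))   # True  -- nothing like the learned baseline
```

The only point of the toy is that nobody wrote a rule saying "90 is suspicious"; it gets flagged simply because the model has never seen anything like it. Now scale that idea from one number to every person in front of every camera in a city.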
Oh, but wait, it gets even better...
What's more, AISight permanently learns and registers when changes in normal behaviour occur, so no ongoing programming is required from human operators. In order to do this, it employs a technology known as "artificial neural networks", which mimics the function of the human brain.

So fast the public won't have a chance for public debate!
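That "permanently learns... no ongoing programming" line is worth pausing on, because it means the definition of normal is itself a moving target. Again purely as an illustration (my own sketch, not a description of AISight's internals), here is the same toy idea with an exponentially decaying memory, so whatever people keep doing eventually becomes the new "normal":

```python
class DriftingBaseline:
    """Toy 'always learning' detector: an exponential moving average of
    mean and variance, so 'normal' quietly drifts with whatever it keeps
    seeing. Illustrative only -- not AISight's actual implementation."""

    def __init__(self, alpha=0.02, threshold_sigmas=3.0):
        self.alpha = alpha                 # how quickly the past is forgotten
        self.mean = None
        self.var = 1.0
        self.threshold_sigmas = threshold_sigmas

    def update(self, value):
        """Score the reading against 'normal', then absorb it into 'normal'."""
        if self.mean is None:
            self.mean = float(value)
            return False
        anomalous = abs(value - self.mean) > self.threshold_sigmas * self.var ** 0.5
        # Even flagged readings are folded back in, so a repeated "anomaly"
        # eventually stops being one -- no operator reprogramming required.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        self.var = (1 - self.alpha) * self.var + self.alpha * (value - self.mean) ** 2
        return anomalous
```

No neural network is needed for the toy version; the point is only that the system's idea of acceptable behavior becomes whatever it has been watching lately, and nobody has to sign off when that definition shifts.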
What's more, BRS Labs' system is extremely easy to implement, even across huge, disparate networks of outdated camera equipment. The company claims that it needs a maximum of only a few days for the complete hardware and software installation.
After that, the system sets about "autonomously building an ever-changing knowledge base of activity seen through every camera on your video network."

How about laughter, is that banned yet? How about thinking?
The software is already in place in other cities around the United States, such as Chicago and Washington.
"Our system will figure out things you never thought of looking for," said Wesley Cobb, BRS' chief science officer. "You never thought to look for a car driving backwards up the entrance of a parking garage, for example. Our system will find that and alert on it, because it's different from what it usually sees. It's taught itself what to look for."
The inevitable security concerns have already been raised. While BRS claims to be "concerned about the privacy rights of individuals everywhere," it's not hard to imagine a future where our every move is assessed, quantified and judged by ever-smarter generations of artificial intelligence.

Have fun, serfs.
There's one security camera for every 11 people in the UK, and it has been reported that the average British citizen is recorded on camera over 300 times every day.
Now check out the promotional video. How completely creepy is the voice on it...
Full article here.
In Liberty,
Michael Krieger
Just look at the head of the company: a former Secret Service man. It stinks to high heaven when you consider how "former" he really is... And just think what a hit it is to the promise of American values to have this kind of thing in Chicago, Boston, Washington...
This neural network is a machine and will naturally miss or misinterpret certain actions. For example, how well will it do against psychopaths, or against ordinary but sly men who tend to 'infiltrate' through the front door rather than the 'intruder entrance,' as was the case in one comedy movie?
The bottom line is that I am glad the machine will learn by itself rather than have a human tell it what is dangerous. As I said, many actions may be interpreted as something else, despite what this paid man is advertising to big government. It will be our task to have fun with these machines in the future. So if you know how to make and break software, you are good to go, and in the best case your data will only be sold to the highest bidder.