China is at the forefront of public surveillance deployment, with half a billion cameras installed in its streets. But how effective is that system in reality?
Sweet Slumbers of Dystopia
Today’s cameras are capable of more than just observing. They can detect liveness, recognize faces behind masks, analyze the way we walk, and alert the authorities to a potential offense. These talents of CCTV are exploited with extreme fervor in China, where they chime with the communist sentiment of omnipresent supervision.
The country has about 540 million cameras on a 24/7 watch, which potentially makes China one of the safest countries in the world. Chasing grandiose titles is a tradition inherited from the Soviet Union, where every technology, even garbage collection, would be billed as “the most advanced in the world”.
As a demonstration of success, we can mention Hangzhou, a prosperous city flourishing with technology. There, an AI-powered system named City Eye keeps vigil, identifying illegal street vendors, beggars, litter scattered on the ground, and cases of criminal behavior.
The AI reportedly helped cut illegal street trading from 1,000 instances to just 30 over a year. The all-seeing City Eye has also been reported to ease traffic jams, monitor food safety (a measure that keeps diseases from spreading), and guide the nearest patrol units to crime scenes.
It all sounds good, of course. But in China’s case, public surveillance has a dystopian undertone. Cameras are used to subdue the Uighur minority, who are denied the right to choose their own religion.
Object recognition algorithms make it possible to automatically detect religious paraphernalia, items that look like weapons, and even clothing with offending prints: in China, everything related to Winnie the Pooh is unofficially banned because it is seen as a comedic jab at Xi Jinping.
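To illustrate the general idea (not the actual system deployed in China, whose internals are not public), here is a minimal sketch of how off-the-shelf object detection can flag camera frames that contain a watched-for item. It assumes a COCO-pretrained Faster R-CNN from torchvision; the WATCHLIST classes, the confidence threshold, and the frame file name are arbitrary choices made for the example.

```python
# Minimal sketch of generic object detection used to flag camera frames.
# This is NOT the system described in the article; it only illustrates
# the underlying technique with a COCO-pretrained torchvision model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

WATCHLIST = {"knife", "scissors"}   # hypothetical "items of interest"
THRESHOLD = 0.7                     # arbitrary confidence cutoff

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
categories = weights.meta["categories"]          # COCO class names
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

def flag_frame(path: str) -> list[str]:
    """Return watched-for labels detected in a single camera frame."""
    frame = read_image(path)                     # uint8 tensor, CxHxW
    with torch.no_grad():
        detections = model([preprocess(frame)])[0]
    hits = []
    for label_idx, score in zip(detections["labels"], detections["scores"]):
        name = categories[int(label_idx)]
        if name in WATCHLIST and score >= THRESHOLD:
            hits.append(name)
    return hits

if __name__ == "__main__":
    print(flag_frame("frame_0001.jpg"))          # hypothetical file name
```

The point of the sketch is that flagging is just classification over a list of labels: swap the contents of WATCHLIST and the same pipeline hunts for something entirely different, which is exactly why what gets watched is a policy decision, not a technical one.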
To make matters worse, the enormous camera clusters are connected to a specialized app supplied to the Chinese police. As The New York Times reports, this app allows patrol officers to tag suspicious people who avoid using the front door of their house, stop carrying smartphones, or “have refueled someone else’s car”. These tags allegedly enter the surveillance database, which then starts paying closer attention to the ‘unreliable elements’, to borrow Soviet legal vocabulary.
GULAG 2.0? Not necessarily
It would be wrong to blame the tool rather than the one who wields it. While facial recognition is used to track and identify protesters in countries like Russia, it can do a lot of good too.
An encouraging example is child trafficking prevention, where AI algorithms, working in unison with a network of surveillance cameras, can locate missing, lost, or even abducted kids. If the child’s face appears even once in a camera’s field of view, that can be enough for the algorithm to capture a screenshot and send it to the police.
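As a rough illustration of that matching step, here is a minimal sketch built on the open-source face_recognition library (a wrapper around dlib’s face embeddings). It is not any agency’s actual pipeline; the file names and the 0.6 distance tolerance are assumptions made for the example.

```python
# Minimal sketch of matching a missing child's photo against a camera frame.
# Purely illustrative: this is not any real agency's pipeline.
import face_recognition

# Reference photo of the missing child (hypothetical file).
reference = face_recognition.load_image_file("missing_child.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# A single frame captured by a surveillance camera (hypothetical file).
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare every face found in the frame against the reference embedding.
for encoding in frame_encodings:
    distance = face_recognition.face_distance([reference_encoding], encoding)[0]
    if distance < 0.6:  # commonly used threshold for dlib embeddings
        print(f"Possible match (distance {distance:.2f}): alert an operator")
```

In practice such a hit would only trigger human review rather than an automatic alert to officers, since a single low-distance match can easily be a false positive.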
The organization Thorn offers a similar project, Spotlight. So far, it is focused on ads in the deep web that feature abducted kids and matches them quickly against police databases. The system has already proved its worth and helped some children return safely to their families.
Facial recognition is neither good nor bad. It’s just a tool that requires proper legislation, control, and accountability. With these things in place, it can bring a wealth of benefits in multiple areas: healthcare, public security, law enforcement, pollution control, and much more.