Don’t look at me

AI-powered street surveillance spreads, and the pushback—and slip-ups—mount.

In the world’s biggest cities, humble security cameras have given way to sprawling systems powered by artificial intelligence. For better or worse, many of us now accept what once read like dystopian fiction. 

Krzysztof Shpak examines how AI street-surveillance systems work and why governments rushed to roll them out. 

Flock police

In September 2025 a Denver police officer stood at the door of Krisanna Elzer with a court summons. The homeowner was accused of stealing a package from a neighbour’s porch in a nearby town.

The evidence came from Flock Safety, a commercial automated video-surveillance system that had recorded Elzer’s car. The officer refused to share details with the suspect, telling her to raise any objections in court. 

“You know we have cameras in that town. Not even a breath of fresh air will leave there without our knowledge,” the officer explained.

Convinced of her innocence, Elzer began gathering her own evidence. She had indeed been nearby that day — visiting a tailor — but had not stolen anyone’s parcel. 

She compiled GPS data from apps on her phone and in her car, dashcam footage, witness statements and even photos of the clothes she wore on the day of the alleged crime. 

After repeated failed attempts to hand the information to the authorities, Elzer wrote directly to the police chief, who praised her work and said the summons had been cancelled.

As of December 2025, Flock Safety offered access to 80,000 cameras across 49 American states. 

From monitor vans to crime prediction

Cameras on streets, in shops and in public buildings have long been ubiquitous. Today’s smart cameras and data-processing methods, however, are a different beast. 

CCTV in the analogue era 

Once upon a time, closed-circuit television (CCTV) meant a private network of cameras feeding a dozen monitors watched by a bored mall guard.

The technology amounted to image sensors, screens and recording equipment. 

Security services have experimented with surveillance systems since at least the mid-20th century. 

British police testing a CCTV system in Trafalgar Square, 1960. Source: The National Archives

In 1960 British police tested two cameras monitoring Trafalgar Square during a visit by the king and queen of Thailand. Monitors sat in a van nearby. The trial exposed several technical issues and drew mixed reactions.

In 1979 the UK government's research outfit, the Police Scientific Development Branch, developed automatic number-plate recognition (ANPR) using the optical character recognition methods then available.

By the 1990s, cameras at intersections and on building façades were the norm. Law enforcement folded CCTV and ANPR into everyday policing.

Smart cameras

With miniaturised computing, pervasive connectivity and the rise of AI, the traditional CCTV rig has been supplanted by smart cameras tied to central databases and automated analytics.

Such a device carries its own processor and operating system, onboard storage, interfaces for local and internet connectivity, and sometimes a microphone for audio.

A Flock Safety camera with ANPR functions. Source: Wikimedia

Some makers embed AI accelerators and NPUs (neural processing units) to process data on the device in real time. Others offload AI tasks to external hardware.

These systems can identify objects, recognise licence plates and faces, and store summaries of what they see. Capabilities vary with software configuration and vendor preferences. 
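
To make that division of labour concrete, here is a minimal sketch of the detect-then-read loop such a camera runs on-device, using OpenCV's stock licence-plate cascade and off-the-shelf OCR. The cascade and pytesseract stand in for the proprietary detectors real vendors ship; this is the general shape, not any specific product's pipeline.

```python
# A minimal sketch of an on-device detect-then-read loop.
# The Haar cascade and pytesseract are stand-ins for the
# purpose-trained models real smart cameras ship with.
import cv2
import pytesseract

# OpenCV ships a stock Haar cascade trained on licence plates.
plate_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml"
)

cap = cv2.VideoCapture(0)  # 0 = first attached camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in plate_detector.detectMultiScale(gray, 1.1, 4):
        plate_crop = gray[y:y + h, x:x + w]
        # OCR the cropped region; --psm 7 treats it as one line of text.
        text = pytesseract.image_to_string(plate_crop, config="--psm 7").strip()
        if text:
            print("plate candidate:", text)  # a real camera would emit an event
cap.release()
```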

A brain behind the scenes

On its own, a smart camera can recognise objects and record their identifiers — a car’s plate, a face, or a person’s gait. An analytics hub aggregates camera feeds, fuses them with other sources and sends findings to an operator.
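
As a rough illustration of that aggregation step, the sketch below fuses plate sightings from many cameras into per-vehicle timelines. The Sighting fields and the sample data are invented for illustration; they are not Flock's (or anyone's) actual schema.

```python
# A toy sketch of the hub's aggregation step: group plate sightings
# from many cameras and order each vehicle's trail chronologically.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Sighting:
    plate: str        # identifier the camera extracted
    camera_id: str    # which camera saw it
    timestamp: float  # unix time of the frame

def build_timelines(sightings: list[Sighting]) -> dict[str, list[Sighting]]:
    """Group sightings by plate and sort each group by time."""
    timelines: dict[str, list[Sighting]] = defaultdict(list)
    for s in sightings:
        timelines[s.plate].append(s)
    for plate in timelines:
        timelines[plate].sort(key=lambda s: s.timestamp)
    return timelines

# Hypothetical feed from three cameras.
feed = [
    Sighting("ABC123", "cam-ne-04", 1700000000.0),
    Sighting("ABC123", "cam-ne-07", 1700000420.0),
    Sighting("XYZ789", "cam-sw-01", 1700000100.0),
]
for plate, trail in build_timelines(feed).items():
    print(plate, "->", [s.camera_id for s in trail])
```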

Flock Safety offers something similar in Nova, a “public-safety data platform” that ingests not only surveillance footage but also information from leaks, data-broker databases and other commercially available sources. 

Such a system builds dossiers with movement maps, preferences, browsing history, habits, police records and anything else available. 

Armed with such troves, AI can hypothesise about people’s behaviour and alert operators to situations it deems suspicious. That option is already available to Flock’s customers.

According to the company, Nova lets agencies close cases “in one click”. 

According to critics, it allows warrantless tracking and invites sweeping privacy abuses.

Colourful hair and code injection 

Many people shrug at mass surveillance, seeing it as a tool for solving and preventing crime. Others are less sanguine about the bounds of personal freedom. 

The tug-of-war between smart cameras and those seeking privacy plays out on several fronts.

Beyond public-policy battles, enthusiasts turn to camouflage — and to classic hacking.

Spoofing

The most intriguing attacks on such devices are spoofing, or “presentation attacks”: methods that manipulate the image a camera receives.

These include masks, retroreflectors, specialised textures and other ways to “spoil” an image so the system fails to detect or correctly identify an object.

In 2016 designer Scott Urban’s Reflectacles project offered eyewear with a retroreflector that bounces a surveillance camera’s infrared light back, blowing out the face image.

Reflectacles on surveillance video. Source: Kickstarter.

This brute-force tactic can deprive a single camera of usable data, but it falters when multiple angles are in play.

Berlin-based researcher and artist Adam Harvey developed CV Dazzle, a series of countermeasures to facial recognition.

His early-2010s looks relied on asymmetric hairstyles and makeup elements designed to fool the then-popular Viola–Jones algorithm, which detects under-eye and nasal shadows, facial symmetry and nose-bridge location. 

His answer: unconventional shadow placements and colours that contrast with skin tone. 
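
The detector those looks targeted is easy to reproduce: OpenCV's stock Haar cascades implement the Viola–Jones approach, so a few lines suffice to check whether a given look still registers as a face. The test image path below is a placeholder.

```python
# Reproducing the detector CV Dazzle targeted: OpenCV's stock Haar
# cascade implements Viola-Jones face detection.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("look_test.jpg")          # photo of the look under test
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"faces detected: {len(faces)}")     # 0 means the look fooled the cascade
```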

CV Dazzle Look 5. Source: Adam.harvey.studio.

With the rise of AI facial-recognition systems, the old tricks grew dated; in 2020 Harvey proposed an updated makeup approach.

CV Dazzle Looks 6 and 7. Source: Adam.harvey.studio.

He stressed he was demonstrating a technique, not fixed patterns; the best solution depends on the surveillance context. 

Similar methods apply to plate recognition. American enthusiast Benn Jordan described how to craft “adversarial” textures for ANPR detectors.

Using open recognition models, Jordan trained a neural network to generate visual noise that, when overlaid on a licence plate, makes the model read incorrect characters or fail to “see” the plate at all.
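
Jordan's own code is not reproduced here, but the underlying idea resembles the classic fast-gradient-sign method: nudge pixels in whatever direction most increases the recogniser's error. A generic sketch, assuming a differentiable plate classifier plate_model (a placeholder, not his actual model):

```python
# A generic FGSM-style sketch of the adversarial-noise idea.
# `plate_model` and `true_labels` are placeholders; real plate
# recognisers output character sequences, not a single class.
import torch
import torch.nn.functional as F

def adversarial_noise(plate_model: torch.nn.Module,
                      image: torch.Tensor,      # (1, C, H, W), values in [0, 1]
                      true_labels: torch.Tensor,
                      epsilon: float = 0.05) -> torch.Tensor:
    """One fast-gradient-sign step: return a perturbed copy of `image`."""
    image = image.detach().clone().requires_grad_(True)
    logits = plate_model(image)
    # Raise the loss on the correct characters so the model misreads them.
    loss = F.cross_entropy(logits, true_labels)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```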

The problem with visual tricks is their fragility. Efficacy depends on conditions and on camera coverage. Meanwhile, surveillance vendors keep expanding the cues used for recognition, such as a person’s distinctive gait or a car’s colour and unique modifications.

Researchers continue to probe advanced models for weaknesses, but a clearer threat to smart-camera systems comes from hackers. 

Device compromise and network attacks

Like any internet-connected computer — AI or not — smart cameras and their back-end infrastructure are vulnerable to hacking. 

Over the years, numerous vulnerabilities have been documented.

In 2021 a code-injection vulnerability was found in Hikvision surveillance cameras. It allowed full device takeover, software installation and access to other cameras on the network.

In 2023 the operating system of Axis smart cameras contained a flaw enabling arbitrary command execution during installation of ACAP applications.

In 2025 surveillance systems by Dahua were found to have two vulnerabilities related to remote command execution and buffer overflow. Both allowed an attacker full control of a camera.

A separate attack vector is direct interaction with hardware often mounted outdoors in public places. An attacker can use service interfaces, access local storage or modify a device for their own ends. 

To counter direct attacks, manufacturers encrypt data, use hardware verification of firmware and add cryptographic signatures to video files. 
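
At the verification end such a signature check can be quite simple. The sketch below validates an Ed25519 signature over a video clip with the Python cryptography package; how vendors actually distribute keys and embed signatures is left out here as an assumption.

```python
# A sketch of the video-signing idea: verify an Ed25519 signature
# over a clip. Key distribution and where the signature lives are
# assumptions; real devices bake this into firmware and secure elements.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def clip_is_authentic(public_key_bytes: bytes,
                      signature: bytes,
                      clip_path: str) -> bool:
    """Return True iff `signature` matches the clip under the vendor's key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    with open(clip_path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data)  # raises if the clip was altered
        return True
    except InvalidSignature:
        return False
```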

A properly configured device cannot simply be “reflashed” or have data pulled off it. But mistakes happen. 

In 2025, 404 Media reported that at least 60 Flock Safety Condor AI cameras with people-tracking features were left open to unauthorised access.

Cybersecurity specialist John Gaines and the above-mentioned researcher Benn Jordan found the devices’ IP addresses using the Shodan search engine and discovered they could connect with no login or password.
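
For context, a Shodan sweep of this kind looks roughly like the sketch below, using the official Python client. The query string is a placeholder; the researchers' actual search terms are not disclosed.

```python
# Roughly how a Shodan sweep looks with the official Python client.
# The query is a hypothetical example, not the researchers' real one.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # requires a Shodan account and key
results = api.search('"example-camera-banner" port:554')  # placeholder query
for match in results["matches"]:
    print(match["ip_str"], match.get("port"), match.get("org"))
```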

A journalist captured by a Flock surveillance camera. Source: 404 Media.

Anyone could watch live streams, download 30 days of archives, change settings and read system logs. 

The vendor blamed “a misconfiguration affecting a limited number of devices” and said the problems had been fixed.

The same researchers said another Flock camera model exposes an open Wi-Fi access point if certain buttons on the case are pressed, granting full control over the device and its software.

Gaines published an analysis of these and other flaws in Flock’s system in a separate document covering 55 items.

In its official response the company said the issues were already known, and that would-be attackers rely on direct access to cameras and “deep knowledge of the equipment’s internals”. 

The vendor stressed that all necessary updates are delivered without customer action and that system operations face no threat.

Tackling the “partially competent”

Automated surveillance systems, especially with AI, have become a convenient tool for law enforcement.

Vendors sell their wares on a promise: here is the suspect’s car and a map of its movements; here is the address. Cases can now be closed with a click.

That ease is habit-forming. People tend to over-rely on automated outputs, a classic cognitive bias known as automation bias.

AI-powered automation follows the same pattern: many users assume ChatGPT’s answers are correct and ignore contradictions. In everyday contexts this can distort perceptions and, in extreme cases, contribute to psychosis.

Even under an engineering ideal and in the hands of authorised operators, a large AI-driven surveillance system can do harm.

In 2025 American authorities opened an investigation into possible use of Flock Safety’s technologies for unlawful tracking. Police were suspected of using the system to find immigrants and to monitor women crossing state lines in search of jurisdictions where abortions are legal.

In this case the system worked as designed: no one tricked detectors, hacked cameras or swapped in deepfakes.

Don’t break it, improve it

CCTV is already ubiquitous. AI-enhanced analytics for surveillance is the new reality. 

Even the boldest asymmetric mask and fully obscured licence plates will not secure privacy amid total data collection. 

Like any powerful tool, AI-driven surveillance needs regulation to prevent unlawful use and sloppy security by vendors and operators.
