Artificial Intelligence Surveillance Poses Threat To US Civil Liberties - Advocacy Group

WASHINGTON (Pakistan Point News / Sputnik - 14th June, 2019) US policymakers need to protect Americans from the development of a network that pairs 50,000 surveillance cameras with artificial intelligence (AI) algorithms attempting to predict criminal behavior by analyzing body language, the American Civil Liberties Union (ACLU) warned in a new report released on Thursday.

"These cameras won't just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more," the ACLU said in a press release announcing the publication of the report, titled "The Dawn of Robot Surveillance."

New automated technologies could change the nature of surveillance in a way that threatens civil liberties and raises privacy concerns, as well as worsen existing racial disparities, the ACLU said.

"The end result, if left unchecked, will be a society where everyone's public movements and behavior are subject to constant and comprehensive evaluation and judgment by what are essentially AI security guards," said senior ACLU policy analyst Jay Stanley.

The report called on US policymakers "to contend with the technology's enormous power, prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse."

While today's cameras are largely passive and confined to unconnected local networks, the future is being shaped by developments in artificial intelligence such as anomaly detection, physiological measurement - including heart and breathing rates and eye movements - wide-area tracking of movement patterns, and emotion recognition, the report said.

For example, the report cited an effort by the New York City Police Department and Microsoft to equip more than 6,000 surveillance cameras with algorithms that developers claim can identify shoplifters before they commit a crime, based on "fidgeting, restlessness and other potentially suspicious body language."

The report also cited a financial lender in China that is using emotion recognition to purportedly evaluate customers' creditworthiness.