As I See It: Beware Of Beware
February 15, 2016 Victor Rozek
At some point in our lives, a transformative event or some perceived injustice propels many of us to become active in social or political movements. At least for a period of time. Regardless of left/right orientation, where politics and social justice matters are concerned, emotions tend to run high and rhetoric flows. And since there are few, if any, centrist movements committed to nothing nobler than nurturing the status quo, activists typically congregate on the margins, doing and saying things they think will best publicize their cause.
The more outrageous and controversial their actions, the more media coverage they are likely to receive. The more toxic and hyperbolic the speech, the more times it is bound to be re-tweeted.
We know that prospective employers scour social media to determine the suitability of an applicant, but what if a history of activism could get you shot?
Historically, the official reaction to protest has not always been kind. Beatings, tear gas, pepper spray, rubber bullets, and arrests have been among the responses to non-violent protest. Potentially violent situations are usually met with overwhelming force. But not all responses are equal. For the armed militants who have been occupying the Malheur National Wildlife Refuge in Oregon, for example, the Federal response has been nothing at all, at least for the first three weeks.
But what determines whether protesters are met with open hand or closed fist? What might sway authorities toward a patient, rather than a forceful response?
In recent times, the degree of police ferocity was thought to be a result of context, coupled with prejudice and militarization. Certainly those influences cannot be ignored. But what if the trajectory of the response could be predetermined, not solely by human proclivities but by software? Could software predict who is likely to be dangerous and who is not?
It’s a concept that has been partially applied elsewhere, albeit with less than noble intent. Long before 9/11 changed the way we track people, the North Korean government tracked the activities of its citizens and assigned a score to each person based on how big a threat to the regime they were imagined to be. Peter Van Buren, writing for We Meant Well, notes that some years ago Taiwan also implemented a system that encoded a threat estimate into every national ID card. Not surprisingly, according to Van Buren, “. . . every interaction with the government and police force was shadowed by those scores.” In other words, the scores, whether reflective of reality or paranoia, had a predictably prejudicial effect.
It’s an article of faith that the National Security Agency can intercept our calls, read our emails, and track our activities on social media. Yet that information is seldom shared with local authorities except in the most extreme cases. Now, however, says Van Buren, “a new generation of technology is being used by local law enforcement that offers them unprecedented power to peer into the lives of citizens.”
An example of this technology is an application called Beware (arguably itself a prejudicial name). The software is designed to scour billions of data points, “including arrest reports, property records, commercial databases, deep Web searches, and social media postings.” Then the software assigns a “threat score” to the target. In theory, the score provides a threat assessment. Police and other first responders can get some idea of what to expect from the suspect(s) when they arrive.
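Beware’s actual algorithm is proprietary, so any rendering of it is guesswork. Still, the general shape of such a scoring system can be sketched. The following is a minimal illustration in which every field name, weight, and threshold is invented for the purpose; none of it reflects the real product:

```python
# Hypothetical sketch of a naive threat-scoring aggregator.
# All field names, weights, and cutoffs below are invented for
# illustration; Beware's actual method is proprietary.

from dataclasses import dataclass

@dataclass
class Record:
    arrest_reports: int = 0   # prior arrests found in public records
    flagged_posts: int = 0    # social media posts matching "threat" keywords
    property_flags: int = 0   # e.g., prior incident reports at the address

# Assumed weights: how much each data point contributes to the score.
WEIGHTS = {"arrest_reports": 10, "flagged_posts": 3, "property_flags": 5}

def threat_score(record: Record) -> str:
    """Collapse weighted counts into a color-coded threat level."""
    total = sum(getattr(record, name) * w for name, w in WEIGHTS.items())
    if total >= 30:
        return "red"
    if total >= 10:
        return "yellow"
    return "green"

# Two arrests and four flagged posts: 2*10 + 4*3 = 32, over the "red" cutoff.
print(threat_score(Record(arrest_reports=2, flagged_posts=4)))  # prints "red"
```

Even in this toy version, the troubling part is visible: the weights and cutoffs are choices someone made, and changing them changes who gets flagged.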
There is no arguing that having more information is better than having too little. The jobs of first responders are risky enough without adding ignorance to the equation. Still, there are a number of issues with apps like Beware that should be carefully weighed by both users and developers of the product.
By itself, a threat score is not evidence of a threat, and past behavior is not always an accurate predictor of future action. In any event, it’s impossible for an application to predict intent. A high threat score would instantly prejudice a responder and eclipse the innocent-until-proven-guilty tenet that is the foundation of our judicial system. As a result, response to a high-threat-score event may well prove to be more violent than necessary. Nor is it clear how, or if, a wrongly inflammatory score could be discovered and corrected. Fifty-year-olds may discover they are being judged by their twenty-year-old behaviors.
But the bigger question is: Who gets to decide what constitutes a threat? Are you more of a threat if you frequent left-wing websites or right-wing websites? Does participating in a protest make you a threat? Does speaking truth to power? If you’re working in some way to change the status quo, does that immediately qualify you as threatening? How about if you want to end abortion; or you’re against “free trade;” or against gay marriage; or for a single-payer health system? What if you’re pro union, or pro right to work; pro gun, or anti war? Who decides which end of the political spectrum is more likely to pose a threat to first responders?
As has always been the case, one man’s patriot is another man’s terrorist, and the recent events in Oregon are a case in point. If software were used to assess the threat potential of the Malheur Wildlife Refuge takeover, who decides whether the militia are patriots making a stand against government overreach, or out-of-state domestic terrorists? What criteria determine whether the occupiers are a bunch of frontier drama queens who pose little threat and can therefore be treated with patient indulgence, or armed thugs willing to die, and therefore presumably kill, for their demands, thereby necessitating immediate intervention?
Who decides whether the algorithm designed to differentiate between a concerned citizen and a potential fanatic will assign a higher negative value to pro-militia or to anti-militia rhetoric?
If the only answer is that the programmers decide, then they are simply encoding their prejudices, and the danger is that their bias becomes an expectation for the end user.
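To make that concrete, here is a deliberately crude, invented example, not drawn from any real product, of how a programmer’s choice of keyword weights determines who looks dangerous. The same post scores differently under two equally arbitrary weight tables:

```python
# Invented illustration: two arbitrary keyword-weight tables, each
# reflecting a different programmer's notion of "threatening" speech.

def score_post(text: str, weights: dict) -> int:
    """Sum the weights of any flagged keywords found in the post."""
    return sum(weights.get(word, 0) for word in text.lower().split())

post = "militia rally against federal overreach"

# Programmer A flags anti-government language.
weights_a = {"militia": 5, "overreach": 2, "occupation": 4}
# Programmer B flags protest language generally.
weights_b = {"protest": 4, "rally": 3, "federal": 1}

print(score_post(post, weights_a))  # 7 (militia + overreach)
print(score_post(post, weights_b))  # 4 (rally + federal)
```

Neither table is “correct”; each simply bakes its author’s assumptions into the number the responder sees.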
In the Oregon case, authorities determined that the self-proclaimed militia presented a minimal threat, and the FBI has been exquisitely patient with the occupiers. For weeks they were allowed extraordinary privileges, including freedom of movement, and ample opportunity to make their point and go home. Finally, 11 were arrested on a rural highway, and one was shot, apparently preferring death to jail. As of this writing, four people are still holed up in the refuge, demanding to be allowed to leave without consequence.
That ship, however, has sailed.
How the occupiers would have been coded and, by extension, what action a software package like Beware would have recommended is not clear. But I somehow doubt that in the face of an armed takeover of Federal property it would counsel patience. Such an approach requires a degree of compassion and the willingness to withstand enormous community and media pressure. In short, it requires more than scouring billions of data points; it requires wisdom.