Prevent

The home of the Magna Carta continues its transition into Airstrip One, as the University of Reading warns students reading a Marxist essay on political violence that the authorities might be watching:

Part of a larger anti-terrorism strategy, Prevent was designed to prevent radicalization and seeks to monitor supposedly vulnerable people for evidence of extremism in the materials they peruse and the ideology they express. The idea is that, once identified, these individuals can be steered by authorities away from negative outcomes. […]

Primarily targeted at potential recruits to Islamist terrorist groups, but also at Northern Ireland-style sectarian violence and extreme right-wing terrorism, Prevent suffered mission-creep pretty much right out of the gate. In 2015, a politics student at the University of East Anglia was interrogated by police after reading assigned material in an ISIS-related publication.

The kid clicked a problematic link, which was thereafter removed from the course materials.

Younger students are being scooped up for alleged radicalization, too. In 2016-17, 272 children under 15 years of age and 328 youngsters between ages 15 and 20 were flagged under the Prevent program “over suspected right-wing terrorist beliefs.” The proportion of individuals referred to government officials “as a result of far-right concerns has risen from a quarter in 2015 to 2016 to over a third in 2016 to 2017,” according to Britain’s Home Office, so that likely represents only a fraction of young people questioned and “mentored” for their suspected ideological deviance.

Under 15 years of age? Guess you have to nip these things in the bud.
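For what it's worth, here's a rough extrapolation of my own (not the Home Office's) of what "only a fraction" might mean. It assumes the far-right share among young referrals mirrors the overall share, which the report doesn't actually break down by age:

```python
# Rough extrapolation from the quoted Home Office figures (illustrative only).
# Assumes the far-right share among young referrals matches the overall share.
under_15 = 272          # children under 15 flagged over right-wing concerns
aged_15_to_20 = 328     # 15- to 20-year-olds flagged over right-wing concerns
far_right_share = 1/3   # "over a third" of all 2016-17 referrals were far-right

flagged_far_right = under_15 + aged_15_to_20           # 600
estimated_total = flagged_far_right / far_right_share  # ~1,800
print(f"far-right flags (under 21): {flagged_far_right}")
print(f"implied total young referrals: ~{estimated_total:,.0f}")
```

Under that assumption, the 600 right-wing flags would imply something like 1,800 young people referred across all categories. The headline figure is a floor, not the whole picture.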

Where do these referrals come from? Well, anybody can contact the authorities, but the situation is complicated by the duty the law imposes on both public and private institutions to report people seen as at risk of radicalization, with very little guidance as to what that means beyond cover-your-ass. Once the duty took effect, referrals from schools to the authorities surged.

Informing on your fellow citizens for potential thoughtcrimes is just part and parcel of living in a country full of extremists. Comrade Pavlik would have approved.

“Laws such as this restrict the core democratic right to freedom of expression,” a legal analysis published last year in the Utrecht Journal of International and European Law charges. It “indicates a concerning trend of liberal States embracing opportunities to impose severe restrictions on ‘extreme’ speech.” […]

Parliament is currently considering a Counter-Terrorism and Border Security Bill that would go beyond monitoring people for extremist ideology and hauling them in for questioning. The proposed legislation would criminalize voicing support for banned organizations, and would even make it illegal to view or otherwise access information “likely to be useful to a person committing or preparing acts of terrorism.”

I would say this defies belief, but sadly, it all fits a familiar pattern. Outlawing speech in defense of an organization is the sort of thing one would normally associate with, say, Cuba or North Korea, but it seems the British have met Big Brother and he is them. What is happening in Britain is not yet the garden-variety repression of those dictatorships, but it is moving in that direction fast.

If hauling students in for questioning because they clicked a link to “extremist” material sounds like something out of Orwell, Facebook’s AI monitoring system could have been ripped out of a Philip K. Dick story:

A year ago, Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm.

Facebook Global Head of Safety Antigone Davis is pleased with the results so far.

“In the very first month when we started it, we had about 100 imminent-response cases,” which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

“To just give you a sense of how well the technology is working and rapidly improving … in the last year we’ve had 3,500 reports,” she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn’t include Europe, where the system hasn’t been deployed. (That number also doesn’t include wellness checks that originate from people who report suspected suicidal behavior online.) […]

In the U.S., Facebook’s call usually goes to a local 911 center, as illustrated in its promotional video.

I don’t see how the sheer quantity of emergency calls proves that the system is working well. Without knowing how many of those 3,500 reports found someone genuinely at risk, the raw count could just as easily indicate rampant false positives.
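To see why, consider a toy base-rate calculation. Every number below is an assumption of mine for illustration (none are Facebook's actual figures): when the condition being screened for is rare, even a classifier with an impressively low false-positive rate produces alerts that are mostly wrong.

```python
# Toy base-rate calculation (all numbers assumed for illustration,
# none are Facebook's actual figures).
scanned = 100_000_000        # accounts scanned in a year (assumed)
base_rate = 1e-5             # share genuinely at imminent risk (assumed)
sensitivity = 0.90           # classifier catches 90% of true cases (assumed)
false_positive_rate = 3e-5   # flags 0.003% of everyone else (assumed)

true_cases = scanned * base_rate                              # 1,000
true_alerts = true_cases * sensitivity                        # 900
false_alerts = (scanned - true_cases) * false_positive_rate   # ~3,000

precision = true_alerts / (true_alerts + false_alerts)
print(f"total alerts: {true_alerts + false_alerts:,.0f}")    # ~3,900
print(f"share that are real emergencies: {precision:.0%}")   # ~23%
```

Under these made-up assumptions, the system generates an alert volume in the same ballpark as Facebook's 3,500 reports, yet fewer than a quarter of the alerts are real. The raw count alone can't distinguish that scenario from one where the system actually works well.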

More importantly, is this a technology that we really want to work “well”? As the article points out, “There may soon be a temptation to use this kind of AI to analyze social media chatter for signs of imminent crimes — especially retaliatory violence.”

Philip K. Dick already explored the concept of pre-crime in “The Minority Report,” later a well-known movie. Do we really want to go there? And just as AIs patrol Facebook for signs of suicidal tendencies and Community Standards-violating speech, will AIs also be used to augment the growing efforts by governments in Britain and elsewhere to flag, investigate and prosecute people who read the wrong materials and think the wrong thoughts?
