Prevent

The home of the Magna Carta continues its transition into Airstrip One, as the University of Reading warns students reading a Marxist essay on political violence that the authorities might be watching:

Part of a larger anti-terrorism strategy, Prevent was designed to prevent radicalization and seeks to monitor supposedly vulnerable people for evidence of extremism in the materials they peruse and the ideology they express. The idea is that, once identified, these individuals can be steered by authorities away from negative outcomes. […]

Primarily targeted at potential recruits to Islamist terrorist groups, but also at Northern Ireland-style sectarian violence and extreme right-wing terrorism, Prevent suffered mission-creep pretty much right out of the gate. In 2015, a politics student at the University of East Anglia was interrogated by police after reading assigned material in an ISIS-related publication.

The kid clicked a problematic link, which was thereafter removed from the course materials.

Younger students are being scooped up for alleged radicalization, too. In 2016-17, 272 children under 15 years of age and 328 youngsters between ages 15 and 20 were flagged under the Prevent program “over suspected right-wing terrorist beliefs.” The proportion of individuals referred to government officials “as a result of far-right concerns has risen from a quarter in 2015 to 2016 to over a third in 2016 to 2017,” according to Britain’s Home Office, so that likely represents only a fraction of young people questioned and “mentored” for their suspected ideological deviance.

Under 15 years of age? Guess you have to nip these things in the bud.

Where do these referrals come from? Well, anybody can contact the authorities, but the situation is complicated by the duty the law imposes on both public and private institutions to report people seen as being at risk of radicalization, with very little guidance as to what that means beyond cover-your-ass. The imposition of the duty resulted in a surge in referrals by schools to the authorities.

Informing on your fellow citizens for potential thoughtcrimes is just part and parcel of living in a country full of extremists. Comrade Pavlik would have approved.

“Laws such as this restrict the core democratic right to freedom of expression,” a legal analysis published last year in the Utrecht Journal of International and European Law charges. It “indicates a concerning trend of liberal States embracing opportunities to impose severe restrictions on ‘extreme’ speech.” […]

Parliament is currently considering a Counter Terrorism and Border Security Bill that would go beyond monitoring people for extremist ideology and hauling them in for questioning. The proposed legislation would criminalize voicing support for banned organizations, and even make it illegal to view or otherwise access information “likely to be useful to a person committing or preparing acts of terrorism.”

I would say this defies belief, but sadly, it all fits a familiar pattern. Outlawing speech in defense of an organization is the sort of thing one would normally associate with, say, Cuba or North Korea, but it seems the British have met Big Brother and he is them. What is happening in Britain is not yet as bad as the garden-variety repression seen in such dictatorships, but it is moving in that direction fast.

If hauling students in for questioning because they clicked a link to “extremist” material sounds like something out of Orwell, Facebook’s AI monitoring system could have been ripped out of a Philip K. Dick story:

A year ago, Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm.

Facebook Global Head of Safety Antigone Davis is pleased with the results so far.

“In the very first month when we started it, we had about 100 imminent-response cases,” which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

“To just give you a sense of how well the technology is working and rapidly improving … in the last year we’ve had 3,500 reports,” she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn’t include Europe, where the system hasn’t been deployed. (That number also doesn’t include wellness checks that originate from people who report suspected suicidal behavior online.) […]

In the U.S., Facebook’s call usually goes to a local 911 center, as illustrated in its promotional video.

I don’t see how the quantity of emergency calls proves that the system is working well. It could just as easily indicate rampant false positives.
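
To illustrate the base-rate problem with deliberately made-up numbers (Facebook has published no precision or recall figures for this system), consider how a screening model that looks accurate on paper can still produce mostly false alarms when the thing it hunts for is rare:

```python
# Back-of-the-envelope arithmetic with ASSUMED numbers -- Facebook publishes
# no precision or recall figures, so every value below is hypothetical.
posts_scanned_per_day = 1_000_000    # hypothetical scanning volume
true_crisis_rate      = 1 / 100_000  # hypothetical share of posts signalling imminent self-harm
sensitivity           = 0.90         # hypothetical: 90% of real crises get flagged
false_positive_rate   = 0.001        # hypothetical: 0.1% of benign posts get flagged

true_crises = posts_scanned_per_day * true_crisis_rate                     # 10
real_flags  = true_crises * sensitivity                                    # 9
false_flags = (posts_scanned_per_day - true_crises) * false_positive_rate  # ~1,000
precision   = real_flags / (real_flags + false_flags)

print(f"flags per day: {real_flags + false_flags:.0f}")      # ~1,009
print(f"share that are real emergencies: {precision:.1%}")   # ~0.9%
```

Under those assumptions, fewer than one flag in a hundred would be a genuine emergency. A raw count of wellness checks tells you nothing about precision; ten calls a day is equally consistent with a system that is mostly right and one that is mostly wrong.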

More importantly, is this a technology that we really want to work “well”? As the article points out, “There may soon be a temptation to use this kind of AI to analyze social media chatter for signs of imminent crimes — especially retaliatory violence.”

There is a well-known story and movie, Minority Report, that explores the concept of pre-crime. Do we really want to go there? And just as AIs patrol Facebook for signs of suicidal tendencies and Community Standards-violating speech, will AIs also be used to augment the growing efforts by governments in Britain and elsewhere to flag, investigate and prosecute people who read the wrong materials and think the wrong thoughts?

“These people are complete narcissists”

Google leadership seminar (source)

I enjoyed this rant against Big Tech, which besides being funny, also contains the kernel of a very interesting idea for how to address the growing crisis around data privacy and ownership:

Bannon also added this gem about Tesla:

I do not have a dog in this fight, but Musk seems increasingly unhinged to me, and the little stunt he pulled with his abandoned buyout plan was undeniably shady. But… are you not entertained?

Faceborg’s war on human nature

Two items on the metastasizing, Borg-like entity known as Facebook recently caught my eye.

First:

Facebook just announced sweeping changes to fix significant problems with its newsfeed, the main conduit for news and information for over 2 billion people. However, the problems with Facebook’s newsfeed won’t be fixed with these tweaks. In fact, they are likely to get much worse as Facebook attempts to fix them. […]

To see why failure was (and will continue to be) inevitable, let me recast the situation:

  • Facebook is actively micromanaging the information flow and social interactions of over 2 billion people, an insanely complex and highly uncertain task.
  • Facebook is making the sweeping decisions on how to micromanage the newsfeed centrally (with a small team of young executives empowered to relentlessly tweak the system by the dictatorial fiat of the company’s CEO).
  • Facebook’s goals are a selfish utopianism (in its version of utopia, the world revolves around Facebook).

The Current Year is very weird, when you think about it. The idea of a “small team of engineers in Menlo Park,” led by this guy –

– controlling the main spigot of news and information for over one-quarter of the human race is like something out of a cheesy sci-fi movie. Yet it is not far from reality.

The right thing for Facebook to do here would be to drop all the micromanagement and simply let each user control his/her own News Feed experience by default, with a full set of tools and filters. No shady algorithm controlling what you see. No censorship except of spam and illegal content.
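
To make that concrete, here is a minimal sketch of what a user-controlled feed could look like. It is purely illustrative: the types, field names and filters are invented, not anything from Facebook’s actual systems.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set

# Illustrative sketch: a feed where the user, not the platform, owns the ranking.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    is_spam: bool = False      # the only platform-level exclusions argued for above
    is_illegal: bool = False

@dataclass
class UserFeedSettings:
    muted_authors: Set[str] = field(default_factory=set)
    keyword_filters: List[str] = field(default_factory=list)
    sort_newest_first: bool = True   # default to plain chronology, no engagement ranking

def build_feed(posts: List[Post], settings: UserFeedSettings) -> List[Post]:
    """Apply only the user's own filters, plus spam/illegal-content removal."""
    visible = [
        p for p in posts
        if not p.is_spam and not p.is_illegal
        and p.author not in settings.muted_authors
        and not any(k.lower() in p.text.lower() for k in settings.keyword_filters)
    ]
    return sorted(visible, key=lambda p: p.posted_at,
                  reverse=settings.sort_newest_first)
```

The point is not that this sketch would scale, but that the knobs (mute lists, keyword filters, plain chronology) live entirely on the user’s side; the only centralized decisions left are spam and legality.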

This would probably require some adjustments to Facebook’s business model, as the News Feed accounts for 85% of the company’s revenue. I suspect, though, that the core reason Facebook insists on controlling that spigot has nothing to do with money.

Second:

In everyday life, we tend to have different sides of ourselves that come out in different contexts. For example, the way you are at work is probably different from the way you might be at a bar or at a church or temple. […] But on Facebook, all these stages or contexts were mashed together. The result was what internet researchers called context collapse. […]

In 2008, I found myself speaking with the big boss himself, Facebook CEO Mark Zuckerberg. I was in the second year of my PhD research on Facebook at Curtin University. And I had questions.

Why did Facebook make everyone be the same for all of their contacts? Was Facebook going to add features that would make managing this easier?

To my surprise, Zuckerberg told me that he had designed the site to be that way on purpose. And, he added, it was “lying” to behave differently in different social situations.

Up until this point, I had assumed Facebook’s socially awkward design was unintentional. It was simply the result of computer nerds designing for the rest of humanity, without realising it was not how people actually want to interact.

The realisation that Facebook’s context collapse was intentional not only changed the whole direction of my research but provides the key to understanding why Facebook may not be so great for your mental health.

To me, the experience of using Facebook is akin to being in a room filled with everyone I know, yammering away at high volume. It’s unpleasant, and I avoid it as much as possible.

I remember when Zuckerberg infamously said that “Having two identities for yourself is an example of a lack of integrity.” I recall being very creeped out by that sentiment. It’s deeply totalitarian, similar to the argument that “If you’ve got nothing to hide, you’ve got nothing to fear”; i.e. that only criminals or bad people desire privacy. It also flies in the face of some basic observations about human behavior.

The question is, will users put up with forced “context collapse” and micromanagement of the News Feed over the long run, or will they revolt against this form of paternalistic social engineering? I’m betting on the latter.