Prevent

The home of the Magna Carta continues its transition into Airstrip One, as the University of Reading warns students reading a Marxist essay on political violence that the authorities might be watching:

Part of a larger anti-terrorism strategy, Prevent was designed to prevent radicalization and seeks to monitor supposedly vulnerable people for evidence of extremism in the materials they peruse and the ideology they express. The idea is that, once identified, these individuals can be steered by authorities away from negative outcomes. […]

Primarily targeted at potential recruits to Islamist terrorist groups, but also at Northern Ireland-style sectarian violence and extreme right-wing terrorism, Prevent suffered mission-creep pretty much right out of the gate. In 2015, a politics student at the University of East Anglia was interrogated by police after reading assigned material in an ISIS-related publication.

The kid clicked a problematic link, which was thereafter removed from the course materials.

Younger students are being scooped up for alleged radicalization, too. In 2016-17, 272 children under 15 years of age and 328 youngsters between ages 15 and 20 were flagged under the Prevent program “over suspected right-wing terrorist beliefs.” The proportion of individuals referred to government officials “as a result of far-right concerns has risen from a quarter in 2015 to 2016 to over a third in 2016 to 2017,” according to Britain’s Home Office, so that likely represents only a fraction of young people questioned and “mentored” for their suspected ideological deviance.

Under 15 years of age? Guess you have to nip these things in the bud.

Where do these referrals come from? Well, anybody can contact the authorities, but the situation is complicated by the duty the law imposes on both public and private institutions to report people seen as being at risk of radicalization, with very little guidance as to what that means beyond cover-your-ass. The imposition of the duty resulted in a surge in referrals by schools to the authorities.

Informing on your fellow citizens for potential thoughtcrimes is just part and parcel of living in a country full of extremists. Comrade Pavlik would have approved.

“Laws such as this restrict the core democratic right to freedom of expression,” a legal analysis published last year in the Utrecht Journal of International and European Law charges. It “indicates a concerning trend of liberal States embracing opportunities to impose severe restrictions on ‘extreme’ speech.” […]

Parliament is currently considering a Counter Terrorism and Border Security Bill that would go beyond monitoring people for extremist ideology and hauling them in for questioning. The proposed legislation would criminalize voicing support for banned organizations, and even make it illegal to view or otherwise access information “likely to be useful to a person committing or preparing acts of terrorism.”

I would say this defies belief, but sadly, it all fits a familiar pattern. Outlawing speech in defense of an organization is the sort of thing one would normally associate with, say, Cuba or North Korea, but it seems the British have met Big Brother and he is them. What is happening in Britain is not yet the garden-variety repression of an actual dictatorship, but it is moving in that direction fast.

If hauling students in for questioning because they clicked a link to “extremist” material sounds like something out of Orwell, Facebook’s AI monitoring system could have been ripped out of a Philip K. Dick story:

A year ago, Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm.

Facebook Global Head of Safety Antigone Davis is pleased with the results so far.

“In the very first month when we started it, we had about 100 imminent-response cases,” which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

“To just give you a sense of how well the technology is working and rapidly improving … in the last year we’ve had 3,500 reports,” she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn’t include Europe, where the system hasn’t been deployed. (That number also doesn’t include wellness checks that originate from people who report suspected suicidal behavior online.) […]

In the U.S., Facebook’s call usually goes to a local 911 center, as illustrated in its promotional video.

I don’t see how the quantity of emergency calls proves that the system is working well. It could just as easily indicate rampant false positives.
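To make that concrete: when a classifier scans an enormous volume of posts, the raw number of calls it triggers is compatible with almost any level of accuracy. A back-of-the-envelope sketch in Python, where every number is a made-up assumption rather than anything Facebook has disclosed:

```python
# Toy base-rate arithmetic: how a flagging system can generate a steady
# stream of emergency calls while most of them are false alarms.
# Every number below is a made-up assumption, not Facebook's data.

posts_scanned_per_day = 1_000_000      # hypothetical volume of scanned posts
true_crisis_rate = 1 / 100_000         # hypothetical share signaling imminent self-harm
sensitivity = 0.90                     # hypothetical: catches 90% of true cases
false_positive_rate = 0.00002          # hypothetical: flags 0.002% of benign posts

true_cases = posts_scanned_per_day * true_crisis_rate
true_flags = true_cases * sensitivity
false_flags = (posts_scanned_per_day - true_cases) * false_positive_rate

total_flags = true_flags + false_flags
precision = true_flags / total_flags

print(f"Flags per day: {total_flags:.0f}")
print(f"Of which genuine: {true_flags:.0f} ({precision:.0%} precision)")
```

Under those invented numbers the system triggers roughly thirty wellness flags a day, and about two out of three are false alarms. The call count alone cannot distinguish that world from one where the classifier is excellent, which is exactly the problem with citing it as proof the technology "is working."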

More importantly, is this a technology that we really want to work “well”? As the article points out, “There may soon be a temptation to use this kind of AI to analyze social media chatter for signs of imminent crimes — especially retaliatory violence.”

There is a well-known Philip K. Dick story, and a movie based on it, that explores the concept of pre-crime: Minority Report. Do we really want to go there? And just as AIs patrol Facebook for signs of suicidal tendencies and Community Standards-violating speech, will AIs also be used to augment the growing efforts by governments in Britain and elsewhere to flag, investigate and prosecute people who read the wrong materials and think the wrong thoughts?

Networked Confucianism

John Robb describes China’s now-infamous social credit system as the world’s first “networked tyranny.” Allow me to coin the term “networked Confucianism” to describe the same system.

Combine Confucian ethics with modern surveillance technology and social networking, and this is what you get:

China’s plan to judge each of its 1.3 billion people based on their social behavior is moving a step closer to reality, with Beijing set to adopt a lifelong points program by 2021 that assigns personalized ratings for each resident.

The capital city will pool data from several departments to reward and punish some 22 million citizens based on their actions and reputations by the end of 2020, according to a plan posted on the Beijing municipal government’s website on Monday. Those with better so-called social credit will get “green channel” benefits while those who violate laws will find life more difficult.

The Beijing project will improve blacklist systems so that those deemed untrustworthy will be “unable to move even a single step,” according to the government’s plan. Xinhua reported on the proposal Tuesday, while the report posted on the municipal government’s website is dated July 18.

According to the Party, the overall social credit system will “allow the trustworthy to roam freely under heaven while making it hard for the discredited to take a single step.”
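Strip away the rhetoric and the mechanics the plan describes are mundane: pool records from many departments, collapse them into a lifelong per-person score, and gate services on that number. A deliberately crude sketch of the idea, in which the agencies, weights and thresholds are all invented for illustration and are not drawn from the actual Beijing system:

```python
# Hypothetical illustration of a pooled "social credit" scoring pipeline.
# Agencies, offences, weights and thresholds are all invented.

from dataclasses import dataclass, field

@dataclass
class Resident:
    name: str
    score: int = 1000                       # everyone starts from a baseline
    records: list = field(default_factory=list)

# Hypothetical per-department adjustments feeding one pooled score
ADJUSTMENTS = {
    "court:unpaid_judgment": -200,
    "transport:fare_evasion": -50,
    "tax:on_time_filing": +20,
    "civic:blood_donation": +30,
}

BLACKLIST_THRESHOLD = 800   # below this, "unable to move even a single step"

def ingest(resident: Resident, event: str) -> None:
    """Pool a record from some department and update the lifelong score."""
    resident.records.append(event)
    resident.score += ADJUSTMENTS.get(event, 0)

def gate(resident: Resident, service: str) -> bool:
    """Grant or deny a service (train ticket, loan, school place) on the score alone."""
    return resident.score >= BLACKLIST_THRESHOLD

zhang = Resident("Zhang")
for event in ["transport:fare_evasion", "court:unpaid_judgment"]:
    ingest(zhang, event)

print(zhang.score, gate(zhang, "high_speed_rail_ticket"))   # 750 False
```

The unsettling part is how little machinery it takes: once the data is pooled, the blacklist is a single comparison.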

China’s greatest philosopher might have approved:

When Confucius was asked 2,500 years ago what a ruler needed to govern a country, he said credit, faith, or sincerity (信); food (食); and an army (兵). But if he could only have one, it would be the first, 信. The Chinese character we translate as “credit” has thus long been a core concept of Chinese governance. […]

At first glance, the official goal of the SCS appears to have little to do with financial credit. It is “construction of sincerity in government affairs, commercial sincerity, social sincerity, and judicial credibility” (State Council 2014), which is more a call to embrace traditional Confucian moral virtues than a vision for high-tech governance. The plan document cites a laundry list of social ills that stem from the lack of trust and trustworthiness at all levels of a fragmented Chinese society. These include tax evasion, factory accidents, food and drug safety scares, fraud, academic dishonesty, and rampant counterfeiting of goods.

How to lose a war without firing a shot

I’m a little rusty on my Sun Tzu and Clausewitz, so I don’t recall what those great military theorists had to say about the bold strategy of allowing your most sensitive defense technology to be sold to your chief geopolitical rival:

China has obtained the big screen software used by Nato and the United States for war room mapping, putting its forces on an equal organisational footing with some of the West’s elite military operations.

Luciad, a defence contractor based in Leuven, Belgium, is selling the Chinese government high performance software used for situational awareness by the military commands of the North Atlantic Treaty Organisation, according to information from Chinese government contractors verified by the South China Morning Post.

The package includes LuciadLightspeed, a program that can process real-time data, including that from fast-moving objects, with speed and accuracy. […]

The same software is used by the United States Special Operations Command at MacDill Air Force Base in Tampa, Florida, where covert missions for the US government – including the raid that assassinated al-Qaeda leader Osama bin Laden in Pakistan in 2011 – originated. […]

“Luciad is the Ferrari of GIS software. It comes to the right place at the right time,” said a geospatial information engineer from an aerospace company in Beijing.

Truly, this is a level of cunning strategery that makes Alexander the Great look like Sergeant Klinger! But seriously, what’s the point of having NATO if a Belgian company is blithely selling off crucial military technology to the People’s Liberation Army? Why bother even having a military at all? Wouldn’t it be easier and more profitable to disband NATO, dismantle all the Western armed forces and auction off our technology and weaponry to the highest bidder?

Has the US weighed in on this reported sale?

Revelation 13:16-17

Because cell phones weren’t convenient enough

Swedes have learned to stop worrying and love the chip:

In Sweden, a country rich with technological advancement, thousands have had microchips inserted into their hands.

The chips are designed to speed up users’ daily routines and make their lives more convenient — accessing their homes, offices and gyms is as easy as swiping their hands against digital readers.

They also can be used to store emergency contact details, social media profiles or e-tickets for events and rail journeys within Sweden.

What’s remarkable to me is how marginal the benefit is here. O, the hardship of having to carry around a key card! The agony of having to swipe your phone to get on a train! We are approaching the reductio ad absurdum of modern convenience, where you will be able to go anywhere without having to walk, talk to anyone without having to flap your gums, and do anything without having to move a muscle. At that point, the only remaining challenge will be how to think without needing to use your brain.

Proponents of the tiny chips say they’re safe and largely protected from hacking, but one scientist is raising privacy concerns around the kind of personal health data that might be stored on the devices.

Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180.

So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.

Dystopia now. Seriously, why would anyone want this? It’s physically invasive and has all the privacy and security risks of a Yahoo email account – only it’s under your freakin’ skin!

“Having different cards and tokens verifying your identity to a bunch of different systems just doesn’t make sense,” he says. “Using a chip means that the hyper-connected surroundings that you live in every day can be streamlined.”

You know what? No.

Internet voting is insane

I have a bad feeling about this:

West Virginia is about to take a leap of faith in voting technology — but it could put people’s ballots at risk.

Next month, it will become the first state to deploy a smartphone app in a general election, allowing hundreds of overseas residents and members of the military stationed abroad to cast their ballots remotely. And the app will rely on blockchain, the same buzzy technology that underpins bitcoin, in yet another Election Day first.

“Especially for people who are serving the country, I think we should find ways to make it easier for them to vote without compromising on the security,” said Nimit Sawhney, co-founder of Voatz, the company that created the app of the same name that West Virginia is using. “Right now, they send their ballots by email and fax, and — whatever you may think of our security — that’s totally not a secure way to send back a ballot.”

But cybersecurity and election integrity advocates say West Virginia is setting an example of all the things states shouldn’t do when it comes to securing their elections, an already fraught topic given fears that Russian operatives are trying again to tamper with U.S. democracy.

“This is a crazy time to be pulling a stunt like this. I don’t know what they’re thinking,” said David Jefferson, a computer scientist at Lawrence Livermore National Laboratory who is on the board of Verified Voting, an election security advocacy group. “All internet voting systems, including this one, have a host of cyber vulnerabilities which make it extremely dangerous.”

I demand paper ballots! What is so hard about this? A security expert weighs in:

This is crazy (and dangerous). West Virginia is allowing people to vote via a smart-phone app. Even crazier, the app uses blockchain — presumably because they have no idea what the security issues with voting actually are.

As for what those security issues are:

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper. […]

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.
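Schneier’s deeper point applies directly to the West Virginia scheme: the weak link in phone voting is the phone itself, and a blockchain only certifies whatever the phone hands it. A toy sketch makes this concrete; it is a hypothetical illustration, not Voatz’s actual protocol:

```python
# Toy sketch of why a blockchain doesn't secure a phone-cast ballot.
# Hypothetical illustration only, not Voatz's actual protocol.

import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Chain a ballot onto the ledger; integrity begins only at submission."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def cast_ballot_on_phone(choice: str, malware_installed: bool) -> dict:
    """What the voter's device actually hands to the ledger."""
    if malware_installed:
        choice = "Candidate B"   # silently swapped before anything is hashed or signed
    return {"voter_id": "overseas-0042", "choice": choice}

ledger = [{"hash": "GENESIS"}]

ballot = cast_ballot_on_phone("Candidate A", malware_installed=True)
new_hash = block_hash(ledger[-1]["hash"], ballot)
ledger.append({"hash": new_hash, "ballot": ballot})

# The chain verifies perfectly: it faithfully preserves the tampered ballot.
recomputed = block_hash(ledger[0]["hash"], ledger[1]["ballot"])
print("Chain valid:", recomputed == ledger[1]["hash"])     # True
print("Recorded choice:", ledger[1]["ballot"]["choice"])   # Candidate B
```

The ledger does exactly what it promises, immutably recording whatever it was given, which is no help at all if the ballot was altered before it arrived. Paper gives the voter something to verify and the auditor something to recount; an app gives them a cryptographically pristine copy of whatever the malware chose.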