“Google” by William Blake
Now we know why Google has scrubbed almost all mention of “Don’t be evil” from its code of conduct:
Google bosses have forced employees to delete a confidential memo circulating inside the company that revealed explosive details about a plan to launch a censored search engine in China, The Intercept has learned.
The memo, authored by a Google engineer who was asked to work on the project, disclosed that the search system, codenamed Dragonfly, would require users to log in to perform searches, track their location — and share the resulting history with a Chinese partner who would have “unilateral access” to the data. […]
The memo identifies at least 215 employees who appear to have been tasked with working full-time on Dragonfly, a number it says is “larger than many Google projects.” It says that source code associated with the project dates back to May 2017, and “many infrastructure parts predate” that. Moreover, screenshots of the app “show a project in a pretty advanced state,” the memo declares.
Most of the details about the project “have been secret from the start,” the memo says, adding that “after the existence of Dragonfly leaked, engineers working on the project were also quick to hide all of their code.”
It’s pretty simple: if you want to operate in China, you have to play by the CPC’s rules. There is no way for Google to do that while successfully upholding the values it pretends to care about. Hence the secrecy.
Every age has its rituals. In the Age of Google, we have the Ritual of the reCAPTCHA, a compulsory visual test that requires a carbon-based organism to prove its sentience to a computer by selecting squares that seem to contain grainy images of a specified object. The organism must do this correctly in order to demonstrate to the computer’s satisfaction that it (the organism) possesses the mental faculties of invariant recognition, segmentation, and parsing, attributes in which humans tend to excel over computers. If the organism passes the test, it is permitted to continue with its intended task on the website.
The problem is that many human beings who are more or less sentient find the average reCAPTCHA hard and frustrating, owing to the intentionally crappy quality of the images, the poor visibility of the objects, and certain definitional problems that the average internet user is ill-equipped to deal with. For example, should the user, tasked with identifying “street signs,” click on a square that contains part of a sign post? Then there are questions of process. Does the user click Verify immediately after clicking all the relevant squares, or wait for new images to materialize in the squares that have been clicked? None of this is clear, none of it is explained. The user twists in a fog of doubt and confusion, and frequently fails the test.
The reCAPTCHA is the reductio ad absurdum of modern life, a grudging surrender of countless man-hours of labor (over 100 million reCAPTCHAs are displayed every day) to feed the ravenous maw of an emerging artificial superintelligence. Because, of course, by completing these image recognition tasks, the human user is training Google’s vast machine-learning datasets. TechRadar thanks you for your service in helping develop self-driving cars.
But while we are training Google’s neural networks, the machines are simultaneously training us — teaching us to be more compliant, more deferential to the machines, and more conversant in machine logic… in short, remaking humanity in their own image. The future is a slouched hominid clicking on a fuzzy image of a taco shop — forever.
Google leadership seminar (source)
I enjoyed this rant against Big Tech, which besides being funny, also contains the kernel of a very interesting idea for how to address the growing crisis around data privacy and ownership:
Bannon also added this gem about Tesla:
I do not have a dog in this fight, but Musk seems increasingly unhinged to me, and the little stunt he pulled with his abandoned buyout plan was undeniably shady. But… are you not entertained?