by Audrey Westby
In 1964, Martin Luther King Jr. received an anonymous letter that began: “In view of your low-grade, abnormal personal behavior I will not dignify your name with either a Mr. or a Reverend or a Dr. And your last name calls to mind only the type of King such as King Henry the VIII and his countless acts of adultery and immoral conduct lower than that of a beast.” The letter was written by the first director of the FBI, J. Edgar Hoover. Using wiretaps of King’s office, hotel rooms and home, Hoover had discovered the minister’s involvement in extramarital affairs and was attempting to use the information to force King out of power.
This entire blackmail scheme was enacted legally. The attorney general at the time personally signed off on the warrant because, in the eyes of the government, Martin Luther King was not a nonviolent civil rights protester. He was a threat to national security.
Today, the FBI wouldn’t even need a warrant. All the necessary information is harvested for them, free of charge, by Silicon Valley companies, namely Facebook and Google. No warrant is required because that information already exists in a new sphere, the Internet, which has yet to be effectively regulated by U.S. law. With virtually no legal guidelines, American tech businesses can collect large quantities of consumer information and use it however they like, and the U.S. government can then harvest that information from these businesses’ servers. The Patriot Act, passed in response to 9/11, gave the government sweeping authority to spy on citizens, and an NSA program called PRISM, unearthed as part of the Snowden leaks, mines data specifically from major Internet companies. While Silicon Valley businesses may collect data for the relatively harmless purpose of advertising, the centralization of that data is itself dangerous. Facebook, Google and similar tech companies are well aware of that fact, but have decided to ignore it.
The current reality of Silicon Valley, great power without consequences, eerily resembles the unregulated world of Wall Street in the 1920s. In recent years, San Francisco has surpassed New York City as home to the most expensive real estate in the country. People picture the tech industry as a young, liberal hub with hammock-adorned campuses and nerdy college grads creating AI robots. But in truth, Silicon Valley is yet another breeding ground for wealthy (mostly) white men whose shortsighted actions produce consequences the rest of us must bear. The difference is that instead of taking our money, these companies take our privacy. And though many people claim privacy shouldn’t matter if you’re doing nothing wrong, they should consider that the consumers aren’t the ones defining ‘wrong.’
Many people see tech businesses like Google differently than they see the firms on Wall Street largely because these businesses used to be different, or at least claimed to be. Google’s first unofficial slogan was “Don’t be evil.” They pitted themselves against the profit-driven model that big businesses recklessly follow, and it was largely everyday people who invested in their IPO, not the wealthiest in America. Google claimed they were a tool for everyone and would therefore fight for everyone. But they are a profit-driven business and they make that money largely by farming data from users. It is no exaggeration to say that every single action you take through Google—your searches, emails, calendars—is stored and sold for the purpose of customized advertising. Last year, the six biggest Silicon Valley companies spent a total of $44 million on federal lobbying to keep the creation of Internet privacy laws at bay. Out of those six, Google spent the most.
What’s more, Google’s data harvesting directly enables surveillance and censorship regimes around the world. Take Google’s relationship with China. In the mid-2000s, when Google first arrived on mainland China, the corporation censored citizens’ search results in compliance with demands from the Chinese government. This meant no results would show if you searched, for example, “Guo Quan,” the name of a famous Chinese activist who once wrote: “To make money, Google has become a servile Pekinese dog wagging its tail at the heels of the Chinese Communists.” In 2010, Google claimed they would not stand for censorship any longer and moved off mainland China. However, this move seems suspect, considering that the company had been happily censoring their services for years. Some suggest that Google moved because they were scared. The Chinese government had hacked their servers to access the Gmail accounts of a few Chinese activists, and Google didn’t want a spotlight on their faulty security.
Google’s power is data. They weren’t going to give up the information of Chinese Internet users, a population more than twice that of U.S. Internet users, so easily. Google Analytics, a service that many websites use to collect data, stayed on the Chinese mainland. If an individual accesses a website with Google Analytics, a cookie is inserted in their browser that Google (and therefore China) can use to track them. Businesses like Google argue that they have a right to users’ information because they store data in exchange for a service. But, in this case, Google wasn’t even providing a service to the Chinese.
Recently Google has fully returned to mainland China, claiming that their new encryption of Internet searches defends against surveillance. However, encryption hardly means anything as long as Google Analytics still exists. All of this comes at a time when a Black Mirror-esque reality is in the works for the Chinese people. BBC News has reported that the Chinese government may be working to create a social credit system which would rate the trustworthiness of every citizen and would be used to revoke privileges from and assign higher surveillance to certain individuals. They would create the metric by compiling all the data the government has on a person—from tickets for running a red light to criticizing the ruling Chinese Communist Party online—into a single number.
Of course, the surveillance state exists in the U.S., too. Thanks to Edward Snowden, many now know (though seem unconcerned) that the NSA participates in an international surveillance alliance called Five Eyes, made up of Australia, Canada, New Zealand, the U.K. and the U.S. Facebook feeds much of that surveillance alliance, which is especially worrisome in light of the social media giant’s facial recognition software “DeepFace,” which tags users in photos automatically. DeepFace is 98 percent accurate and can recognize an individual’s hairstyle and body language as well as facial features. It could be used to create a photo-searchable database of Facebook users (meaning any stranger could take a picture of you and find you on the Internet). Facebook claims to allow users to opt out of the automatic tagging service, but the opt-out only means they won’t use the service to tag you in a photo, not that they won’t still use manually tagged photos to map your face.
The feature was shut down in Europe due to privacy concerns. In the U.S., only one state, Illinois, has argued that the software is illegal. Illinois passed a law in 2008 called the Biometric Information Privacy Act (BIPA), which sets limits on how companies can store and use a person’s biometric data, including things like fingerprints, voiceprints, iris scans and facial structures. Facebook dismissed the law, literally: when the people of Illinois brought a class action lawsuit against the company, Facebook filed a “motion to dismiss” their concerns. The courts, however, did not let Facebook off the hook so easily, and the case is currently in litigation. Facebook’s argument rests on a technicality: if the company is using a generic facial recognition algorithm rather than storing individual facial information, then it isn’t in violation of BIPA. In the meantime, the FBI has already employed DeepFace technology on multiple occasions.
U.S. citizens don’t often think about surveillance until something unforeseeable happens, like the assassination of an activist, a war breaking out or the election of an unpredictable commander-in-chief. The scariest part of Facebook’s advances in facial recognition software is that Trump has expressed interest in using such software to track immigrant and Muslim communities. Many states are trying to pass legislation that will keep Trump from using their public data records to target vulnerable communities, but most data on citizens in the U.S. is stored not in government offices but in giant servers belonging to Facebook, Google and others. The nature of online data collection is unsafe: even highly encrypted information can be accessed by anyone with the technical know-how. And government agencies like the FBI certainly have that know-how. As Daniel Miessler, a technology blogger and information security professional, tweeted regarding the Trump presidency: “This is why you don’t build secretive, all-powerful surveillance tools. You never know who’s going to get the keys.”
As Silicon Valley companies attract more and more users, the data that government surveillance agencies can access will only increase in detail and quantity. Take Google as an example. Since 2010, Google has bought a company a week, including many that produce technologies that further invade our privacy. These include facial recognition systems, mapping, wearable devices (like glucose-sensing contact lenses or Google Glass) and “the Internet of Things”: in-home devices connected to the Internet, like smart refrigerators or intelligent washing machines. Even if the information these devices collect does not end up in the hands of the government, it is still troubling how much power this gives companies like Google.
Social media sites regularly manipulate our newsfeeds for experimental purposes. Every member of Facebook has been a guinea pig in one experiment or another. In 2012, for three months prior to Election Day, Facebook changed the News Feeds of 1.9 million people, and in doing so claimed to increase voter turnout within that population by three percent. While that may not seem so daunting a statistic, it is frightening to think of the immense impact Facebook could have on an election if they decided to fully use their capabilities. Without regulations on what these companies are allowed to do, users can only hope they will remain on their side. But given the overriding concern with profit margins, this hope may be baseless. Silicon Valley businesses are already opaque about what data they collect and how they use it. Nowhere on any Google or Facebook site do they clearly lay out what information they take from users and where they send it. In 2015, Google reorganized under a new name, Alphabet, a giant conglomerate of which Google is just one facet, in order to expand further.
Anyone who looks into privacy and surveillance in the U.S. soon realizes that most Internet users are apathetic about security. The question is not why we should care about our privacy but why we don’t. Even though many are aware of the extent of data being collected, it seems unlikely anyone is planning a massive sit-in in Silicon Valley’s equivalent of Zuccotti Park. Maybe we don’t care about privacy because it’s not a human right—at least not in the same way food or water or shelter is. When we consider that humanity used to live without walls, or that the popularization of silent reading was once a radical advancement in privacy, then it becomes hard to see privacy as a fundamental part of being human. As the 19th-century journalist E.L. Godkin put it, “privacy is a distinctly modern product.” It is inherently modern because the need for privacy has always come directly out of technological advances. The first record of the ‘Right to Privacy’ appeared in the Harvard Law Review in 1890, in response to the invention of the camera. The rise of the first U.S. surveillance organization, the Black Chamber, grew out of another such advance, the telegraph.
But while privacy may have been invented, it was invented for a reason. Surveillance is an inherently oppressive power, and privacy is the defense against it. In today’s world of the Silicon Valley technology boom, surveillance grows exponentially faster than humanity’s ability to recognize the necessity of a right to privacy. We like to believe we would take action if the worst were to happen, but the worst may have already happened: the NSA created a worldwide surveillance organization that mines data from our social media accounts and President Trump has full access to it. If this scares us, we need to start caring about our privacy. The first step is to see Google, Facebook and the rest for what they are: thieves.
Part of the Ritual Issue