Freedom and Justice in our Technological Predicament

This is the thesis I wrote for my Master of Arts in philosophy.
It can also be downloaded as a PDF.


As the director of an NGO advocating for digital rights I am acutely aware of the digital trail we leave behind in our daily lives. But I too am occasionally surprised when I am confronted with concrete examples of this trail. Like when I logged into my Vodafone account (my mobile telephony provider) and—buried deep down in the privacy settings—found a selected option that said: “Ik stel mijn geanonimiseerde netwerkgegevens beschikbaar voor analyse” (“I make my anonymized network data available for analysis”).1 I turned the option off and contacted Vodafone to ask them what was meant by anonymized network data analysis. They cordially hosted me at their Amsterdam offices and showed me how my movement behaviour was turned into a product by one of their joint ventures, Mezuro:

Smartphones communicate continuously with broadcasting masts in the vicinity. The billions of items of data provided by these interactions are anonymized and aggregated by the mobile network operator in its own IT environment and made available to Mezuro for processing and analysis. The result is information about mobility patterns of people, as a snapshot or trend analysis, in the form of a report or in an information system.2

TNO had certified this process and confirmed that privacy was assured: Mezuro has no access to the mobility information of individual people. From their website: “While [the analysis] of mobility patterns is of great social value, as far as we’re concerned it is certainly not more valuable than protecting the privacy of the individual.”3

Intuitively something about Vodafone’s behavior felt wrong to me, but I found it hard to articulate why what Vodafone was doing was problematic. This thesis is an attempt to find reasons and arguments that explain my growing sense of discomfort. It will show that Vodafone’s behavior is symptomatic of our current relationship with technology: it operates at a tremendous scale, it reuses data to turn it into new products, and it feels empowered to do all this without checking with its customers first.

The main research question of this thesis is how the most salient aspects of our technological predicament affect both justice as fairness and freedom as non-domination.

The research consists of three parts. In the first part I will look at the current situation to understand what is going on. By taking a closer look at the emerging logic of our digitizing society I will show how mediation, accumulation and centralization shape our technological predicament. This predicament turns out to be one where technology companies have a domineering scale, where they employ a form of data-driven appropriation and where our relationship with the technology is asymmetrical and puts us at the receiving end of arbitrary control. A set of four case studies based on Google’s products and services deepens and concretizes this understanding of our technological predicament.

In the second part of the thesis I will use the normative frameworks of John Rawls’s justice as fairness and Philip Pettit’s freedom as non-domination to problematize this technological predicament. I will show how data-driven appropriation leads to injustice through a lack of equality, the abuse of the commons, and a mistaken utilitarian ethics. And I will show how the domineering scale and our asymmetrical relationship to the technology sector leads to unfreedom through our increased vulnerability to manipulation, through our dependence on philanthropy, and through the arbitrary control that technology companies exert on us.

In the third and final part I will take a short and speculative look at what should be done to get us out of this technological predicament. Is it possible to reduce the scale at which technology operates? Can we reinvigorate the commons? And how should we build equality into our technology relationships?

Part 1: What is going on?

The digitization of our society is continuing at a rapid pace.4 The advent of the internet, and with it the World Wide Web, has been a catalyst for the transition from an economy based on dealing with the materiality of atoms towards one based on the immateriality of bits.

The emerging logic of our digitizing world

This digitization has made internet technology omnipresent in our daily lives. For example, 97.1% of Dutch people over 12 years old have access to the internet, 86.1% use it (nearly) every day and 79.2% access the internet using a smartphone (that was 40.3% in 2012).5 12.1 million Dutch citizens have WhatsApp installed on their phone (that is around 90% of the smartphone owners) and 9.6 million people use the app daily. For the Facebook app these figures are 9.2 million and 7.1 million respectively.6 This turns out to have three main effects.

The digitization of society means that an increasing number of our interactions are technologically mediated. This mediation then enables a new logic of accumulation based on data. Together these two effects create a third: a centralizing force making the big become even bigger.

From mediation …

It would be fair to say that many if not most of our interactions are technologically mediated7 and that we are all becoming increasingly dependent on internet-based technologies.

This is happening with our social interactions, both in the relationships with our friends and in our relationships at work. Between two people speaking on the phone sits T-Mobile, between a person and the friends they email sits Gmail, to stay professionally connected we use LinkedIn, and we reach out to each other using social media like Facebook, Twitter, Instagram and WhatsApp.

It is not just our social interactions which are mediated in this way. Many of our economic or commercial interactions have a third party in the middle too. We sell the stuff that we no longer want using online marketplaces like eBay (and increasingly through social media like Facebook too), cash is slowly but surely being replaced by credit and debit cards, and we shop online too. This means that companies like Amazon, Mastercard, and ING sit between us and the products we buy.

Even our cultural interactions are technologically mediated through the internet. Much of our watching of TV is done online, we read books via e-readers or listen to them as audio books, and our music listening is done via streaming services. This means that companies like YouTube, Netflix, Amazon, Audible, and Spotify sit between us and the cultural expressions and products of our society.

… to accumulation …

This global architecture of networked mediation allows for a new logic of accumulation which Shoshana Zuboff—in her seminal article Big Other—calls “surveillance capitalism.”8 I will opt for the slightly more neutral term “accumulation”. Throughout her article, Zuboff uses Google as an example, basing a lot of her argument on two articles by Hal R. Varian, Google’s chief economist. According to Varian, computer mediated interaction facilitates new forms of contract, data extraction9 and analysis, controlled experimentation, and personalization and customization.10 These are the elements on which Google bases its playbook for business.

Conceptualizing the way that (our) data flows in these data economies, it is convenient to align with the phases of the big data life cycle. In Big Data and Its Technical Challenges, Jagadish et al. split the process up into: data acquisition; information extraction and cleaning; data integration, aggregation, and representation; modeling and analysis; and interpretation.11 Many authors collapse these phases into a three-phase model: acquisition, analysis and application.12 From the perspective of individual citizens or users this is a very clean way of looking at things: data is13 acquired, then something is done to it, and finally it is applied.14 Not all data flows in the same way in this accumulation ecosystem. It is therefore relevant to qualify the different ways in which this happens. For each of the phases, I will touch on some of the distinctions that can be made about the data.

Phase 1: Acquisition (gather as much data as possible)

Being the intermediary—the third party between users and the rest of their world—provides for a privileged position of surveillance. Because all the services of these intermediaries work with central servers, the accumulator can see everything their users do. It is trivial for Amazon to know how much time is spent on reading each book, which passages are the most highlighted, and what words are looked up the most in their dictionary. Amazon knows these things for each customer and at the aggregate level. Similarly, Spotify knows exactly what songs each individual likes to play most, and through that has a deep understanding of the current trends in music.

The cost (and size) of sensors is diminishing.15 This means that over the last couple of years it has become feasible to outfit users with products that have sensors (think microphones, cameras, GPS chips and gyro sensors). Every voice command that the user gives, every picture that is taken, and every route-assisted trip is more data for the accumulator. These companies are now even starting to deliver sensors that go on (or in) our bodies, delivering data about sleep patterns, glucose levels, or general activity (like steps taken).

Some accumulators manage to get people to actually produce the data for them. Often this is data that can be turned into useful content for other users (like reviews of books on Amazon), or helps in solidifying trust in the network (reviews of Airbnb hosts and guests), and occasionally users are forced to provide data before they get access to a website (proving that you are human by clicking on photos).

Accumulators like Google and Facebook retain an enormous amount of data for each individual user,16 and even when they are forced to delete this personal data, they often resort to anonymization techniques in order to retain as much of the data as possible.17

Qualifying acquisition

The first distinction is whether the data relates to human beings at all. For most data that is captured via the internet or from our built environment this is the case, but there are domains where the data has nothing to do with us. It is assumed in what follows that we are talking about data that relates to humans.18

A dimension that will come back in all three phases is transparency. In this phase the question to ask is whether the person is aware that data is being collected and what data that is. This question can be asked for each individual, but it can also be asked in a more general way: is it possible to know what is being collected?

Another important distinction to make is whether the data is given voluntarily. Does the person have a choice about whether the data is given? This has an absolute side to it: is it possible for the person not to give this data? But more often there is some form of chained conditionality: given the fact that the person has decided to walk on this street, can they choose to not have their data collected? Has the person given their permission for the data to be acquired?

Often (but not always) related to this voluntariness is whether the data is collected as part of a private relationship between the person and the collector or whether the collection is done in the public sphere.

Furthermore, it is relevant to consider whether the data can be collected only once or whether it can be collected multiple times. A very similar question is whether it can only be collected by a single entity or whether others can collect it too.

Finally, it is worthwhile to think about whether the particular data is collected purposefully and with intent or whether the collection is a by-product of delivering another service.

Making the distinction between personal data (defined in Europe’s General Data Protection Regulation as relating to an identified or identifiable individual19) and non-personal data probably isn’t helpful in this phase. This data relates to human beings, and because it is very hard to anonymize data20—requiring an active act by the collector of the data—it is probably best to consider all the data at this point in the process as personal data.

Phase 2: Analysis (use data scientists, experiments and machine learning to understand how the world works)

When you have a lot of data in a particular domain, you can start to model it to see how the domain works. If you collect a lot of movement data through people who use your mapping software to find their way, then you will gain a lot of insight into traffic patterns: Where is it busy at what time in the day? What happens to the traffic when a particular street gets broken up? If you also track news and events, then you would be able to correlate certain events (a concert in a stadium) with certain traffic patterns.

You no longer need to make an explicit model to see how the world works. Machine learning algorithms can use statistical methods to find correlational patterns. Chris Anderson (in)famously predicted that the tremendous amount of data that is being collected and available for analysis will make the standard scientific method—of making a hypothesis, creating a model, and finally testing the model—obsolete:

The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.21

In certain domains it is possible to speed up the development of these machine learning algorithms by running experiments. If you have the ability to change the environment (hard to do with traffic, easy to do with a web interface), you can see how the behavior changes when the environment changes in a certain way. According to Anderson “Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition.”22

Qualifying analysis

There are basically three possible results when it comes to a person’s data at the end of this phase:

  1. The person is still identifiable as that person.
  2. The data is pseudonymized or anonymized, but is still individualized. There is still a direct relationship between the collected data from the person and how it is stored.
  3. The data is aggregated into some form that no longer relates to an individual. The person has become part of a statistic or a weight in some probabilistic model.

Of course it can also be a combination of these three results. They are in no way mutually exclusive.

Once again, it might also be relevant to consider how transparent it is to the person how their data is being stored.

Phase 3: Application (use the model to create predictions and sell these)

When you understand how a particular domain works, you can use that understanding to predict the future. If you know the current circumstances and you know what usually happens in these circumstances, you can start to make predictions and sell them on the market.

The dominant market for predictions at this point in time is advertising. Companies like Google and Facebook use this logic of accumulation to try and understand buying intent and sell this knowledge as profiles to advertise against. Facebook for example, allows you to target on demographics, location, interests (“Find people based on what they’re into, such as hobbies, favourite entertainment and more.”) and behaviours (“Reach people based on their purchasing behaviours, device usage and other activities”).23 Some marketers have gone to the trouble of listing out all of Facebook’s ad targeting options.24 These include options like “Net Worth over $2,000,000”, “Veterans in home”, “Close Friends of Women with a Birthday in 0-7 days”, “Likely to engage with political content (conservative)”, “Active credit card user”, “Owns: iPhone 7” and “African American (US) Multicultural Affinity.”25 It is important to note that many of these categories are not based on data that the user has explicitly or knowingly provided, but are based on a calculated probability instead.

Advertising isn’t the only market where predictions can be monetized; the possibilities are endless. Software predicts crime and sells these predictions to the police,26 software predicts the best performing exchange-traded funds in each asset class and sells these predictions as automatic portfolio management to investors,27 and software predicts which patients will most urgently need medical care and sells these predictions to hospitals.28 Some people label this moment in time as “the predictive turn”.29

Qualifying application

This is the phase where the data that has been acquired and analyzed is put to use back in the world. The first relevant distinction is whether the use of the data (directly) affects the person from whom the data was acquired. Is there a direct relationship?

Next, it is important to look at whether the data is applied in the same domain (or within the same service) as where it was acquired. Or is it acquired in one domain and then used in another? If that is the case, then often the application itself is part of the acquisition of data in some other process.

The distinction between private use or public use of the data is interesting too. Sometimes this distinction is hard to make, so the cleanest way to draw the line is between proprietary data and data that can be freely used and shared. Another way of exploring the distinction between private and public use is to ask where (the majority of) the value accrues. Closely related to this point is the question of whether the use of the data aligns with what the person finds important. Is the use of the data (socially) beneficial from the person’s perspective?

Of course it is again relevant whether it is transparent to the person how the data is applied.

Data appropriation

Having looked at the three phases of accumulation it becomes possible to create a working definition of data appropriation. To “appropriate”, in regular use, means to take something for one’s own use, typically without the owner’s permission,30 or to take or make use of without authority or right.31 The definition of “data appropriation” can stay relatively close to that meaning. Data is appropriated from a person when all three of the following conditions are true:

  1. The data originates with that person.
  2. The organization that acquires, analyses or applies the data isn’t required by law to collect, store, or use the data.
  3. Any one of the following conditions is true:
    • The data is acquired against their volition (i.e. involuntarily).
    • The data is acquired without their knowledge.
    • The data is applied against their volition.
    • The data is applied without their knowledge.

It is important to note that what is done to the data in the analysis phase—whether the data is pseudonymized, anonymized or used at an aggregate level—has no bearing on whether the use is to be considered as appropriative. So the fact that there might not have been a breach of privacy (or of contextual integrity) does not mean there was no appropriation. And similarly, it doesn’t matter for what purposes the data is applied. Even if the application can only serve towards a public social benefit, it might still have been appropriation that enabled the application.
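For readers who prefer a more formal statement, the working definition above can be restated as a boolean predicate. This is only an illustrative sketch; the field and function names are mine, not part of the definition itself.

```python
from dataclasses import dataclass


@dataclass
class DataUse:
    """One instance of an organization acquiring and applying a person's data."""
    originates_with_person: bool  # condition 1: the data originates with the person
    legally_required: bool        # condition 2 (negated): collection/storage/use mandated by law
    acquired_voluntarily: bool    # acquisition was not against the person's volition
    acquired_knowingly: bool      # the person knew the data was being acquired
    applied_voluntarily: bool     # application was not against the person's volition
    applied_knowingly: bool       # the person knew the data was being applied


def is_appropriation(use: DataUse) -> bool:
    """True when all three conditions of the working definition hold."""
    # Condition 3: any one of the four sub-conditions suffices.
    condition_3 = (not use.acquired_voluntarily
                   or not use.acquired_knowingly
                   or not use.applied_voluntarily
                   or not use.applied_knowingly)
    return (use.originates_with_person
            and not use.legally_required
            and condition_3)
```

Note that, in line with the remarks above, the predicate deliberately takes no input about anonymization, aggregation, or the social benefit of the application: none of these have any bearing on whether the use counts as appropriative.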

… to centralization

Mediation and accumulation create a third effect: they lead to centralization. Initially we thought that the internet would be a major source of disintermediation and would remove the intermediaries from our transactions. Robert Gellman’s 1996 article Disintermediation and the Internet is illustrative of this idea. He wrote:

The Internet offers easy, anonymous access to any type of information […]. The traditional intermediaries—newsstands, book stores, and video stores—are not necessary. […] With the Internet the traditional intermediaries are swept away. Anyone of any age who can click a mouse can access any public server on the network. The limitations that were inherent in traditional distribution methods are no longer there.32

Allowing for direct (often even peer-to-peer) connections, we would be able to decrease our dependence on companies earning their money through offering different options to their customers. We no longer needed travel agents to book holidays, or real estate agents to buy and sell houses. And news would find us directly rather than having to be bundled into a newspaper.

A more truthful description of what actually happened is that we switched out one type of intermediary for another. Rather than being dependent on travel agents, realtors and newspapers, we became dependent on companies like Google, Facebook, and Amazon. According to Ben Thompson, to be successful in the pre-internet era you either had to have a monopoly or you needed to control distribution. The internet has changed this. Distribution of digital goods is free and transaction costs are zero (meaning you can scale to billions of customers):

Suppliers can be aggregated at scale leaving consumers/users as a first order priority. […] This means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.33

Thompson calls this “aggregation theory”, and uses it to explain the success of Google’s search, Facebook’s content, Amazon’s retail goods, Netflix’s and YouTube’s videos, Uber’s drivers, and Airbnb’s rooms. Aggregation theory has a centralizing effect:

Thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers.34

It is interesting to note that the aggregators don’t create their monopoly by limiting the options the internet user has. It could even be said that the user chooses to be inside the aggregator’s monopoly because of the better user experience.35 However, it is the monopolist which in the end has the singular ability to fully shape the user’s experience.

Our technological predicament

It is now clear how mediation allows for a new logic of accumulation which then keeps on accelerating through centralization. Each of these effects results in a particular salient characteristic of our technological predicament. Mediation leads to asymmetric relationships with arbitrary control, accumulation leads to data-driven appropriation, and centralization leads to a domineering scale.

Asymmetric relationships with arbitrary control

The relationship between technology companies and their users is one where the former can afford to make unilateral and completely arbitrary decisions. It is the company that decides to change the way a product looks or works, and it is the company that can decide to give the user access or to block their account. This leads to a loss of control (the company making the choices instead of the user), often with few if any forms of redress in case something happens that the user doesn’t like.

There is also a clear asymmetry in transparency. These companies have a deep knowledge of their users, while the users can usually know only very little about the company.

Data-driven appropriation

The technology companies base their services on—and derive their quality from—the data that they use as their input. Often this data is given by the user through using the product or through giving their attention, sometimes the user is actively turned into a data collector, and occasionally these companies free-ride on other services that are open enough to allow them to use their data.

It is important to accentuate the nontransparent nature of much of what these companies do. Often the only way to try to understand how they use data is through a black-box methodology: trying to see what goes into them and what comes out of them, and using that information to try and piece together the whole puzzle. The average user will have little insight into or knowledge of how the products they use every day work, or what their larger impact might be.

Even if there is the option not to share your data with these companies, then there still is what Solon Barocas and Helen Nissenbaum call the tyranny of the minority: “The willingness of a few individuals to disclose information about themselves may implicate others who happen to share the more easily observable traits that correlate with the traits disclosed.”36

Technology companies have the near classic feature of capitalism: they manage to externalize most of the costs and the negative societal consequences that are associated with the use of their products, while also managing to hold on to a disproportionate amount of the benefits that accrue. The costs that have to be borne by society aren’t spread out evenly. The externalities have disparate impacts, usually strengthening existing divisions of power and wealth.

Domineering scale

Centralization is the reason why these technology companies can operate at a tremendous scale. Their audience is the (connected) world and that means that a lot of what they do results in billions of interactions. The technology giants that are so central to our lives mostly have a completely dominant position for their services or products. In many cases they have a de facto monopoly, with the accompanying high level of dependence for its users.

The fact that information-based companies have very recently replaced oil companies in the charts listing the largest companies in the world by market value37 is clear evidence of this dominance.

Four Google case studies

So far, the discussion about our technological predicament has stayed at an abstract level. I will use a set of four case studies to make our predicament more concrete, to broaden the conception of what can be done through accumulated data, and to deepen the understanding of how that is done.

All of these case studies are taken from the consumer product and services portfolio of Google,38 as one of the world’s foremost accumulators. Most readers will be familiar with—and users of—these services. I want to highlight some of the lesser known aspects of these products and show how all of them have the characteristics of asymmetrical relationships, data-driven appropriation and a domineering scale. Although the selection of these cases is relatively arbitrary,39 together they do span the territory of practices that will turn out to be problematic in the second part of this thesis.

Google completely dominates the search engine market. Worldwide—averaging over all devices—its market share is about 75%.40 But in certain markets and for certain devices this percentage is much higher, often above 90%.41 Every single one of the more than 3.5 billion daily searches42 is used by Google to further tweak its algorithms and make sure that people find what they are looking for. Search volume drives search quality,43 and anybody who has ever tried any other search engine knows that Google delivers the best results by far.44

A glance at anyone’s search history will show that Google’s search engine is both used to look up factual information (basically it is a history of things that this person didn’t know yet), as well as the transactional intentionality of that user (what that person is intending to do, buy, or go to). On the basis of this information, Google is able to infer many things about this person, including for example what illnesses the person might have (or at least their symptoms), how they will likely vote in the next election, and what their job is. Google even knows when this person is sleeping, as these are the moments when that person isn’t doing any searching.

Google makes some of its aggregated search history available for research through Google Trends.45 You can use this tool to look up how often a particular search is done. Google delivers this data anonymously: you can’t see who is searching for what. In his book Everybody Lies, Seth Stephens-Davidowitz has shown how much understanding of the world can be gleaned through this tool. He contends that Google’s search history is a more truthful reflection of what people think than any other way of assessing people’s feelings and thoughts. People tell their search engines things they wouldn’t say out in the open. Unlike on social media like Facebook, we don’t only show our good side to Google’s search engine. Stephens-Davidowitz became famous for his research using the frequency of Google search queries that include racist language to show that racism is way more prevalent in the United States than most surveys say it is. He used Google data to make it clear that Obama lost about 4 percentage points in the 2008 vote, just because he was black.46 Stephens-Davidowitz is “now convinced that Google searches are the most important dataset ever collected on the human psyche.”47 We shouldn’t forget that he was able to do his research through looking at Google search history as an outsider, in a way reverse engineering the black box.48 Imagine how much easier it would be to do this type of research from the inside.

It often feels like Google’s search results are a neutral representation of the World Wide Web, algorithmically surfacing what is most likely to be the most useful information to deliver as the results for each search, and reflecting what is searched by the searching public at large. But it is important to realize two things. Firstly, what Google says about you frames to a large extent how people see you. And secondly, the search results are not neutral, but are a reflection of many of society’s biases.

The first page of search results when you do a Google search for your full name, in combination with the way Google presents these results (do they include pictures, videos, some snippets of information?), has a large influence on how people initially see you. This is even more true in professional situations and in the online space. You have very little influence over what information is shown about you on this first page.

This fact is the basis of the now famous case at the European Court of Justice, pitting Google against Mario Costeja González and the Spanish Data Protection Authority. Costeja González was dismayed at the fact that a more than ten-year-old piece of information, from a required ad in a newspaper describing his financial insolvency, was still ranking high in the Google search results for his name, even though the information was no longer directly relevant. The court recognized the special nature of search engine results:

Since the inclusion in the list of results, displayed following a search made on the basis of a person’s name, of a web page and of the information contained on it relating to that person makes access to that information appreciably easier for any internet user making a search in respect of the person concerned and may play a decisive role in the dissemination of that information, it is liable to constitute a more significant interference with the data subject’s fundamental right to privacy than the publication on the web page.49

The Court told Google to remove the result at the request of Costeja González. This allowed him to exercise what came to be called “the right to be forgotten”, but what should really be called “the right to be delinked”. In her talk Our Naked Selves as Data – Gender and Consent in Search Engines, human rights lawyer Gisela Perez de Acha talks about her despair at Google still showing the pictures of her topless FEMEN-affiliated protest from a few years back. Google has surfaced her protest as the first thing people see about her when they look up her name. In the talk, she wonders what we can do to fight back against private companies deciding who we are online.50

That Google’s search results aren’t neutral, but a reflection of society’s biases, is described extensively by Safiya Umoja Noble in her book Algorithms of Oppression. The starting point for Noble is one particular moment in 2010:

While Googling things on the Internet that might be interesting to my stepdaughter and nieces, I was overtaken by the results. My search on the keywords “black girls” yielded [a pornographic website] as the first hit.51

For Noble this is a reflection of the way black girls are hypersexualized in American society in general. She argues that advertising relating to black girls is pornified and that this translates into what Google decides to show for these particular keywords. This societal bias is reflected in many more search results, for example when searching for “three black teenagers” (which showed inmates)52 or “unprofessional hairstyles for work” (which showed black women with natural hair that isn’t straightened).53

The lack of a black workforce at Google,54 and the little attention paid to the social dimension of technology in the majority of engineering curricula, don’t help in raising awareness and preventing the reification of these biases. Google usually calls these results anomalies that are beyond its control. But Noble asks: “If Google isn’t responsible for its algorithm, then who is?”55


YouTube is the second largest search engine in the world (after Google’s main search engine).56 More than 400 hours of video are uploaded to YouTube every minute,57 and together we watch more than a billion hours of YouTube videos every single day.58,59 It is safe to say that YouTube plays a very big role in our lives.

I want to highlight three central aspects of YouTube. First, I will show how Google regulates much of our expression through the relatively arbitrary blocking of YouTube accounts. Next, I will show how the data-driven business model, in combination with the ubiquity and commodification of artificial intelligence, leads to some very surprising results. Finally, I will show how Google relies on free human labor to increase the quality of its algorithmic machine.

Women on Waves is an organization which “aims to prevent unsafe abortions and empower women to exercise their human rights to physical and mental autonomy.”60 It does this by providing abortion services on a ship in international waters. In recent years, it has also focused on providing women internationally with abortion pills, so that they can perform medical abortions themselves. Women on Waves has YouTube videos in many different languages showing how to do this safely.61 In January 2018, its YouTube account was suspended for violating what YouTube calls its “community guidelines”. Going through the appeals process didn’t help. After the story generated some negative media attention, the account was reinstated and Google issued a non-apology for an erroneous block. Unfortunately, a similar suspension has since happened at least two more times, with similar results. YouTube refuses to say why and how these blocks happen (hiding behind “internal information”), and says that it has to take down so much content every day that mistakes are bound to be made.62 This is of course just one example among legion. According to Evelyn Austin, the net result of this situation is that “users have become passive participants in a Russian Roulette-like game of content moderation.”63

In late 2017, artist James Bridle wrote a long essay about the near symbiotic relationship between younger children and YouTube.64 According to Bridle, children are often mesmerized by a diverse set of YouTube videos: from nursery rhymes with bright colours and soothing sounds to surprise egg unboxing videos. If you are a YouTube broadcaster and want to get children’s attention (and the accompanying advertising revenue), then one strategy is to copy and pirate other existing content. A simple search for something like “Peppa Pig” gives you results where it isn’t at all obvious which are the real videos and which are the copies. Branded content usually functions as a trusted source. But as Bridle writes:

This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.65

YouTube creators also crank up their views by using the right keywords in the title. So as soon as something is popular with children, millions of similar videos will be created, often by bots. Bridle finds it hard to assess the degree of automation, as it is also often real people acting out keyword-driven video themes. The vastness of the system, and the many languages in which these videos are available, creates a dimensionality that makes it hard to think about and understand what is actually going on. Bridle makes a convincing point that for many of these videos neither the creator nor the distribution platform has any idea of what is happening. He then goes on to highlight the vast number of videos that use similar tropes, but contain a lot of violence and abusive scenes. He can’t find out who makes them or with what intention, but it is clear that they are “feeding upon a system which was consciously intended to show videos to children for a profit” and for Bridle it is also clear that the “system is complicit in the abuse.”66 He thinks YouTube has a responsibility to deal with this, but can’t really see a solution other than dismantling the system. The scale is too big for human oversight, and there is no nonhuman oversight that can adequately address the situation Bridle has described. To be clear, this is not just about children’s videos. It would be just as easy to write a similar narrative about “white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.”67

The conspiratorial nature of many of the videos on YouTube is problematic for the platform. In March 2018 it therefore announced that it would start posting information cues linking to fact-based content alongside conspiracy theory videos, relying on Wikipedia to provide this factual information.68 It made this announcement without consulting Wikimedia, the foundation behind Wikipedia. As Louise Matsakis writes in Wired:

YouTube, a multibillion-dollar corporation flush with advertising cash, had chosen to offload its misinformation problem in part to a volunteer, nonprofit encyclopedia without informing it first.69

Wikipedia exists because millions of people donate money to the foundation and because writers volunteer their time to make the site into what it is. Thousands of editors monitor the changing contents of the encyclopedia, and the pages that track conspiracy theories in particular usually have years of active work behind them.70 YouTube apparently had not considered what impact the linking from YouTube would have on Wikipedia. This is not just the technological question of whether the infrastructure could handle the extra traffic, but also the question of what it would do to the editor community if the linking were to lead to extra vandalism, for example. Wikipedian Phoebe Ayers tweeted: “It’s not polite to treat Wikipedia like an endlessly renewable resource with infinite free labor; what’s the impact?”71


Whenever I give a presentation somewhere in the Netherlands, I always ask people to raise their hand if they used Google Maps to reach the venue. Most times a large majority of the people have done exactly that. The service has become so ubiquitous that it is hard to imagine how we ever got to where we needed to be before it existed. The tool works so well that most people just blindly follow its instructions most of the time. Google Maps is literally deciding what route we take from the station to the theatre.

Even though maps are highly contentious and deeply political by nature,72 we still assume that they are authoritative and in some way neutral. I started doubting this for the first time when I found out that Google Maps would never route my cycle rides through the canals of Amsterdam, but would always route me around them, even if this was obviously slower.73 One of my friends was sure that rich people living on the canals had struck a deal with Google to decrease the traffic in front of their house. I attributed it to Google’s algorithms being more attuned to the street plan of San Francisco than to that of a World Heritage site designed in the 17th century.

But then I encountered the story of the Los Angeles residents living at the foot of the hills that harbor the Hollywood Sign. For the past couple of years they have been on a mission to wipe the Hollywood Sign off the virtual map, because they don’t like tourists parking in their streets.74 And for a while they were successful: when you stood at the bottom of the hill and asked Google Maps for a route, the service would tell you to walk for an hour and a half to a viewing point at the other end of the valley, instead of showing the walking path that takes you up the hill in 15 minutes. This tweak to the mapping algorithm is just one of countless examples of Google applying human intervention to improve its maps.75 As users of the service, we can’t see how much human effort has gone into tweaking the maps to give the best possible results. This is because (as I wrote in 2015) “every design decision, is completely mystified by a sleek and clean interface that we assume to be neutral.”

Next to human intervention, Google also uses algorithms based on artificial intelligence to improve the map. The interesting thing about internet-connected digital maps is that they allow the map to change on the basis of what its users are doing. Your phone and all the other phones on the road are constantly communicating with Google’s services, which makes it possible for Google to tell you quite precisely when you are going to hit a traffic jam. In 2016, Google rolled out an update to its maps that highlights in orange “areas of interest […], places where there’s a lot of activities and things to do.”76 Google decides which areas are of interest through an algorithm (with the occasional human touch in high-density areas): “We determine ‘areas of interest’ with an algorithmic process that allows us to highlight the areas with the highest concentration of restaurants, bars and shops.”77

This obviously raises the question: interesting for whom? Laura Bliss found that the service didn’t highlight streets packed with restaurants, businesses and schools in relatively low-income and predominantly non-white areas. Real-life divides are now manifested in a new way, according to Bliss. She asks the largely rhetorical questions: “Could it be that income, ethnicity, and Internet access track with ‘areas of interest’”, with the map literally highlighting the socio-economic divide? And isn’t Google actually shaping the interests of its map readers, rather than showing them what is interesting?78


The World Wide Web is full of robots doing chores. My personal blog,79 for example, gets a few visits a day from Google’s web crawler coming to check if there is anything new to index. Many of these robots have nefarious purposes. For instance, there are programs on the loose filling in web forms all over the internet, trying to get their information listed for spam purposes or to find a weak spot in the technology and break into the server.80 This is why you often have to prove that you are a human by doing a chore that is relatively easy for humans, while being difficult for robots: typically reading a set of distorted letters and typing them into a form field. These challenges are named CAPTCHAs.81

In 2007, the computer scientist Luis von Ahn invented the reCAPTCHA as part of his work on human-based computation (in which machines outsource certain steps to humans). He thought it was a shame that the effort people put into CAPTCHAs was wasted. A reCAPTCHA showed people two words from old books that had been scanned by the Internet Archive: one word that reCAPTCHA already understood (to check whether the person was indeed human) and another that reCAPTCHA wasn’t yet too sure about (to help digitize these books).82

Google bought reCAPTCHA in 200983 and kept the service free to use for any website owner. It also switched the digitization effort from the open Internet Archive to its own proprietary book scanning effort. More than a million websites currently integrate reCAPTCHA into their pages to check whether their visitors are human. Google has a completely dominant market position for this service, as there are very few good alternatives. In 2014, Google changed reCAPTCHA’s slogan from “Stop Spam, Read Books” to “Tough on Bots, Easy on Humans,”84 and at the same time changed the problem from text recognition to image recognition. In the current iteration, people have to look at a set of photos and click on all the images that contain a traffic sign or a store front (see fig. 1 for an example).

Figure 1: Google’s reCAPTCHA asking to identify store fronts

With the switch to images, you are no longer helping Google to digitize books; you are now a trainer for its image recognition algorithms. As Google notes on its reCAPTCHA website under the heading “Creation of Value. Help everyone, everywhere – One CAPTCHA at a time.”:

Millions of CAPTCHAs are solved by people every day. reCAPTCHA makes positive use of this human effort by channeling the time spent solving CAPTCHAs into digitizing text, annotating images, and building machine learning datasets. This in turn helps preserve books, improve maps, and solve hard AI problems.85

Gabriela Rojas-Lozano tried to sue Google for making her do free labor, without telling her, while signing up for Gmail.86 She lost the case because the judge was convinced that she would still have registered for a Gmail account even if she had known about giving Google those ten seconds of labor.87 Her individual “suffering” was indeed ludicrous, but she did have a point if you look at society at large. Every day, hundreds of millions of people fill in reCAPTCHAs for Google to prove that they are human.88 This means that collectively we give Google more than 135,000 FTE of our labor for free.89 Google’s top-notch image recognition capability is partially enabled, and has certainly been catalysed, by this free labor.
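A figure of this magnitude can be reproduced with a back-of-the-envelope calculation. The inputs below are illustrative assumptions on my part (a daily volume in the hundreds of millions, the roughly ten seconds per challenge at issue in the lawsuit, and a conventional definition of one FTE), not figures published by Google:

```python
# Rough estimate of the free labor reCAPTCHA collects in a year.
# All input numbers are assumptions for illustration only.
captchas_per_day = 250_000_000   # "hundreds of millions" of reCAPTCHAs daily
seconds_per_captcha = 10         # the ~10 seconds at issue in the lawsuit
fte_hours_per_year = 1_840       # one full-time employee: ~46 weeks of 40 hours

labor_seconds_per_year = captchas_per_day * seconds_per_captcha * 365
fte_equivalent = labor_seconds_per_year / (fte_hours_per_year * 3600)
print(f"{fte_equivalent:,.0f} FTE")  # in the order of 135,000 FTE
```

Under these assumptions the estimate lands just above 135,000 FTE; the exact number is obviously sensitive to the assumed daily volume.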

In June 2018, Gizmodo reported that Google had contracted with the United States Department of Defense to “help the agency develop artificial intelligence for analyzing drone footage.”90 This caused considerable outrage among Google employees, who weren’t happy with their company offering surveillance resources to the military. I personally was quite upset by the idea that all my clicking on store fronts (which I am regularly forced to do to access the information I need, even on services that have nothing to do with Google) is now helping the US with its drone-based assassination programs in countries like Afghanistan, Yemen and Somalia.91

Part 2: How is that problematic?

Now that we have a clear idea about our technological predicament, we can start to explore the potential effects that this might have on the structure of our society. It is obvious that these effects will be far-reaching, but at the same time they are undertheorized. As Zuboff writes:

We’ve entered virgin territory here. The assault on behavioral data is so sweeping that it can no longer be circumscribed by the concept of privacy and its contests. This is a different kind of challenge now, one that threatens the existential and political canon of the modern liberal order defined by principles of self-determination that have been centuries, even millennia, in the making. I am thinking of matters that include, but are not limited to, the sanctity of the individual and the ideals of social equality; the development of identity, autonomy, and moral reasoning; the integrity of contract, the freedom that accrues to the making and fulfilling of promises; norms and rules of collective agreement; the functions of market democracy; the political integrity of societies; and the future of democratic sovereignty.92

I will look at the three features of our technological predicament through the lens of justice as fairness and through the lens of freedom as non-domination. In both cases, I come to the conclusion that the effects are deleterious: data-driven appropriation leads to injustices, whereas the domineering scale and the asymmetrical relationships negatively affect our freedom.

Injustice in our technological predicament

To assess whether our technological predicament is just, we will look at it from the perspective of Rawls’s principles of justice. There are three central problems with the basic structure in our digitizing society: the first is a lack of equality in the division of the basic liberties, the second is an unjust division of both public and primary goods, and a final problem is tech’s reliance on utilitarian ethics to justify its behavior.

The demands of justice as fairness

For John Rawls, the subject of justice is what he calls the “basic structure of society”, which is “the way in which the major social institutions distribute fundamental rights and duties and determine the division of advantages from social cooperation.”93 Major social institutions are the principal economic and social arrangements and the political constitution.

The expository and intuitive device that Rawls uses to ensure that his conception of justice is fair is the “original position”. He writes: “One conception of justice is more reasonable than another, or justifiable with respect to it, if rational persons in the initial situation would choose its principles over those of the other for the role of justice.”94 The restrictions that the original position imposes on the arguments for principles of justice help with the justification of this idea:

It seems reasonable and generally acceptable that no one should be advantaged or disadvantaged by natural fortune or social circumstances in the choice of principles. It also seems widely agreed that it should be impossible to tailor principles to the circumstances of one’s own case. We should insure further that particular inclinations and aspirations, and persons’ conceptions of their good do not affect the principles adopted. The aim is to rule out those principles that it would be rational to propose for acceptance […] only if one knew certain things that are irrelevant from the standpoint of justice. […] To represent the desired restrictions one imagines a situation in which everyone is deprived of this sort of information. One excludes the knowledge of those contingencies which sets men at odds and allows them to be guided by their prejudices. In this manner the veil of ignorance is arrived at in a natural way.95

The parties in the original position, and behind this veil of ignorance, are to be considered as equals. Rawls: “The purpose of these conditions is to represent equality between human beings as moral persons, as creatures having a conception of their good and capable of a sense of justice.”96

According to Rawls, there would be two principles of justice that “rational persons concerned to advance their interests would consent to as equals when none are known to be advantaged or disadvantaged by social and natural contingencies.”97 The first principle requires equality in the assignment of basic rights and duties:

Each person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system of liberty for all.98

Whereas the second principle holds that social and economic inequalities are only just if they result in compensating benefits for everyone and the least advantaged in particular:

Social and economic inequalities are to be arranged so that they are both:

  1. to the greatest benefit of the least advantaged, consistent with the just savings principle, and
  2. attached to offices and positions open to all under conditions of fair equality of opportunity.99

These principles are to be ranked in lexical order. This means that the basic liberties can only be restricted for the sake of liberty (so when the less extensive liberty strengthens the total system of liberties shared by all, or when the less than equal liberty is acceptable to those with the lesser liberty), and that the second principle of justice goes before the principle of efficiency and before the principle of maximizing the sum of advantages.100

For Rawls, the second principle expresses an idea of reciprocity. Even though the principle initially looks biased towards the least favored, Rawls argues that “the more advantaged, when they view the matter from a general perspective, recognize that the well-being of each depends on a scheme of social cooperation without which no one could have a satisfactory life; they recognize also that they can expect the willing cooperation of all only if the terms of the scheme are reasonable. So they regard themselves as already compensated […] by the advantages to which no one […] had a prior claim.”101

Lack of equality

To show how data-driven appropriation leads to inequality, I will use the investigative journalism of political science professor Virginia Eubanks. She has published her research in Automating Inequality.102 According to Eubanks:

Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the health-care system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a kind of collective red-flagging, a feedback loop of injustice.103

She argues that we have forged “a digital poorhouse from databases, algorithms, and risk models,”104 and demonstrates this by writing about three different government programs that exhibit these features: a welfare reform effort, an algorithm to distribute subsidized housing to homeless people, and a family screening tool. The latter provides the clearest example of the possibly unjust effects of recursively using data to create models.

The Allegheny Family Screening Tool (AFST) is an algorithm—based on machine learning—that aims to predict which families are at a higher risk of abusing or neglecting their children.105 The Allegheny County Department of Human Services has created a large warehouse combining the data from twenty-nine different government programs, and has bought a predictive modelling methodology based on research in New Zealand106 to use this data to make predictions of risk.

There is a lot of room for subjectivity in deciding what counts as neglect or abuse of children. “Is letting your children walk to a park down the block alone neglectful?”, Eubanks asks.107 Drawing the line between neglect and the conditions of poverty is particularly difficult.108 Inspired by Cathy O’Neil, who says that “models are opinions embedded in mathematics,”109 Eubanks does a close analysis of the AFST algorithm. She finds some serious design flaws that limit its accuracy:

It predicts referrals to the child abuse and neglect hotline and removal of children from their families—hypothetical proxies for child harm—not actual child maltreatment. The data set it utilizes contains only information about families who access public services, so it may be missing key factors that influence abuse and neglect. Finally, its accuracy is only average. It is guaranteed to produce thousands of false negatives and positives annually.110

The use of public services as an input variable means that low-income people are disproportionately represented in the database. This is because professional middle class families mostly rely on private sources for family support. Eubanks writes: “It is interesting to imagine the response if Allegheny County proposed including data from nannies, babysitters, private therapists, Alcoholics Anonymous, and luxury rehabilitation centers to predict child abuse among wealthier families.”111 She calls the current program a form of “poverty profiling”:

Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: living in poverty. Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.112

Eubanks’s conclusion about automated decision-making on the basis of the three examples in her book is damning:

[It] shatters the social safety net, criminalizes the poor, intensifies discrimination, and compromises our deepest national values. It reframes shared social decisions about who we are and who we want to be as systems engineering problems. And while the most sweeping digital decision-making tools are tested in what could be called “low rights environments” where there are few expectations of political accountability and transparency, systems first designed for the poor will eventually be used on everyone.113

Eubanks’s examples all relate to how the state interferes with its citizens’ rights. They are still relevant to this thesis because they clearly show what happens when algorithms and data are used to make decisions about people and about what those people are entitled to. The processes of the state at least have a level of accountability and a need for legitimacy in their decision making. The same can’t be said for accumulators like Google and Facebook: they are under no democratic governance and have no requirements for transparency. This makes it harder to see the unequal consequences of their algorithmic decision making, and as a result harder to question them.

One example of unequal treatment of freedom of speech was highlighted by ProPublica in an investigative piece titled Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children.114 ProPublica used internal Facebook documents to shed light on the algorithms that Facebook’s censors use to differentiate between hate speech and legitimate political expression.

The documents suggested that “at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.”115 Until very recently, Facebook did not publish the enforcement guidelines for its “community standards”;116 only after increased pressure from civil society did it decide to be transparent about its rules.117 There are endless examples of marginalized groups who have lost their audience because Facebook has decided to block their posts or their pages. Often they are the victims of a rule by which Facebook protects certain special categories (like ethnicity or gender), but not subsets of those categories. This has led to the absurd situation where “white men” is a category that is protected from hate speech, but “black children” or “female drivers” are not.118

There is some evidence that Facebook uses the profitability of a particular page or post as one of the criteria in deciding whether to remove it. When a Channel 4 reporter went undercover, his content moderation trainer told him that the page of the far-right organization Britain First was left up, even though it had broken the rules many times, because “they have a lot of followers so they’re generating a lot of revenue for Facebook.”119 A Dutch Facebook moderator in Berlin had a similar story about the hate speech directed at the black Dutch politician and activist Sylvana Simons: he wasn’t allowed to remove any of it, mainly because Facebook has no incentive to take down content. This changed when the reporting about Simons turned on Facebook itself.120 As soon as there is media attention for a particular decision, Facebook will often change course.

Kate Klonick, an academic specializing in corporate censorship, fears that Facebook is evolving into a place where celebrities, world leaders, and other prominent people “are disproportionately the people who have the power to update the rules.”121 This is a form of class justice and a clear example of a lack of equality. Dave Willner, a former member of Facebook’s content team, makes the explicit connection with justice. He says that Facebook’s approach is “more utilitarian than we are used to in our justice system, […] it’s fundamentally not rights-oriented.”122

Abuse of the commons

For what I’ve been calling “appropriation”, Rawls would most likely use the Aristotelian term “pleonexia”, which he defines as “gaining some advantage for oneself by seizing what belongs to another, his123 property, his reward, his office, and the like, or by denying a person that which is due to him, the fulfillment of a promise, the repayment of a debt, the showing of proper respect, and so on.”124 For Rawls, it is clear: “We are not to gain from the cooperative labors of others without doing our fair share.”125

Google using the collaborative effort of Wikipedia for its own gain (as seen in the YouTube case study above) is one of the more obvious examples of what Rawls does not allow. While researching this thesis, I encountered another dreadful example of this mechanism.126

Google augments its search results for certain keywords with an information box whose contents come from Wikipedia. It does this to quickly provide the information most searchers will be looking for, and thus to make its search engine even more attractive. Google does not compensate Wikipedia for this use.127 When I searched for “Los Angeles” in late July 2018, I was shown an information box with despicable racist contents (see fig. 2 for a censored version of what I saw). After a bit of research, I found out that Wikipedia’s page had been vandalized at 8:00 in the morning, and that a Wikipedia volunteer had cleaned up the mess one hour later. Google had indexed the vandalized page, but had not yet indexed the cleaned-up version, even though ten hours had passed since the problem was fixed.

Figure 2: result for the search term “Los Angeles” on July 21st, 2018

Because of Google’s dominance, it is reasonable to assume that far more people saw this vandalized version of the information on the Google page than on the Wikipedia page. It is probable that the vandal’s main purpose was to influence Google’s search results. If that is indeed the case, then Google using Wikipedia to spruce up its results has a detrimental effect on the quality of the collaborative encyclopedia. The fact that a Republican senator felt he had to publicly prove that he was still alive, after Google erroneously listed him as having passed away, underscores that point. This mistake was also the result of a vandal making a change in Wikipedia, but that fact was nowhere mentioned in the media coverage of the event.128

This is a straightforward example of an accumulator appropriating the commons. There are two more complex (and more impactful) ways in which our technological predicament is enabling the abuse of the commons. The first is how predictive knowledge about how the world works is being enclosed, a problem with the informational commons. The second is the way our attention is taken away from us, a problem with the attentional commons.

The informational commons

In 1993, Bruce Sterling wrote beautifully about the internet as a public good, comparing the anarchical nature of the internet with the way the English language develops:

Nobody rents English, and nobody owns English. As an English-speaking person, it’s up to you to learn how to speak English properly and make whatever use you please of it […]. Otherwise, everybody just sort of pitches in, and somehow the thing evolves on its own, and somehow turns out workable. And interesting. Fascinating, even. […] “English” as an institution is public property, a public good. Much the same goes for the Internet. […] It’s an institution that resists institutionalization. The Internet belongs to everyone and no one.129

Rawls writes about public goods in the context of looking at economic systems to see if they can satisfy the two principles of justice. According to him, they have two characteristic features: indivisibility and publicness. Public goods “cannot be divided up as private goods can and purchased by individuals according to their preferences for more or less.”130 Rawls acknowledges the free-rider problem (individuals avoiding doing their share), and how this will limit the chances for voluntary agreements about the public good to develop. He also sees clearly how the externalities of the production of public goods will not be reckoned with by the market.131 So for him it is evident “that the indivisibility and publicness of certain essential goods, and the externalities and temptations to which they give rise, necessitate collective agreements organized and enforced by the state. […] Some collective arrangement is necessary and everyone wants assurance that it will be adhered to if he is willingly to do his part.”132

Current literature sees public goods as one form of a commons, “a resource shared by a group of people that is subject to social dilemmas.”133 Charlotte Hess and Elinor Ostrom use two dimensions to categorize the different forms of commons. The first is “exclusion”: how difficult or easy it is to stop somebody from accessing the resource (similar to Rawls’s publicness). The second is “subtractability”: whether use by one person subtracts from what is available to others (this comes close to Rawls’s indivisibility, but is more often framed as rivalry). Public goods are those with a low subtractability and a high difficulty of exclusion.134
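The two dimensions can be summarized in a small sketch (my own illustration; the example goods in the comments are the standard ones from the commons literature, not a specific list from Hess and Ostrom):

```python
# Hess and Ostrom's two dimensions for categorizing goods:
# "exclusion" (how hard it is to keep someone away from the resource) and
# "subtractability" (whether one person's use diminishes what is left for others).

def classify_good(exclusion_difficult: bool, subtractable: bool) -> str:
    """Return the category of a good given its position on the two dimensions."""
    if exclusion_difficult and not subtractable:
        return "public good"           # e.g. knowledge, sunsets, broadcast signals
    if exclusion_difficult and subtractable:
        return "common-pool resource"  # e.g. fisheries, irrigation water
    if not exclusion_difficult and not subtractable:
        return "toll/club good"        # e.g. subscription services, toll roads
    return "private good"              # e.g. bread, clothing

# Enclosure, in these terms, is the move along one axis that makes
# exclusion easy, turning a public good into a toll good:
print(classify_good(exclusion_difficult=True, subtractable=False))
print(classify_good(exclusion_difficult=False, subtractable=False))
```

The point of the sketch is that enclosure changes only one of the two dimensions: the resource itself stays non-subtractable, but access to it becomes gatekept.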

One of the issues with commons is always the threat of enclosure. As Hess and Ostrom write: “The narrative of the enclosure is one of privatization, the haves versus the have-nots, the elite versus the masses.”135 The first enclosure136 was the withdrawing of community rights (by landowners and the state) from the European shared agricultural fields.137

In 2007, James Boyle argued that we were in the second enclosure movement, which he grandiloquently138 called “the enclosure of the intangible commons of the mind.”139 Hess and Ostrom think that “this trend of enclosure is based on the ability of new technologies to ‘capture’ resources that were previously unowned, unmanaged, and thus, unprotected.”140 Boyle focuses on the use of intellectual property rights and the related enforcement technologies to clamp down on the ease of copying. Basically, the idea of enclosure is always to make it easier to exclude people from accessing the resource. Public goods that have become easy to exclude turn into toll or club goods.141

I want to argue that the data-driven appropriation in our technological predicament is a third movement of enclosure, turning public goods into toll goods. Going forward, it isn’t just intellectual labor (the “commons of the mind”) that is enclosed. It is our actual understanding of the world—an understanding that is predicated on measuring our social, cultural and economic behaviors—for which access becomes exclusive and subject to the terms of the data accumulators. This is happening in domain after domain, whether it is transportation, health, education or communication.

It is hard to find a precise analogy that makes it easier to understand how this third movement of enclosure works, but a look at how we predict the weather may help. Currently, gathering weather data and turning it into (predictive) models is mostly a public and decidedly collective effort. Some private companies help with collecting the data (airlines, for example), and the data is mostly freely available for anybody to use for their own purposes. If we were to apply the third enclosure model to this situation, an accumulator would come in and outsource all the measuring and data collection to private individuals and small businesses (sometimes without them even knowing, and occasionally in return for access to some information). The accumulator would then use this data to create predictive weather models and would share parts of these predictions (basically when it suits their purposes, for example to get more sensor data) with the people who agree to their terms of service, or sell their predictions to public institutions as steering mechanisms for public policy. Some of these accumulators might even slightly adjust the predictions they share, in order to shape the behavior of the user of the prediction.

I therefore heartily agree with Aral Balkan’s forceful critique of swapping out a public goods based infrastructure for a toll based one, and I imagine Rawls would agree with him too:

It is not the job of a corporation to “develop the social infrastructure for community” as Mark [Zuckerberg] wants to do. Social infrastructure must belong to the commons, not to giant monopolistic corporations like Facebook. The reason we find ourselves in this mess with ubiquitous surveillance, filter bubbles, and fake news (propaganda) is precisely due to the utter and complete destruction of the public sphere by an oligopoly of private infrastructure that poses as public space.142

The attentional commons

In his 2015 book The World Beyond Your Head,143 Matthew Crawford makes a compelling case for an attentional commons. He considers our attention as a resource, because each of us only has so much of it. But it is a resource for which we currently lack what he calls a “political economy.”144 Crawford explains how we hold certain resources—like the air we breathe and the water we drink—in common. We don’t pay a lot of attention to them (usually just taking them for granted), but it is their availability that makes everything else we do possible. Crawford thinks that “the absence of noise is a resource of just this sort. More precisely, the valuable thing that we take for granted is the condition of not being addressed. Just as clean air makes respiration possible, silence, in this broader sense, is what makes it possible to think.”145

It is clear that resources like water and air need robust regulations to be protected as common resources. In the absence of these regulations, they “will be used by some in ways that make them unusable for others—not because they are malicious or careless, but because they can make money using them this way. When this occurs, it is best understood as a transfer of wealth from ‘the commons’ to private parties.”146

We have already reached the point where (cognitive) silence is offered as a luxury good. Basically, our attention is taken from us, and we then get to buy it back. Crawford gives the example of an airport, where he encounters ads inside his security tray and on the luggage belt,147 but is completely liberated from this noise as soon as he steps into the business class lounge. Silence as a luxury is already part of our technological predicament too. YouTube offers a premium subscription for which the main benefit is that it is completely ad-free148 and the Amazon Kindle e-readers come with “special offers”—Amazon’s euphemism for advertising—unless a one-time fee has been paid to remove them.149

The era of accumulation makes the creation of a political economy of attention more urgent. We are increasingly the object of targeted attention-grabbing. Crawford wants to supplement the right to privacy with a right not to be addressed: “This would apply not, of course, to those who address me face-to-face as individuals, but to those who never show their face, and treat my mind as a resource to be harvested by mechanized means.”150

Crawford makes a beautiful argument why attention is both highly personal and intimate, while also being constitutive of our shared world:

Attention is the thing that is most one’s own: in the normal course of things, we choose what to pay attention to, and in a very real sense this determines what is real for us; what is actually present to our consciousness. Appropriations of our attention are then an especially intimate matter.

But it is also true that our attention is directed to a world that is shared; one’s attention is not simply one’s own, for the simple reason that its objects are often present to others as well. And indeed there is a moral imperative to pay attention to the shared world, and not get locked up in your own head. Iris Murdoch writes that to be good, a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”151

This matches the two reasons he gives for finding the concept of a commons suitable in the context of a discussion about attention:

First, the penetration of our consciousness by interested parties proceeds very often by the appropriation of attention in public spaces, and second, because we rightly owe to one another a certain level of attentiveness and ethical care. The words italicized in the previous sentence rightly put us in a political economy frame of mind, if by “political economy” we can denote a concern for justice in the public exchange of some private resource.152

I want to make the argument that it is beneficial to see attention analogously to a “primary good” in the Rawlsian sense of the word. Rawls uses the conception of “primary goods” as a way to address the practical political problem of people having conflicting comprehensive conceptions of the good. However distinct those conceptions may be, they require the same primary goods for their advancement: “that is, the same basic rights, liberties, and opportunities, as well as the same all-purpose means such as income and wealth, all of which are secured by the same social bases of self-respect. These goods […] are things that citizens need as free and equal persons, and claims to these goods are counted as appropriate claims.”153 Rawls provides a basic list of primary goods under five headings:

(i) basic rights and liberties, of which a list may also be given; (ii) freedom of movement and free choice of occupation against a background of diverse opportunities; (iii) powers and prerogatives of offices and positions of responsibility in the political and economic institutions of the basic structure; (iv) income and wealth; and finally, (v) the social bases of self-respect.154

Rawls allowed for things to be added to the list (if needed), for example to include other goods or maybe even to include mental states.155 Attention is an all-purpose means that citizens need to live out their conception of the good life. And the way that technology companies manage to appropriate this attention is leading to an unjust division of the resource.

The Center for Humane Technology (which used to be called Time Well Spent), lays out the problem with great clarity:

Facebook, Twitter, Instagram, and Google have produced amazing products that have benefited the world enormously. But these companies are also caught in a zero-sum race for our finite attention, which they need to make money. Constantly forced to outperform their competitors, they must use increasingly persuasive techniques to keep us glued. They point AI-driven news feeds, content, and notifications at our minds, continually learning how to hook us more deeply—from our own behavior. […] These are not neutral products. They are part of a system designed to addict us.156

A utilitarian ethics

Data-driven accumulators are starting to realize that they need to justify the disparate impact that their data-driven technologies have. Google, for example, acknowledges the power of artificial intelligence as a technology, and understands that the technology will have a significant impact on our society. To address their “deep responsibility”, they published a set of seven principles157 that guide their artificial intelligence work.158

The principles make it clear what type of ethical stance lies behind Google’s approach. The first principle is called “Be socially beneficial” and reads as follows:

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.159

This is clearly a utilitarian perspective: they will use artificial intelligence as long as the benefits exceed the risks. Arguably it would be hard for them to espouse anything but a teleological ethical theory. They need it to justify what they are doing. Unfortunately, they don’t describe their utility function. What is to be considered socially beneficial, and what is seen as a cost to society? How will this utility be quantified, and who gets to decide on these questions?

Rawls understands the intuitive appeal of a utilitarian ethics. In a teleological approach you can define the good independently from the right, and then define the right as that which maximizes the good. The appeal to Google is clear, not only because they couldn’t justify their behaviour with a deontological approach, but also because utilitarianism seemingly embodies rationality. “It is natural to think that rationality is maximizing something and that in morals it must be maximizing the good. Indeed, it is tempting to suppose that it is self-evident that things should be arranged so as to lead to the most good.”160

For Rawls, “the striking feature of the utilitarian view of justice is that it does not matter […] how this sum of satisfactions is distributed among individuals […]. The correct distribution […] is that which yields the maximum fulfillment.”161 He therefore dismisses the principle of utility as “inconsistent with the idea of reciprocity implicit in the notion of a well-ordered society.”162 According to Rawls, the utilitarian view of “social cooperation is the consequence of extending to society the principle of choice for one man, and then, to make this extension work, conflating all persons into one through the imaginative acts of the impartial sympathetic spectator.”163 He sees no reason why from the original position of equality this option would be seen as acceptable: “Since each desires to protect his interests, his capacity to advance the conception of the good, no one has a reason to acquiesce in an enduring loss for himself in order to bring about a greater net balance of satisfaction.”164 Rawls finishes his treatment of classic utilitarianism with a damning indictment:

Utilitarianism does not take seriously the distinction between persons.165

Rawls’s description of utilitarianism matches one-for-one the espoused ethical theory of most technology companies. I therefore think the following is true and helps to elucidate our technological predicament:

Google does not take seriously the distinction between persons.166

Unfreedom in our technological predicament

Now that it is clear that the current ecosystems of data appropriation have many unjust consequences, I want to argue a less obvious point: even though many of the technologies enabled by data create new options for us and increase our choices, they actually make us less free. To defend that position, I will use a particular conception of freedom: civic republicanism.167 As this idea of freedom is most eloquently explained by Philip Pettit, I will stay very close to his reasoning.

The demands of freedom as non-domination

To illustrate the crucial point about his conception of freedom, Pettit often uses A Doll’s House as an example. The protagonists of this classic Ibsen play are Torvald, a young banker, and his wife, Nora. In the late 19th century a husband had near limitless power over his wife, but Torvald completely dotes on Nora, and denies her absolutely nothing. In practical daily life, she can basically do what she wants. According to Pettit, Nora may enjoy many benefits, but you can’t say she enjoys freedom in her relationship with Torvald:

His hands-off treatment means that he does not interfere with her, as political philosophers say. He does not put any prohibitions or penalties in the way of her choices, nor does he manipulate or deceive her in her exercise of those choices. But is this enough to allow us to think of Nora as a free agent? If freedom consists in noninterference, as many philosophers hold, we must say that it is. But I suspect that like me, you will balk at this judgment. You will think that Nora lives under Torvald’s thumb. She is the doll in a doll’s house, not a free woman.168

This becomes abundantly clear in the last act of the play, where Torvald first forgives Nora for the sins she has committed (all with the purpose of helping to solve problems of his own making), and then tells her:

There is something so indescribably sweet and satisfying, to a man, in the knowledge that he has forgiven his wife—forgiven her freely, and with all his heart. It seems as if that had made her, as it were, doubly his own; he has given her a new life, so to speak; and she is in a way become both wife and child to him. So you shall be for me after this, my little scared, helpless darling.169

That is when Nora realizes that Torvald truly is a stranger to her. Soon after, she decides to leave him and the children behind. Her final action in the play is to slam the door as she leaves.

Pettit uses this example to show that the absence of interference isn’t enough to make us free.170 You also need “the absence of domination: that is, the absence of subjection to the will of others […].”171 Pettit argues that your freedom should have depth (freedom as a property of choices) and that you must have this deep freedom over a broad range of choices (freedom as a property of persons).

Freedom with depth

When is your freedom “deep”? If you have different options, what then are the conditions ensuring that your choice between those options is a free choice? Pettit gives three conditions:

You enjoy freedom of choice between certain options to the extent that:

  1. you have the room and the resources to enact the option you prefer,
  2. whatever your own preference over those options, and
  3. whatever the preference of any other as to how you should choose.172

Having the room to enact the option you prefer means that there should be no interference with your options. Interference can take multiple forms: the option may be removed (blocking the ability to make the choice), replaced (by penalizing or burdening it), or misrepresented (deception about the available alternatives or manipulation of how they are perceived). There are certain ways in which a choice can be influenced without interference of this type: incentivizing, persuading and nudging (without deception) do not constitute interference because they do not remove, replace or misrepresent an option.173

Besides having the room, you should also have the resources. If you lack the necessary resources to be able to choose a certain option, then you can’t be free to choose that option. Pettit categorizes resources into three broad areas: personal (the mental and bodily ability and knowhow needed to make the choice), natural (the conditions in the environment that put the option within reach) and social (the conventions and shared awareness that make acts of communication possible).174

The second clause says that you only enjoy freedom in your choice of options if all the options are available in the ways the first clause stipulates, regardless of your own preference for any of the options. Thomas Hobbes saw this differently: he thought that somebody is a free agent as long as that person “is not hindred to doe what he has a will to do.”175 This idea leads to the absurd conclusion that you could liberate yourself by adapting your preferences. Isaiah Berlin dismissed that idea beautifully:

To teach a man that, if he cannot get what he wants, he must learn to want only what he can get, may contribute to his happiness or his security; but it will not increase his civil or political freedom.176

Pettit summarizes this second clause: “To have a free choice between certain options, you must be positioned to get whichever option you might want however unlikely it is that you might want it.”177

It is the third clause that sets civic republicans apart. Pettit: “Your capacity to enact the option you prefer must remain in place not only if you change your mind about what to choose, but also if others change their minds as to what you should choose.”178 The basic idea is that “you cannot be free in making a choice if you make it in subjection to the will of another agent, whether or not you are conscious of the subjection.”179 As Pettit writes:

The republican insight is that you will also be subject to my will in the case where I let you choose the option you prefer—and would have let you choose any option you preferred—but only because I happen to want you to enjoy such latitude.180

If the third clause weren’t deemed necessary for freedom, then it would be possible to liberate yourself through ingratiation, which is as absurd as liberating yourself through preference adaptation. From the republican perspective, liberty means that you live on your own terms and are exempt from the dominion of another. That is even more important than having the resources required to act on your preference. This means “that it is inherently worse to be controlled by the free will of another than to be constrained by a contingent absence of resources.”181 Pettit cites Kant as somebody who gives this idea prominence:

Find himself in what condition he will, the human being is dependent upon many external things. […] But what is harder and more unnatural than this yoke of necessity is the subjection of one human being under the will of another. No misfortune can be more terrifying to one who is accustomed to freedom, who has enjoyed the good of freedom, than to see himself delivered to a creature of his own kind who can compel him to do what he will […].182

Freedom with breadth

If our freedom of choice is protected from domination, what then should be the range of decisions in which a freedom of this type is available? According to Pettit, being a free person in the republican conception requires that you be objectively secured against the intrusions of others, and that this security subjectively be a matter of common awareness: your status as a free person “must be salient and manifest to all.”183 This is because the recognition of the protection of your rights reinforces that protection. This status can only be available under “a public rule of law in which all are treated as equals.”184 As Pettit sums up:

The republican ideal of the free citizen holds that in order to be a free citizen you must enjoy non-domination in such a range of choice, and on the basis of such public resourcing and protection, that you stand on a par with others. You must enjoy a freedom secured by public laws and norms in the range of the fundamental or basic liberties. And in that sense, you must count as equal with the best.185

Pettit then derives the basic liberties that should be associated with a free civic status from this idea of what a free citizen should be: “The ceiling constraint is that the basic liberties should not include choices that put people at loggerheads with one another and force them into competition”, and the “floor constraint is that the basic liberties should encompass all the choices that are co-enjoyable in this sense, not just a subset of them.”186

To be co-enjoyable by all, a choice must meet two conditions. The first is that the choice must be co-exercisable in the sense that “people must be able to exercise any one of the choices in the set, no matter how many others are exercising it at the same time”. The second is that the choice must be co-satisfying in the sense that “people must be able […] to derive satisfaction from the exercise of any choice, no matter how many others are exercising that choice, or any other choice in the set […].”187

Pettit then argues that the co-exercisable choices are basically those you can make on your own, and that the co-satisfying choices exclude those that harm others, that have overpowering or destructive effects, and those where exercising the choice together is counterproductive. Which basic liberties satisfy these requirements and constraints will differ with the cultural, technological and economic characteristics of a particular society. For Pettit, a society that provides this robust form of freedom will count as just, democratic and sovereign:

If the society entrenches each against the danger of interference from others in the domain of the basic liberties, then it will count plausibly as a just society. If this entrenchment is secured under a suitable form of control by the citizenry, then the society will count as properly democratic […]. And if the international relations among peoples guard each against the danger of domination by other states or by non-state actors, then each people will have the sovereign freedom to pursue such justice and democracy […].188

The power to manipulate

Technological mediation virtualises the relationships between us and the rest of the world. In the most general terms, you could say that you interact with a third party who shapes and forms the virtual reality through which we connect and interact with other people and other objects.

It is much easier to shape a virtual, information-based reality than a material, atom-based one (for Facebook to change the color of their website from blue to green requires a change in one line of code, whereas changing the color of their offices from blue to green would take many days of work). Moreover, this reality can be shaped at the personal level (it is much easier for Facebook to personalize their website and show it to me in my favorite color than it is for them to show me their offices in my favorite color).

When more of what we pay attention to in the world is technologically mediated by virtual third parties, we become more vulnerable to manipulation. These third parties have the ability to shape and form our personal reality in such a way that it serves their aims.

Natasha Dow Schüll gives a brilliantly telling example of this phenomenon in her book Addiction by Design, an anthropological exploration of the world of gambling machines in Las Vegas. She explains how much easier it became for vendors of slot machines to get their players into the zone once the faces of these machines became virtual screens instead of the physical reels that were used before:

Virtual reel mapping has been used not only to distort players’ perception of games’ odds but also to distort their perception of losses, by creating “near miss” effects. Through a technique known as “clustering,” game designers map a disproportionate number of virtual reel stops to blanks directly adjacent to winning symbols on the physical reels so that when these blanks show up on the central payline, winning symbols appear above and below them far more often than by chance alone.189
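The mechanism of “clustering” can be made concrete with a toy simulation (my own illustration, not Schüll’s data; the reel layout and weights are invented for the example). On a fair mapping, every physical stop is equally likely; on a clustered mapping, the blanks directly adjacent to the winning symbol carry a disproportionate number of virtual stops, so near misses appear far more often than chance alone would produce:

```python
import random

def spin(virtual_map, rng):
    """One spin: pick a physical reel stop according to the virtual map's weights.

    Each entry in virtual_map is a physical stop index; repeating an index
    gives that stop proportionally more weight, just as virtual reel mapping
    assigns many virtual stops to a single physical position.
    """
    return rng.choice(virtual_map)

# Hypothetical physical reel with 22 stops. The winning symbol sits at
# index 0; the blanks at indices 1 and 21 are directly adjacent to it.
fair_map = list(range(22))                        # every stop equally weighted
clustered_map = list(range(22)) + [1, 21] * 30    # adjacent blanks overweighted

rng = random.Random(42)
trials = 100_000
near_miss_fair = sum(spin(fair_map, rng) in (1, 21) for _ in range(trials)) / trials
near_miss_clustered = sum(spin(clustered_map, rng) in (1, 21) for _ in range(trials)) / trials

# The player sees the physical reel stop just above or below the winning
# symbol far more often on the clustered map (~0.76 vs ~0.09 here),
# distorting their perception of how close they came to winning.
print(f"fair near-miss rate: {near_miss_fair:.3f}")
print(f"clustered near-miss rate: {near_miss_clustered:.3f}")
```

The sketch also shows why the manipulation is invisible to the player: the physical reel looks identical in both cases, and only the hidden virtual map differs.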

Technology companies do similar things to manipulate the behavior of their users. In their paper about digital market manipulation in the “sharing” economy, Ryan Calo and Alex Rosenblat lay out some evidence of manipulation from Uber. Riders for example are shown fake cars:

A user may open her app and see many vehicles around her, suggesting that an Uber driver is close by should she decide to hail one. […] [However] the representation of nearby Uber cars can be illusory. Clicking the button to request an Uber prompts a connection to the nearest driver, who may be much further away. The consumer may then face a wait time as an actual Uber driver wends their way toward the pick up location. Those icons that appeared where cars were not present are familiar to some participants as “phantom cars.”190

Virtual manipulations like these are hard to check and validate. How can average users know whether a car shown in the app is actually there, or whether it is a virtual fake trying to lure them into ordering a taxi? Drivers (or as Uber calls them: customers) are manipulated too, for example by Uber hiding information about the marketplace. Heat maps showing where surge pricing is in effect have been made less accurate, and now function as “a behavioral engagement tool but can effectively operate as a bait-and-switch mechanism similar to the use of phantom cars to entice ride-hailers.”191 Calo and Rosenblat directly address what this means for freedom:

These constraints on drivers’ freedom to make fully informed and independent choices reflect the broad information and power asymmetries that characterize the relationship between Uber and its drivers and illustrate how the Uber platform narrows the choices that drivers are free to make.192

Manipulation can also be done very effectively by adjusting the ranking of search results. Robert Epstein and Ronald E. Robertson researched whether the ranking of search results could alter the preferences of undecided voters in democratic elections. They found that biased search rankings can shift the preferences of undecided voters by 20% or more, and that this bias can be masked so that people aren’t aware of the manipulation. They conclude: “Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.”193 They call this type of influence the search engine manipulation effect.

In an article for Politico, Epstein goes a step further and outlines what he considers to be three credible scenarios for how Google could decide a US presidential election: Google could make the executive decision to do so, a rogue employee or group of employees could implement a change in the algorithm, or, finally, there could be a digital bandwagon effect where higher search activity creates higher search rankings, boosting voter interest and leading to yet more search activity, and so on.194

Google’s reply to Epstein was telling. They called Epstein’s work “a flawed elections conspiracy theory” and argued that they have “never ever re-ranked search results on any topic (including elections) to manipulate user sentiment.”195 Although I have my doubts about the veracity of that statement, I also think it fails to address the core of Epstein’s worries. Whether Google will ever change the search results to get an election outcome that suits its purposes is a different question from whether Google has the power to do so if it wanted to. From Pettit’s republican perspective, whether the domineering power is ever exercised is irrelevant to assessing the extent of our freedom.

Epstein’s conclusion about our technological predicament is as follows:

We are living in a world in which a handful of high-tech companies, sometimes working hand-in-hand with governments, are not only monitoring much of our activity, but are also invisibly controlling more and more of what we think, feel, do and say. The technology that now surrounds us is not just a harmless toy; it has also made possible undetectable and untraceable manipulations of entire populations – manipulations that have no precedent in human history and that are currently well beyond the scope of existing regulations and laws.196

In a time when we increasingly depend on virtual representations of our world, the ability to manipulate people in order to create the future you want is a logical consequence of the ability to predict the future. To be able to create the future you need two things:

  1. An understanding of the world, in the sense that you know which circumstances lead to what types of behavior.
  2. The ability to change the circumstances, so that you can bring about the circumstances that will lead to the behavior that you want to create.

The best-known current example of a company that tried to exploit this mechanism is Cambridge Analytica. They have claimed that they were influential in getting people to vote for Trump and for Brexit.197 Even now that Cambridge Analytica has gone bankrupt, their websites are still full of phrases like “Cambridge Analytica uses data to change audience behavior”198 and “We find your voters and move them to action. […] By knowing your electorate better, you can achieve greater influence […].”199 In their Trump case study they write: “Analyzing millions of data points, we consistently identified the most persuadable voters and the issues they cared about. We then sent targeted messages to them at key times in order to move them to action.”200 Here they clearly spell out the steps of accumulation, which can of course also be applied to domains other than commercial marketing or political campaigns.201

Dependence on philanthropy

Most of the accumulators use their tremendous power and influence for philanthropic and social goals. Sometimes this is done very explicitly and without clear business goals, like Google.org’s investment of 1 billion U.S. dollars over a period of five years to improve education, economic opportunity, and inclusion.202 Sometimes the social goals align nicely with the business goals, like with Facebook’s internet.org family of projects with the mission to bring “internet access and the benefits of connectivity to the portion of the world that doesn’t have them.”203204 But mostly, these companies consider themselves to already have a positive influence on the world. They charge their customers (mostly businesses that want to advertise with them) and provide the services to their users for free.205

This is why it can be Google’s mission to “Organize the world’s information and make it universally accessible and useful” and to do this “Not just for some. For everyone.”206 And why Facebook’s Mark Zuckerberg, when asked why he doesn’t use Facebook to push social agenda issues, answers as follows:

I think the core operation of what you do should be aimed at making the change that you want. […] What we are doing in making the world more open and connected, and now hopefully building some of the social infrastructure for a global community—I view that as the mission of Facebook.207

But depending on private philanthropy is very problematic from the perspective of (republican) freedom. It clientelizes the user and turns them into dependents. As Pettit writes:

If people depend in an enduring way on the philanthropy of benefactors, then they will suffer a clear form of domination. Their expectations about the resources available will shift, and this shift will give benefactors an effective power of interference in their lives.208

Arbitrary control

A simple, yet incredibly clear, example of the arbitrary nature of our relationship with the technology giants can often be found in their terms of service. Before we do a close reading of Google’s terms of service,209 it is important to realize that these terms also apply to people’s Gmail accounts or their photos in Google Photos. Some would argue that the data these services contain about you actually is you:

Today, we are all cyborgs. This is not to say that we implant ourselves with technology but that we extend our biological capabilities using technology. We are sharded beings; with parts of our selves spread across and augmented by our everyday things.210

So when we look at these terms, we need to realize that we are talking about services that are part of people’s identities.

Firstly, Google wants you to understand that they can stop providing the service to you at any time:

We may suspend or stop providing our Services to you if you do not comply with our terms or policies or if we are investigating suspected misconduct.

It is hard to always be compliant with their terms or policies, because they reserve the right to change these without proactively notifying the user (as we shall see a bit later).

Next, they want to make sure that they carry no responsibility for what you do (or anybody else does for that matter) with your Google account:

You are responsible for the activity that happens on or through your Google Account.

This responsibility isn’t shared with Google. So even if you get hacked without it being your fault,211 you are still liable for the damage that is done with your account.

Even though they leave the ownership of what gets uploaded to their services with you, they do make you give them a worldwide and everlasting license on your content. Not only for operating their service, but also for promoting their services, and for developing new ones:

When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works […], communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services […].

It is unclear what use of the content would fall outside of the scope of this license.

There is no way that you can count on the service doing today what it did yesterday, because Google reserves the right to change the service whenever they want, and then to force this change upon you:

When a Service requires or includes downloadable software, this software may update automatically on your device once a new version or feature is available.

You can’t even count on the service to be there tomorrow:

We may add or remove functionalities or features, and we may suspend or stop a Service altogether.

Basically, Google doesn’t want to take responsibility for its services doing anything, so it won’t promise that any of them will do anything useful.

Other than as expressly set out in these terms or additional terms, neither Google nor its suppliers or distributors make any specific promises about the Services. For example, we don’t make any commitments about the content within the Services, the specific functions of the Services, or their reliability, availability, or ability to meet your needs.

And Google wants to make sure the user understands that there are no warranties and that it won’t take responsibility for any losses:

To the extent permitted by law, we exclude all warranties. […] When permitted by law, Google, and Google’s suppliers and distributors, will not be responsible for lost profits, revenues, or data, financial losses or indirect, special, consequential, exemplary, or punitive damages.

If, for some reason, they are still forced to pay damages, then they limit their own liability to whatever the user has paid for the services. In the case of Gmail and Google Photos for example, this payment amounts to zero (in monetary terms that is):

To the extent permitted by law, the total liability of Google, and its suppliers and distributors, for any claims under these terms, including for any implied warranties, is limited to the amount you paid us to use the Services […].

Finally, Google reserves the right to change these terms of service at any point in time and expects the user to look at them regularly to make sure they’ve noticed any changes. If the user doesn’t like a change, the only option left is to stop using the service.

We may modify these terms or any additional terms that apply to a Service to, for example, reflect changes to the law or changes to our Services. You should look at the terms regularly. […] If you do not agree to the modified terms for a Service, you should discontinue your use of that Service.

My much shorter version of these terms would be: “We at Google take no responsibility for anything, and you the user have no rights. And even though we can do what we want and you can expect nothing from us, we still want to be able to change this agreement whenever we feel like it.”

I am aware that much of this is standard legalese, and that some of these terms are limited by what the law allows (hence the repeated “to the extent permitted by law”), but I also find the way these terms are formulated characteristic of the particular relationship we have with companies like Google. Imagine having to sign these terms before filling up at a gas station, or when buying a laptop.

In a sense, Google can be compared to ransomware. Ransomware encrypts your digital life and hands over the decryption key once you have paid the required ransom in cryptocurrency; Google only allows you continued access to your digital life as long as you comply with its loaded terms.

The fact that Google can make arbitrary decisions and subject its users to its will breaks Pettit’s third clause for making a choice free. Pettit uses the eyeball test (you should be able “to look one another in the eye without reason for fear or deference”212) as a way to know when a free person has enough protection against arbitrary control. He has a version of the test, used for international relations, that I think is even more fitting for our relationship with Google and the other technology giants:

Each people in the world ought to be able to address other peoples […] as an equal among equals. It ought not to be required to resort to the tones of a subservient subject and it ought not to be entitled to adopt the arrogant tones of a master. It ought to enjoy the capacity to frame its expectations and proposals on the assumption of having a status no lower and no higher than others and so to negotiate in a straight-talking, open manner. Each people ought to be able to pass what we might call the straight talk test.213

We can’t pass this straight talk test in our technological predicament. We are living an increasing part of our lives inside corporate terms of service. To the extent that we live under their arbitrary governance, we can’t consider ourselves to have civic freedom.214

Part 3: What should we do about it?

In a situation where one party is more powerful than another party, there are basically three things you can do to create antipower. You can diminish the power of the first party, you can regulate the first party in such a way that there is no way for them to exercise their power, or you can empower the second party. As Pettit writes:

We may compensate for imbalances by giving the powerless protection against the resources of the powerful, by regulating the use that the powerful make of their resources, and by giving the powerless new, empowering resources of their own. We may consider the introduction of protective, regulatory, and empowering institutions.215

For Pettit it is clear that antipower can’t come just from the legal instruments with which the state operates; there is also a clear role for the various institutions of civil society.216 In the final part of this thesis, I will do some short speculative explorations of potential directions towards bettering our technological predicament. Consider these “plays” in the antipower playbook.

These explorations stay very close to the three core characteristics of our technological predicament. To counter the domineering scale, we need to look at ways of reducing the scale; to address data-driven appropriation, we need to reinvigorate our commons; and to deal with the problem of asymmetrical relationships with arbitrary control, we need to see how we can use technology to design equality in our relationships.217

Reducing the scale

There are two obvious ways to reduce the scale at which our communications infrastructure operates. We can try to make the technology giants smaller (or at least stop them from getting any bigger and more dominant), or we can try to switch to a technological infrastructure that still allows us to connect at a world scale, without creating dependencies similar to those in our current technological predicament.

Traditional antitrust legislation tries to battle the negative effects of monopolies by looking at how market domination affects the price for the consumer. A classic antitrust measure is busting a cartel that has artificially fixed prices. Looking at prices becomes close to meaningless in a situation where the consumer doesn’t appear to pay anything, and is—on the surface—better off using the product than not using it.

Antitrust decisions pay too little attention to data flows and to market dominance as experienced by users. This is why the European Commission made the mistake of allowing Facebook to buy its competitor WhatsApp for 19 billion U.S. dollars.218 Facebook told the Commission in 2014 that it would not be technically feasible to reliably automate the matching between Facebook accounts and WhatsApp accounts. In August 2016, Facebook did exactly that, and was eventually fined 110 million euros for this behavior.219

Facebook’s acquisition of WhatsApp is part of a larger pattern. Giants like Google, Microsoft, and Facebook prefer to buy up smaller competing companies that deliver an innovative product.220 If these companies refuse to be bought, the giants simply make a blatant copy of their functionality. Some people speak of a “kill zone” around the internet giants.221 Facebook, for example, was able to buy Instagram, but couldn’t get its hands on Snapchat, so it copied most of Snapchat’s features into Instagram.222 Facebook has even bought Onavo, an app that monitors what people are doing on their phones. It uses the aggregated data of the app’s millions of users to see which services are becoming popular, in order to snap them up before they get too big and endanger the size of Facebook’s user base.223224

The Economist therefore recommends that antitrust authorities start taking a different approach. Instead of using just size to determine whether to intervene, “they now need to take into account the extent of firms’ data assets when assessing the impact of deals. The purchase price could also be a signal that an incumbent is buying a nascent threat.”225

A more technical approach than antitrust law is to work on alternatives to the big companies. One of the incredible things about the internet is that the network facilitates peer-to-peer interactions. It is possible to have a direct connection between two internet-enabled devices (for example two smartphones) and have an encrypted stream of communications data flow between them. This allows for a typology of different ways to federate or decentralize technological infrastructure. The following are just three examples of technology projects that reduce scale, and therefore reduce domination:

  • Mastodon226 is an open source social network allowing users to post short messages, pictures, and videos in a similar way to Facebook and Twitter. Unlike other social networks it is fully decentralized. There is no single company or server that contains all the messages. Instead, different “instances” of Mastodon have a way of talking to one another. Each instance can have its own rules about what type of content it allows and which people it will give accounts on its system. Users within an instance can follow each other, but it is also possible to follow people who have their home base at another instance.
  • Briar227 is a secure messaging app that allows peer-to-peer encrypted messaging and forums. It breaks with the normal messaging paradigm, which relies on a central server to receive and deliver messages. Briar only delivers messages when both parties have an internet connection at the same time. It can do this locally using a Bluetooth connection or a Wi-Fi network, but it can also use an internet connection. In the latter case, it will route the traffic over Tor in order to ensure anonymity and to hide the user’s location. Connections on Briar are made by being physically together and exchanging keys. All of these design choices make Briar very resistant to both surveillance and censorship. The app can even keep local communication flowing during internet blackouts.
  • The Dat Project228 hosts the Dat Protocol, a peer-to-peer data sharing protocol that allows for distributed syncing. In Dat’s network, users can store data wherever they want (with most data being stored at multiple locations). Dat keeps a history of how a file has changed, facilitating collaboration and easy reproducibility. Users can easily replicate a remote Dat repository and subscribe to live changes. Network traffic is encrypted, and it is possible to create your own private data sharing networks.229

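To make the decentralized model behind a project like Mastodon concrete, here is a minimal sketch. The instance and user names are made up, and real instances federate through a full protocol such as ActivityPub; the sketch only illustrates the core structural idea: every account has a home instance encoded in its handle, so no single server has to know about all users.

```python
# Minimal sketch of federated addressing (hypothetical data, not the real
# federation protocol): each instance stores only its own users, yet any
# account can be located from its full handle.

instances = {
    "mastodon.social": {"alice", "bob"},
    "example.town": {"carol"},
}

def home_instance(handle: str) -> str:
    """Resolve '@user@instance' to the instance that hosts the account."""
    user, _, domain = handle.lstrip("@").partition("@")
    if user in instances.get(domain, set()):
        return domain
    raise LookupError(f"unknown account: {handle}")

# Carol's account lives on example.town; mastodon.social never stores it.
print(home_instance("@carol@example.town"))  # -> example.town
```

The point is structural: the lookup table is split across independent servers, and only the domain part of the handle tells you where to ask.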
It is also possible to resist scale by making your own websites and tools. I have built a few sites myself which I call “hyperpersonal microsites”, because they mainly have an audience of one (even though they are public) and serve a single purpose. In this way, I have replaced my use of Amazon’s Goodreads, a social network for readers, with my own website for storing which books I have read, which ones I still want to read, and my book reviews.230 Rather than feeding large corporations with data about my reading habits, I now use their application programming interfaces (APIs) for my own purposes. A similar project is a small website that answers my main mapping need (how to get somewhere in Amsterdam by bike, from my home or from my work) more quickly than Google Maps, while relying on the communal data of OpenStreetMap for the routing.231
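As an illustration of what such a hyperpersonal tool can look like under the hood, here is a sketch that builds a request for an OSRM-style routing API, the kind of interface that several OpenStreetMap-based routers expose. The server URL and coordinates are hypothetical, and which routing profiles are available depends on the server.

```python
# Sketch of a "hyperpersonal" routing helper: build a request URL for an
# OSRM-style routing API. The base URL, profile name, and coordinates are
# all hypothetical placeholders.

def route_url(base: str, profile: str, points: list[tuple[float, float]]) -> str:
    """Build an OSRM-style route request from (lon, lat) pairs."""
    coords = ";".join(f"{lon},{lat}" for lon, lat in points)
    return f"{base}/route/v1/{profile}/{coords}?overview=full"

home = (4.8952, 52.3702)  # made-up Amsterdam coordinates (lon, lat)
work = (4.9041, 52.3676)
print(route_url("https://router.example.org", "bike", [home, work]))
```

A microsite like the one described would fetch this URL and render the returned route, keeping the routing data communal and the interface personal.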

Within all these projects lurks the danger of technological elitism: they require a lot of knowledge to set up and operate. But they do make clear that escaping the domineering scale of the big technology companies is only possible by making very conscious long-term technology choices. Democratizing access to the internet, to coding skills, and to the hardware that is necessary to make things yourself should therefore be paramount.

Reinvigorating the commons

The P2P Foundation has put a lot of effort into conceptualizing the commons which, according to them, can be understood from at least four different perspectives:

  1. Collectively managed resources, both material and immaterial, which need protection and require a lot of knowledge and know-how.
  2. Social processes that foster and deepen thriving relationships. These form part of complex socio-ecological systems which must be consistently stewarded, reproduced, protected and expanded through commoning.
  3. A new mode of production focused on new productive logics and processes.
  4. A paradigm shift, that sees commons and the act of commoning as a worldview.232

It is important to realize that “the Commons is neither the resource, the community that gathers around it, nor the protocols for its stewardship, but the dynamic interaction between all these elements.”233 The P2P Foundation sees peer-to-peer relations in their non-hierarchical and non-coercive form as one of the “enabling capacities for actions. [Peer-to-peer] facilitates the act of ‘commoning,’ as it builds capacities to contribute to the creation and maintenance of any shared and co-managed resource (a commons).”234 An important example of a commons in the context of this thesis is Wikipedia.

One obvious way to reinvigorate the commons is to explicitly invest in commoning projects like Wikipedia and OpenStreetMap, and to start seeing them as commons rather than as simple free resources. But doing this wouldn’t necessarily intervene directly in the data-driven appropriation of the accumulators and their abuse of the informational and attentional commons. There is a dearth of academic work in this space,235 so the following ideas are necessarily rough and underdeveloped.

A first change would be to start thinking about ecosystem (or collective) rights in addition to individual rights. Currently, most data protection law intervenes at the individual level, as it describes individual rights. This means it can’t address collective problems caused by processes that don’t deal with personal data (for example, what Vodafone does with mobility data, as mentioned in the introduction). We need to start thinking about what it means for society if we allow private companies to capture all the externalities of the use of their services, and then sell that information about the world back to the public.

We could also consider a flat-out ban on the appropriation of data (as defined earlier in this thesis) for private purposes. This would forbid private companies from collecting and using data against people’s will or without their knowledge. It would be important to combine these rules with very strict purpose limitations: data that is collected (with knowledge and freely given permission) for one purpose cannot be used for another. If this seems too radical, an alternative would be to require private companies to open up the non-personal (or fully anonymized) data they have gathered, and to make it available inside a data commons with open licenses.

It is interesting to think about what would happen if we were to take a “right to be left alone” seriously and work towards an attentional commons. Someone who saw the importance of this was Gilberto Kassab, the mayor of São Paulo. In 2007, as part of his Clean City Law, he put into effect a near-complete ban on outdoor advertising in his city: “The Clean City Law came from a necessity to combat pollution […] pollution of water, sound, air, and the visual. We decided that we should start combating pollution with the most conspicuous sector – visual pollution.”236 The results were interesting: the ban encouraged companies to reassess their advertising campaigns and to find new and creative ways to engage with their customers, all without covering up the architecture of the city.237 It is hard to imagine the virtual analogue of the Clean City Law, but approaching our virtual spaces from the perspective of abating cognitive pollution could certainly help.

A final idea to stop data-driven appropriation is to require the technology giants to provide access to their data through open standards and open application programming interfaces (APIs). Privacy technology specialist Jaap-Henk Hoepman has written about this idea on his blog.238 Hoepman starts by explaining that email is an open standard, which means that you can exchange emails with other people regardless of which program they use to access their email. This compatibility doesn’t exist between Apple’s iMessage, Instagram, Skype, WhatsApp and other messaging clients. According to Hoepman, this is as if Outlook users could only email other Outlook users, or as if you could only text from your Nokia phone to other people with a Nokia phone, or only to people with the same mobile service provider. Forcing the use of open standards and open APIs would make Apple find a way to let iMessage talk to WhatsApp, and might even allow truly open alternatives to ride the coat-tails of the network effects that are enabling the technology giants.
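Email’s interoperability rests on openly specified formats: a message composed by one program can be read by any other. A small sketch using Python’s standard library, with hypothetical addresses, illustrates this. The serialized message follows the standardized internet message format, so no particular vendor’s client is needed to read it.

```python
# Because email's message format is an open standard, a message composed
# with any library or client can be read by any other. Addresses below
# are hypothetical.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Interoperability"
msg.set_content("Sent from one client, readable by every other.")

# The serialized form is plain, standardized text: no vendor lock-in.
wire_format = msg.as_string()
print("Subject: Interoperability" in wire_format)  # -> True
```

This is exactly the property that Hoepman’s proposal would extend to messaging platforms: a common wire format that any compliant client can produce and consume.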

Equality in relationships

When trying to battle the asymmetry in our relationships with the technology giants, it is important to find a way to break the user lock-in that affects our relationship with companies like Google and Facebook. Two workable approaches are breaking up the lock-in with a requirement for data portability, and never stepping into the lock-in in the first place by using free, instead of proprietary, software.

Data portability is the idea that it should be possible to transfer your data from one service to another, preferably in an automated fashion. Europe’s General Data Protection Regulation (GDPR) establishes data portability as an explicit right for everyone residing in the European Union. The regulation defines the right as follows:

The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided […]. In exercising his or her right to data portability […] the data subject shall have the right to have the personal data transmitted directly from one controller to another, where technically feasible.239

Data portability has some privacy challenges (if I want to move all my social networking contacts from service A to service B, do all my contacts want service B to know about their existence?), and this is why the EU has added that the right to data portability “shall not adversely affect the rights and freedoms of others.”240 But these challenges are in no way insurmountable. We should urgently start expecting a lot more maturity from the likes of Google and Facebook in making this right a concrete reality. The GDPR’s data regime has already had an effect:241 Facebook, Google, Twitter and Microsoft have recently launched the Data Transfer Project, which aims to provide users with “the ability to initiate a direct transfer of their data into and out of any participating provider.”242
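What the GDPR’s “structured, commonly used and machine-readable format” might look like in practice can be sketched as a simple JSON export. The field names below are hypothetical; the point is that a second controller can parse the export without any knowledge of the first controller’s internal systems.

```python
# Sketch of a "structured, commonly used and machine-readable" export of
# a user's data, serialized as JSON that another service could import.
# All field names and values are hypothetical.

import json

profile = {
    "name": "A. User",
    "contacts": ["friend-1", "friend-2"],
    "posts": [{"date": "2018-08-01", "text": "Hello, world"}],
}

export = json.dumps(profile, indent=2, sort_keys=True)
restored = json.loads(export)  # any other service can parse this back
print(restored["contacts"])    # -> ['friend-1', 'friend-2']
```

The round trip (serialize, then parse) is what makes the right practical: the receiving controller gets the same structure the sending controller held, without hindrance.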

Rather than relying on data portability, it is also possible never to step into a locked-in situation in the first place. This is enabled by what is called “free software”. Here “free” refers to liberty, not to price. Free software guarantees freedom through a legal license initially developed by Richard Stallman.243 Stallman wants anybody who uses software to have what he calls “the four freedoms” (which he purposefully starts counting at zero, like any computer engineer would):

  0. The freedom to run the program as you wish, for any purpose.
  1. The freedom to study how the program works, and change it so it does your computing as you wish.
  2. The freedom to redistribute copies so you can help others.
  3. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes.244

Stallman sees guaranteeing these freedoms as a moral imperative for developers of software. Any free software license245 protects the user from an asymmetrical relationship with the creators of the software. There can be no vendor lock-in, because there are no barriers to entry for providing services around a free software product. By using free software you can inoculate yourself against domineering technology companies.

It behooves the state to promote the use of free software. According to Stallman, the state should use only free software for its own computing, should teach only the use of free software in schools, should never require its citizens to use non-free programs to access state services, and should incentivize and sponsor the development of free software. With these measures the state can recover control over its computing, and help citizens, businesses and organizations do the same.246


Acknowledgements

Writing this thesis would not have been possible without the two years of support and boundless patience and flexibility from my partner. It is an incredible privilege to share a life with someone who emanates that much love and joy.

I am also very thankful to my mother and her life partner for offering me a place to write. It allowed for the necessary solitude, while also giving me the opportunity to discuss my progress during the three—fully catered—meals a day. I am grateful for the pleasure of working with my wonderful colleagues at Bits of Freedom every single working day. The countless discussions with them, and with people from the broader digital rights movement, have certainly sharpened my thinking.

In academia, I would like to explicitly thank Beate Roessler for her no-nonsense approach to supervision. She managed to always increase clarity, both for the process and for the content. Thomas Nys for his willingness to be the second reader, and Gijs van Donselaar for introducing me to Pettit’s thinking, and for sharpening my Bachelor’s thesis. Special thanks go out to Philip Pettit, for making the time to have a conversation with me in Prague to share his thoughts on republicanism and our technological predicament.

The software stack that I used for the writing of this thesis was completely free from domination. I want to thank the creators of all the free software that enabled me to write with a conscience. I can’t do justice to the layers of work upon other people’s work that allow my computer to function as it does, but I do want to at least acknowledge the creators and maintainers of GNU/Linux and Ubuntu (for providing the core of my operating system), i3 (a tiling window manager), Firefox (my browser of choice), Vim (the text editor allowing me to “edit text at the speed of thought”247), Zotero (for storing my references), and Markdown, LaTeX and Pandoc (enabling the workflow from a text file to a beautifully typeset PDF).

Now that this is done, I look forward to putting more energy in getting us out of our technological predicament, and in helping to build a just and free alternative technological infrastructure.

Yours in struggle,

Hans de Zwart

Amsterdam, August 2018


Bibliography

“About Seth.” Seth Stephens-Davidowitz. Accessed July 8, 2018.

Alli, Kabir. “YOOOOOO LOOK AT THIS.” Tweet. @iBeKabir, June 2016.

Anderson, Chris. “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired, June 2008.

Angwin, Julia, and Hannes Grassegger. “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.” Text/html. ProPublica, June 2017.

“Annual Report – Google Diversity.” Accessed July 21, 2018.

Austin, Evelyn. “Women on Waves’ Three YouTube Suspensions This Year Show yet Again That We Can’t Let Internet Companies Police Our Speech.” Bits of Freedom, June 2018.

Ayers, Phoebe. “YouTube Should Probably Run Some A/B Tests with the Crew at @WikiResearch First.” Tweet. @Phoebe_ayers, March 2018.

Balkan, Aral. “Encouraging Individual Sovereignty and a Healthy Commons,” February 2017.

———. “The Nature of the Self in the Digital Age,” March 2016.

Barocas, Solon, and Helen Nissenbaum. “Big Data’s End Run Around Procedural Privacy Protections.” Communications of the ACM 57, no. 11 (October 2014): 31–33. doi:10.1145/2668897.

Berlin, Isaiah. “Two Concepts of Liberty.” In Liberty: Incorporating ’Four Essays on Liberty’, edited by Henry Hardy, 166–217. Oxford: Oxford University Press, 2002.

Bickert, Monika. “Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process.” Facebook Newsroom, April 2018.

Bliss, Laura. “The Real Problem with ’Areas of Interest’ on Google Maps.” CityLab, August 2016.

Borgers, Eddie. “Marktaandelen Zoekmachines Q1 2018.” Pure, April 2018.

Boyle, James. “The Second Enclosure Movement.” Renewal 15, no. 4 (2007): 17–24.

Bridle, James. “Something Is Wrong on the Internet.” James Bridle, November 2017.

Brouwer, Bree. “YouTube Now Gets over 400 Hours of Content Uploaded Every Minute.” Tubefilter, July 2015.

Cadwalladr, Carole, and Emma Graham-Harrison. “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach.” The Guardian, March 2018.

Calo, Ryan, and Alex Rosenblat. “The Taking Economy: Uber, Information, and Power.” Columbia Law Review 117 (March 2017). doi:10.2139/ssrn.2929643.

Chappell, Bill. “Google Maps Displays Crimean Border Differently in Russia, U.S.” NPR, April 2014.

“Choose Your Audience.” Facebook Business. Accessed June 23, 2018.

Claburn, Thomas. “Facebook, Google, Microsoft, Twitter Make It Easier to Download Your Info and Upload to, Er, Facebook, Google, Microsoft, Twitter Etc…” The Register, July 2018.

“Commission Fines Facebook €110 Million for Providing Misleading Information About WhatsApp Takeover.” European Commission Press Releases, May 2017.

“Commons Transition and P2P: A Primer.” Transnational Institute, March 2017.

Conger, Kate, and Dell Cameron. “Google Is Helping the Pentagon Build AI for Drones.” Gizmodo, June 2018.

Court of Justice. “Google Spain SL and Google Inc. V Agencia Española de Protección de Datos (AEPD) and Mario Costeja González,” May 2014.

Crawford, Matthew B. The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. New York: Farrar, Straus and Giroux, 2015.

Curran, Dylan. “Are You Ready? This Is All the Data Facebook and Google Have on You.” The Guardian, March 2018.

“Dat Project – A Distributed Data Community.” Dat Project. Accessed August 13, 2018.

“Data Drives All That We Do.” Cambridge Analytica. Accessed June 24, 2018.

“Data Transfer Project Overview and Fundamentals,” July 2018.

“Data-Driven Campaigns.” CA Political. Accessed June 24, 2018.

De Zwart, Hans. “Demystifying the Algorithm.” Hans de Zwart, June 2015.

———. “Facebook Is Gemaakt Voor Etnisch Profileren.” De Volkskrant, June 2016.

———. “Google Wijst Me de Weg, Maar Niet Altijd de Kortste.” NRC, August 2015.

———. “Hans de Zwart’s Books.” Accessed August 13, 2018.

———. “Hans Fietst.” Accessed August 13, 2018.

———. “Liberty, Technology and Democracy.” Amsterdam: University of Amsterdam, August 2017.

———. “Medium Massage – Writings by Hans de Zwart.” Accessed July 21, 2018.

———. “Miljardenbedrijf Google Geeft Geen Cent Om de Waarheid.” NRC, August 2018.

“DeepMind Health.” DeepMind. Accessed June 24, 2018.

“Definition of Appropriate in the Merriam Webster Dictionary.” Merriam Webster. Accessed June 24, 2018.

“Definition of Appropriate in the Oxford Dictionary.” Oxford Dictionaries. Accessed June 24, 2018.

Dinzeo, Maria. “Google Ducks Gmail Captcha Class Action.” Courthouse News Service, February 2016.

“Donald J. Trump for President.” CA Political. Accessed June 24, 2018.

Economist, The. “American Tech Giants Are Making Life Tough for Startups.” The Economist, June 2018.

———. “The World’s Most Valuable Resource Is No Longer Oil, but Data.” The Economist, May 2017.

Ehrlich, Jamie. “GOP Senator Says He Is Alive Amid Google Searches Suggesting He Is Dead.” CNN, July 2018.

Epstein, Robert. “How Google Could Rig the 2016 Election.” POLITICO Magazine, August 2015.

Epstein, Robert, and Ronald E. Robertson. “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.” Proceedings of the National Academy of Sciences 112, no. 33 (August 2015): E4512–E4521. doi:10.1073/pnas.1419828112.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018.

“Facebook Ads Targeting Comprehensive List.” Two Wheels Marketing, January 2018.

Falcone, John. “Amazon Backtracks, Will Offer $15 Opt-Out for Ads on Kindle Fire Tablets.” CNET, September 2012.

Farokhmanesh, Megan. “YouTube Didn’t Tell Wikipedia About Its Plans for Wikipedia.” The Verge, March 2018.

Franceschi-Bicchierai, Lorenzo. “The SIM Hijackers.” Motherboard, July 2018.

Gellman, Robert. “Disintermediation and the Internet.” Government Information Quarterly 13, no. 1 (January 1996): 1–8. doi:10.1016/S0740-624X(96)90002-7.

“General Data Protection Regulation,” May 2018.

Gibbs, Samuel. “SS7 Hack Explained: What Can You Do About It?” The Guardian, April 2016.

Goodrow, Cristos. “You Know What’s Cool? A Billion Hours.” Official YouTube Blog, February 2017.

Goodson, Scott. “No Billboards, No Outdoor Advertising? What Next?” Forbes, January 2012.

“Google Search Statistics.” Internet Live Stats. Accessed July 8, 2018.

“Google Terms of Service,” October 2017.

“Google Trends.” Google Trends. Accessed July 8, 2018.

Gregory, Karen. “Big Data, Like Soylent Green, Is Made of People.” Digital Labor Working Group, November 2014.

Griffith, Erin. “Will Facebook Kill All Future Facebooks?” Wired, October 2017.

Harris, David L. “Massachusetts Woman’s Lawsuit Accuses Google of Using Free Labor to Transcribe Books, Newspapers.” Boston Business Journal, January 2015.

Hern, Alex. “Facebook Protects Far-Right Activists Even After Rule Breaches.” The Guardian, July 2018.

Hess, Charlotte, and Elinor Ostrom. “Introduction: An Overview of the Knowledge Commons.” In Understanding Knowledge as a Commons, 3–26. Cambridge, Massachusetts: The MIT Press, 2007.

Hirsch Ballin, Ernst, Dennis Broeders, Erik Schrijvers, Bart van der Sloot, Rosamunde van Brakel, and Josta de Hoog. “Big Data in Een Vrije En Veilige Samenleving.” Den Haag: Wetenschappelijke Raad voor het Regeringsbeleid/Amsterdam University Press, April 2016.

Hobbes, Thomas. Leviathan. Edited by Richard Tuck. Cambridge: Cambridge University Press, 1996.

Hoepman, Jaap-Henk. “Doorbreek Monopolies Met Open Standaarden En API’s,” February 2018.

“How Google Retains Data We Collect.” Google Privacy & Terms. Accessed June 23, 2018.

“How It Works.” Briar. Accessed August 13, 2018.

Ibsen, Henrik. A Doll’s House. Gloucester: Dodo Press, 2015.

“Internet: Toegang, Gebruik En Faciliteiten.” Centraal Bureau Voor de Statistiek – StatLine. Accessed June 23, 2018.

Jagadish, H. V., Johannes Gehrke, Alexandros Labrinidis, Yannis Papakonstantinou, Jignesh M. Patel, Raghu Ramakrishnan, and Cyrus Shahabi. “Big Data and Its Technical Challenges.” Communications of the ACM 57, no. 7 (July 2014): 86–94. doi:10.1145/2611567.

Kamona, Bonnie. “I Saw a Tweet Saying ‘Google Unprofessional Hairstyles for Work’.” Tweet. @HereroRocher, April 2016.

Kant, Immanuel. Notes and Fragments. Cambridge: Cambridge University Press, 2005.

Kreiken, Floris. “Humanitair-Vrijheids-Vrede-Mensenrechten-Project-Facebook.” Bits of Freedom, August 2014.

Kreling, Tom, Huib Modderkolk, and Maartje Duin. “De Hel Achter de Façade van Facebook.” Volkskrant, April 2018.

Levy, Steven. “How Google’s Algorithm Rules the Web.” Wired, February 2010.

Li, Mark, and Zhou Bailang. “Discover the Action Around You with the Updated Google Maps.” The Keyword, July 2016.

“List of Public Corporations by Market Capitalization.” Wikipedia, June 2018.

Madrigal, Alexis C. “How Google Builds Its Maps—and What It Means for the Future of Everything.” The Atlantic, September 2012.

“Mastodon.” Accessed August 13, 2018.

Matsakis, Louise. “Don’t Ask Wikipedia to Cure the Internet.” Wired, March 2018.

———. “YouTube Will Link Directly to Wikipedia to Fight Conspiracy Theories.” Wired, March 2018.

Miller, Ron. “Cheaper Sensors Will Fuel the Age of Smart Everything.” TechCrunch, March 2015.

Murphy, Mike, and Akshat Rathi. “All of Google’s—Er, Alphabet’s—Companies and Products from A to Z.” Quartz, August 2015.

Neil, Drew. Practical Vim: Edit Text at the Speed of Thought. Pragmatic Bookshelf, 2015.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press, 2018.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books, 2016.

Ogden, Maxwell, Karissa McKelvey, Matthias Buus Madsen, and Code for Science. “Dat – Distributed Dataset Synchronization and Versioning,” May 2017. doi:10.31219/

Ohm, Paul. “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.” UCLA Law Review 57 (2009): 1701–78.

“Our $1 Billion Commitment to Create More Opportunity for Everyone.” Accessed August 4, 2018.

“Our Company.” Google. Accessed August 4, 2018.

“Our Mission.” Accessed August 4, 2018.

“Our Society Is Being Hijacked by Technology.” Center for Humane Technology. Accessed August 7, 2018.

Page, Larry. “G Is for Google.” Alphabet. Accessed July 8, 2018.

Perez de Acha, Gisela. “Our Naked Selves as Data – Gender and Consent in Search Engines,” April 2018.

Pettit, Philip. “Freedom as Antipower.” Ethics 106, no. 3 (1996): 576–604.

———. Just Freedom: A Moral Compass for a Complex World. W. W. Norton & Company, 2014.

Pichai, Sundar. “AI at Google: Our Principles.” Google, June 2018.

Pierce, David. “Facebook Has All of Snapchat’s Best Features Now.” Wired, March 2017.

“Predictive Policing Software.” PredPol. Accessed June 23, 2018.

Rawls, John. A Theory of Justice. Revised Edition. Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 1999.

———. “The Priority of Right and Ideas of the Good.” Philosophy & Public Affairs 17, no. 4 (1988): 251–76.

“reCAPTCHA – Creation of Value.” Accessed July 7, 2018.

Rogers, Simon. “Data Are or Data Is? The Singular V Plural Debate.” The Guardian, July 2012.

Ruane, Laura. “Signs of Our Times: Airport Ads Are Big Business.” USA Today, September 2013.

Rushe, Dominic. “WhatsApp: Facebook Acquires Messaging Service in $19bn Deal.” The Guardian, February 2014.

Safian, Robert. “Mark Zuckerberg on Fake News, Free Speech, and What Drives Facebook.” Fast Company, April 2017.

“São Paulo: A City Without Ads.” Adbusters, August 2007.

Scahill, Jeremy. “The Assassination Complex.” The Intercept, October 2015.

Schüll, Natasha Dow. Addiction by Design: Machine Gambling in Las Vegas. Princeton: Princeton University Press, 2012.

“Search Engine Market Share.” NetMarketShare. Accessed July 8, 2018.

Shane, Scott, and Daisuke Wakabayashi. “‘The Business of War’: Google Employees Protest Work for the Pentagon.” The New York Times, April 2018.

Singhal, Amit. “A Flawed Elections Conspiracy Theory.” POLITICO Magazine, August 2016.

Smith, Kit. “39 Fascinating and Incredible YouTube Statistics.” Brandwatch, April 2018.

“Sociale Netwerken Dagelijks Gebruik Vs. App Geïnstalleerd Nederland.” Marketingfacts, July 2017.

Stallman, Richard. “A Radical Proposal to Keep Your Personal Data Safe.” The Guardian, April 2018.

———. “Measures Governments Can Use to Promote Free Software, and Why It Is Their Duty to Do so.” GNU Project – Free Software Foundation, January 2018.

Stephens-Davidowitz, Seth. Everybody Lies. London: Bloomsbury Publishing, 2017.

———. “The Cost of Racial Animus on a Black Candidate: Evidence Using Google Search Data.” Journal of Public Economics 118 (October 2014): 26–40.

Sterling, Bruce. “Science Column #5 ‘Internet’.” The Magazine of Fantasy and Science Fiction, February 1993.

“Text of Creative Commons Attribution-ShareAlike 3.0 Unported License.” Wikipedia, April 2018.

“The Open Source Definition.” Open Source Initiative, March 2007.

Thompson, Ben. “Aggregation Theory.” Stratechery by Ben Thompson, July 2015.

———. “Antitrust and Aggregation.” Stratechery by Ben Thompson, April 2016.

Thompson, Clive. “For Certain Tasks, the Cortex Still Beats the CPU.” Wired, June 2007.

“Turning Big Data into Actionable Information.” Mezuro. Accessed July 8, 2018.

“Understanding Mobility.” Mezuro. Accessed July 8, 2018.

Vaithianathan, Rhema, Tim Maloney, Emily Putnam-Hornstein, and Nan Jiang. “Children in the Public Benefit System at Risk of Maltreatment: Identification via Predictive Modeling.” American Journal of Preventive Medicine 45, no. 3 (September 2013): 354–59. doi:10.1016/j.amepre.2013.04.022.

Van Hoboken, Joris. “Comment on ’Democracy Under Siege’: ’Digital Espionage and Civil Society Resistance’ Presentation by Seda Gürses.” Spui25, July 2018.

Varian, Hal R. “Computer Mediated Transactions.” American Economic Review 100, no. 2 (May 2010): 1–10. doi:10.1257/aer.100.2.1.

Von Ahn, Luis, and Will Cathcart. “Teaching Computers to Read: Google Acquires reCAPTCHA.” Official Google Blog. Accessed July 22, 2018.

“Wealthfront Investment Methodology White Paper.” Wealthfront. Accessed June 24, 2018.

“What Is Free Software?” Free Software Foundation. Accessed August 13, 2018.

“What Is reCAPTCHA?” Google Developers. Accessed July 22, 2018.

“What We Do.” Data & Society. Accessed August 13, 2018.

“Who Are We?” Women on Waves. Accessed July 21, 2018.

Wong, Julia Carrie. “Cambridge Analytica-Linked Academic Spurns Idea Facebook Swayed Election.” The Guardian, June 2018.

“YouTube Premium.” YouTube. Accessed August 7, 2018.

Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1 (March 2015): 75–89. doi:10.1057/jit.2015.5.

———. “Google as a Fortune Teller: The Secrets of Surveillance Capitalism.” Frankfurter Allgemeine Zeitung, March 2016.

  1. Translated to English: “I make my anonymized network data available for analysis.”

  2. “Turning Big Data into Actionable Information.”

  3. “Understanding Mobility.”

  4. “Our” is often an unspoken exclusive notion, so to make it explicit: This thesis is written from my perspective as a Dutch citizen. The concept of “our” and “we” in this thesis thus encompasses (parts of) society in North Western Europe. There are many parts of the world where the pace of digitization isn’t rapid and where the themes of this thesis will have very little bearing on daily reality.

  5. Data from 2017, see: “Internet: Toegang, Gebruik En Faciliteiten.”

  6. Data from June 2017, see: “Sociale Netwerken Dagelijks Gebruik Vs. App Geïnstalleerd Nederland.”

  7. In principle technology could have a very broad definition. You could argue that a book is a technology mediating between the reader and the writer. For this thesis my definition of technology is a bit narrower. I am referring to the information and communication technologies that have accelerated the digitization of society and have categorically transformed it in the last thirty years or so (basically since the advent of the World Wide Web).

  8. Zuboff, “Big Other.”

  9. This is Varian’s euphemism for surveillance.

  10. Varian, “Computer Mediated Transactions,” 2.

  11. Jagadish et al., “Big Data and Its Technical Challenges,” 88–90.

  12. See for example: Hirsch Ballin et al., “Big Data in Een Vrije En Veilige Samenleving,” 21.

  13. I will often use data with a singular verb, see: Rogers, “Data Are or Data Is?”

  14. This three-phase model also aligns with Zuboff’s model of surveillance capitalism.

  15. Miller, “Cheaper Sensors Will Fuel the Age of Smart Everything.”

  16. Curran, “Are You Ready?”

  17. “How Google Retains Data We Collect.”

  18. This isn’t being too restrictive. As Karen Gregory writes: “Big data, like Soylent Green, is made of people.” See: Gregory, “Big Data, Like Soylent Green, Is Made of People.”

  19. “What Is Personal Data?”

  20. Ohm, “Broken Promises of Privacy.”

  21. Anderson, “The End of Theory.”

  22. Ibid.

  23. “Choose Your Audience.”

  24. “Facebook Ads Targeting Comprehensive List.”

  25. I find this final category deeply problematic, see: De Zwart, “Facebook Is Gemaakt Voor Etnisch Profileren.”

  26. “Predictive Policing Software.”

  27. “Wealthfront Investment Methodology White Paper.”

  28. “DeepMind Health.” DeepMind’s slogan on their homepage is “Solve intelligence. Use it to make the world a better place.”

  29. Van Hoboken, “Comment on ’Democracy Under Siege’.”

  30. “Definition of Appropriate in the Oxford Dictionary.”

  31. “Definition of Appropriate in the Merriam Webster Dictionary.”

  32. Gellman, “Disintermediation and the Internet,” 7.

  33. Thompson, “Aggregation Theory.”

  34. Thompson, “Antitrust and Aggregation.”

  35. This is also one of the reasons why classical antitrust thinking doesn’t have the toolkit to address this situation.

  36. Barocas and Nissenbaum, “Big Data’s End Run Around Procedural Privacy Protections,” 32.

  37. The top ten at the end of the first quarter of 2011 were Exxon Mobil, PetroChina, Apple Inc., ICBC, Petrobras, BHP Billiton, China Construction Bank, Royal Dutch Shell, Chevron Corporation, and Microsoft. At the end of the first quarter of 2018, Apple Inc., Alphabet Inc., Microsoft, Tencent, Berkshire Hathaway, Alibaba Group, Facebook, JPMorgan Chase, and Johnson & Johnson were at the top of the list. See: “List of Public Corporations by Market Capitalization.”

  38. Google is now a wholly owned subsidiary of Alphabet, but all these examples still fall under the Google umbrella. See: Page, “G Is for Google.”

  39. Alphabet literally has products starting with every letter of the alphabet. See: Murphy and Rathi, “All of Google’s—Er, Alphabet’s—Companies and Products from A to Z.”

  40. “Search Engine Market Share.”

  41. In the Netherlands for example, Google Search has an 89% market share on the desktop and a 99% market share on mobile. See: Borgers, “Marktaandelen Zoekmachines Q1 2018.”

  42. “Google Search Statistics.”

  43. Levy, “How Google’s Algorithm Rules the Web.”

  44. For most use cases, that is; in specific domains, niche search engines might perform better.

  45. “Google Trends.”

  46. Stephens-Davidowitz, “The Cost of Racial Animus on a Black Candidate,” 36.

  47. Stephens-Davidowitz, Everybody Lies, 14.

  48. Google noticed Stephens-Davidowitz’s research and hired him as a data scientist. He stayed on for one and a half years. See: “About Seth.”

  49. Court of Justice, “Google Spain SL and Google Inc. V Agencia Española de Protección de Datos (AEPD) and Mario Costeja González,” para. 87.

  50. Perez de Acha, “Our Naked Selves as Data – Gender and Consent in Search Engines.”

  51. Noble, Algorithms of Oppression, 3.


  53. Kamona, “I Saw a Tweet Saying ‘Google Unprofessional Hairstyles for Work’.”

  54. In 2017, the percentage of black tech workers at Google was 1.4%. See: “Annual Report – Google Diversity.”

  55. Noble, Algorithms of Oppression, 80.

  56. Smith, “39 Fascinating and Incredible YouTube Statistics.”

  57. Brouwer, “YouTube Now Gets over 400 Hours of Content Uploaded Every Minute.”

  58. Goodrow, “You Know What’s Cool?”

  59. This last figure is particularly staggering. It means that if you pick any world citizen at any point in time, the chance that they are watching a YouTube video at that very moment is greater than 1 in 200. Put another way: globally, we spend more than 0.5% of all the time available to us watching videos on YouTube.
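
A quick back-of-the-envelope sketch of this claim, taking the one billion daily watch-hours reported by YouTube and assuming a world population of roughly 7.5 billion (a rough 2018 estimate, not a figure from the sources above):

```python
# Back-of-the-envelope check of the "more than 1 in 200" claim.
WATCH_HOURS_PER_DAY = 1e9   # YouTube's reported daily watch time
WORLD_POPULATION = 7.5e9    # assumed rough 2018 world population
HOURS_PER_DAY = 24

# Total person-hours available to humanity per day.
total_person_hours = WORLD_POPULATION * HOURS_PER_DAY

# Share of all human time spent watching YouTube.
fraction_watching = WATCH_HOURS_PER_DAY / total_person_hours

print(f"{fraction_watching:.2%}")  # roughly 0.56%, i.e. more than 1 in 200
```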

  60. “Who Are We?”

  61. Being present on YouTube is important for them because in many countries it is safer to visit than

  62. Austin, “Women on Waves’ Three YouTube Suspensions This Year Show yet Again That We Can’t Let Internet Companies Police Our Speech.”

  63. Ibid.

  64. Bridle, “Something Is Wrong on the Internet.”

  65. Ibid.

  66. Ibid.

  67. Ibid.

  68. Matsakis, “YouTube Will Link Directly to Wikipedia to Fight Conspiracy Theories.”

  69. Matsakis, “Don’t Ask Wikipedia to Cure the Internet.”

  70. Farokhmanesh, “YouTube Didn’t Tell Wikipedia About Its Plans for Wikipedia.”

  71. Ayers, “YouTube Should Probably Run Some A/B Tests with the Crew at @WikiResearch First.”

  72. Google follows local laws when presenting a border, so when you look up Crimea on the Russian version of Google Maps you see it as part of Russia, whereas if you look at it from the rest of the world it will be listed as disputed territory. See: Chappell, “Google Maps Displays Crimean Border Differently in Russia, U.S.”

  73. I’ve written up this example before. See: De Zwart, “Demystifying the Algorithm.” and De Zwart, “Google Wijst Me de Weg, Maar Niet Altijd de Kortste.”

  74. The residents argue that fire trucks aren’t able to pass by these parked cars in case of an emergency.

  75. The project to improve the quality of the maps at Google is called ‘Ground Truth’. See: Madrigal, “How Google Builds Its Maps—and What It Means for the Future of Everything.”

  76. Li and Bailang, “Discover the Action Around You with the Updated Google Maps.”

  77. Ibid.

  78. Bliss, “The Real Problem with ’Areas of Interest’ on Google Maps.”

  79. De Zwart, “Medium Massage – Writings by Hans de Zwart.”

  80. Often, to then use the server for mining cryptocurrencies.

  81. It stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”.

  82. Thompson, “For Certain Tasks, the Cortex Still Beats the CPU.”

  83. Von Ahn and Cathcart, “Teaching Computers to Read.”

  84. “reCAPTCHA.”

  85. “reCAPTCHA – Creation of Value.”

  86. Harris, “Massachusetts Woman’s Lawsuit Accuses Google of Using Free Labor to Transcribe Books, Newspapers.”

  87. Dinzeo, “Google Ducks Gmail Captcha Class Action.”

  88. “What Is reCAPTCHA?”

  89. Assuming 1,500 working hours per year and 200 million reCAPTCHAs filled in per day, taking 10 seconds each. This estimate is likely too low, but is probably of the right order of magnitude.
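
Read as full-time equivalents, the estimate can be sketched as follows (all inputs are the assumptions stated in the footnote above, not measured figures):

```python
# Rough estimate of the unpaid labor embodied in reCAPTCHA, using the
# footnote's assumptions.
CAPTCHAS_PER_DAY = 200e6       # assumed daily reCAPTCHA volume
SECONDS_PER_CAPTCHA = 10       # assumed solving time per challenge
WORKING_HOURS_PER_YEAR = 1500  # one full-time worker-year

# Total human hours spent on reCAPTCHAs per year.
hours_per_year = CAPTCHAS_PER_DAY * SECONDS_PER_CAPTCHA * 365 / 3600

# Expressed as full-time workers.
full_time_workers = hours_per_year / WORKING_HOURS_PER_YEAR

print(round(full_time_workers))  # on the order of 135,000 full-time workers
```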

  90. Conger and Cameron, “Google Is Helping the Pentagon Build AI for Drones.”

  91. Scahill, “The Assassination Complex.”

  92. Zuboff, “Google as a Fortune Teller.”

  93. Rawls, A Theory of Justice, 6.

  94. Ibid., 15–16.

  95. Ibid., 16–17.

  96. Ibid., 17.

  97. Ibid., 17.

  98. Ibid., 266.

  99. Ibid., 266.

  100. Ibid., 266.

  101. Ibid., 88.

  102. Eubanks, Automating Inequality.

  103. Ibid., 6–7.

  104. Ibid., 12–13.

  105. Ibid., 127–73.

  106. Vaithianathan et al., “Children in the Public Benefit System at Risk of Maltreatment.”

  107. Eubanks, Automating Inequality, 130.

  108. Ibid., 130.

  109. O’Neil, Weapons of Math Destruction, 21.

  110. Eubanks, Automating Inequality, 146.

  111. Ibid., 157.

  112. Ibid., 158.

  113. Ibid., 12.

  114. Angwin and Grassegger, “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.”

  115. Ibid.

  116. Facebook’s euphemism for a code of conduct.

  117. Bickert, “Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process.”

  118. Angwin and Grassegger, “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.”

  119. Hern, “Facebook Protects Far-Right Activists Even After Rule Breaches.”

  120. Kreling, Modderkolk, and Duin, “De Hel Achter de Façade van Facebook.”

  121. Angwin and Grassegger, “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.”

  122. Ibid.

  123. Regrettably, there are only ever male protagonists in Rawls’s examples.

  124. Rawls, A Theory of Justice, 9.

  125. Ibid., 96, 301.

  126. I published the following story as an opinion piece in the NRC newspaper. See: De Zwart, “Miljardenbedrijf Google Geeft Geen Cent Om de Waarheid.”

  127. To be clear: Wikipedia does allow the free use of their information under a Creative Commons Attribution-ShareAlike 3.0 license. Google complies with the attribution clause, but fails to tell its visitors under what license the information is available. See: “Text of Creative Commons Attribution-ShareAlike 3.0 Unported License.”

  128. See for example: Ehrlich, “GOP Senator Says He Is Alive Amid Google Searches Suggesting He Is Dead.”

  129. Sterling, “Science Column #5 ‘Internet’.”

  130. Rawls, A Theory of Justice, 235.

  131. Rawls: “There are the striking cases of public harms, as when industries sully and erode the natural environment.” See: ibid., 237.

  132. Ibid., 237.

  133. Hess and Ostrom, “An Overview of the Knowledge Commons,” 3.

  134. Ibid., 8–9.

  135. Ibid., 12.

  136. The authors probably mean that this was the first enclosure movement to be theorized.

  137. Ibid., 12.

  138. He himself grandiloquently used the word “grandiloquently”.

  139. Boyle, “The Second Enclosure Movement,” 19.

  140. Hess and Ostrom, “An Overview of the Knowledge Commons,” 12.

  141. Ibid., 9.

  142. Balkan, “Encouraging Individual Sovereignty and a Healthy Commons.”

  143. Crawford, The World Beyond Your Head.

  144. Ibid., 11.

  145. Ibid., 11.

  146. Ibid., 12.

  147. Airports are one of the places with the most ads, mainly because it is “a high dwell time environment, delivering a captive audience.” See: Ruane, “Signs of Our Times.”

  148. This doesn’t mean that you will no longer be tracked though. See: “YouTube Premium.”

  149. Falcone, “Amazon Backtracks, Will Offer $15 Opt-Out for Ads on Kindle Fire Tablets.”

  150. Crawford, The World Beyond Your Head, 13.

  151. Ibid., 13–14.

  152. Ibid., 14.

  153. Rawls, “The Priority of Right and Ideas of the Good,” 257.

  154. Ibid., 257.

  155. Rawls himself mentions leisure time and the absence of physical pain as potential candidates. See: ibid., 257.

  156. “Our Society Is Being Hijacked by Technology.”

  157. Pichai, “AI at Google.”

  158. The principles were probably a direct reaction to their employees protesting a Google contract with the US Department of Defense. See: Shane and Wakabayashi, “‘The Business of War’.”

  159. Pichai, “AI at Google,” italics added.

  160. Rawls, A Theory of Justice, 22.

  161. Ibid., 23.

  162. Ibid., 13.

  163. Ibid., 24.

  164. Ibid., 13.

  165. Ibid., 24.

  166. Google is just the example here, because they’ve made their ethics explicit. Many of the other technology companies behave on the basis of a similar ethical stance.

  167. Also called “neorepublicanism”.

  168. Pettit, Just Freedom, xiv.

  169. Ibsen, A Doll’s House, 88.

  170. In my Bachelor’s thesis, I’ve attempted to show that a classic liberal negative conception of freedom as non-interference has a much harder time showing what’s wrong with our current technological predicament than the republican ideal of freedom. See: De Zwart, “Liberty, Technology and Democracy.”

  171. Pettit, Just Freedom, xv.

  172. Ibid., 30.

  173. Ibid., 34–35.

  174. Ibid., 36–38.

  175. Hobbes, Leviathan, 146.

  176. Berlin, “Two Concepts of Liberty,” 32.

  177. Pettit, Just Freedom, 41.

  178. Ibid., 41.

  179. Ibid., 43, emphasis added.

  180. Ibid., 43.

  181. For Pettit this explains why we feel resentment when we are controlled by another’s will, and only exasperation when the constraints don’t have anything to do with the will. See: ibid., 215n28.

  182. Kant, Notes and Fragments, 11.

  183. Pettit, Just Freedom, 57.

  184. Ibid., 58.

  185. Ibid., 60.

  186. Ibid., 62.

  187. Ibid., 62–63.

  188. Ibid., 73.

  189. Schüll, Addiction by Design, 92.

  190. Calo and Rosenblat, “The Taking Economy,” 1655.

  191. Ibid., 1662.

  192. Ibid., 1662.

  193. Epstein and Robertson, “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections,” E4512.

  194. Epstein, “How Google Could Rig the 2016 Election.”

  195. Singhal, “A Flawed Elections Conspiracy Theory.”

  196. Epstein, “How Google Could Rig the 2016 Election.”

  197. Cadwalladr and Graham-Harrison, “Revealed.”

  198. “Data Drives All That We Do.”

  199. “Data-Driven Campaigns.”

  200. “Donald J. Trump for President.”

  201. It is important to realize that there is probably a big gap between the commercial sales language that Cambridge Analytica uses and the actual abilities of its products (see for example: Wong, “Cambridge Analytica-Linked Academic Spurns Idea Facebook Swayed Election.”). However, it is still early days for using data to drive behavior, and the predictions don’t have to be perfect to have an impact.

  202. “Our $1 Billion Commitment to Create More Opportunity for Everyone.”

  203. “Our Mission.”

  204. Unfortunately, it seems to equate “internet access” with “access to Facebook”. See: Kreiken, “Humanitair-Vrijheids-Vrede-Mensenrechten-Project-Facebook.”

  205. Only in a monetary sense, of course.

  206. “Our Company.”

  207. Safian, “Mark Zuckerberg on Fake News, Free Speech, and What Drives Facebook.”

  208. Pettit, Just Freedom, 88.

  209. All of the quotations of terms come from: “Google Terms of Service.”

  210. Balkan, “The Nature of the Self in the Digital Age.”

  211. For example through hijacking the two-factor SMS code. See: Gibbs, “SS7 Hack Explained.” or Franceschi-Bicchierai, “The SIM Hijackers.”

  212. Pettit, Just Freedom, 90.

  213. Ibid., 181–82.

  214. Rawls also understands the freedom limiting aspects of arbitrary decisions: “But if the precept of no crime without a law is violated, say by statutes, being vague and imprecise, what we are at liberty to do is likewise vague and imprecise. The boundaries of our liberty are uncertain. And to the extent that this is so, liberty is restricted by a reasonable fear of its exercise.” See: Rawls, A Theory of Justice, 210.

  215. Pettit, “Freedom as Antipower,” 589–90.

  216. Ibid., 593.

  217. It would probably be feasible to argue that many of the problems in our technological predicament have the prevailing neo-liberal form of capitalism as their root cause. This would mean that solutions to the problem need to consist of finding pathways to new economic arrangements. As this thesis does not contain a critique of capitalism, these explorations don’t explicitly address economic systems either. However, it is glaringly obvious that many of these ideas will only be successful with appropriate adjustments to our accumulation mindset.

  218. Rushe, “WhatsApp.”

  219. “Commission Fines Facebook €110 Million for Providing Misleading Information About WhatsApp Takeover.”

  220. Alphabet, Amazon, Apple, Facebook, and Microsoft together spent $31.6bn on acquisitions in 2017. See: Economist, “American Tech Giants Are Making Life Tough for Startups.”

  221. Ibid.

  222. Pierce, “Facebook Has All of Snapchat’s Best Features Now.”

  223. Griffith, “Will Facebook Kill All Future Facebooks?”

  224. Google is in a similar information position, through owning the Google app store, and through its Chrome browser.

  225. Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data.”

  226. “Mastodon.”

  227. “How It Works.”

  228. “Dat Project – A Distributed Data Community.”

  229. Ogden et al., “Dat – Distributed Dataset Synchronization and Versioning.”

  230. De Zwart, “Hans de Zwart’s Books.”

  231. De Zwart, “Hans Fietst.”

  232. “Commons Transition and P2P,” 5, emphasis removed.

  233. Ibid., 5.

  234. Ibid., 10.

  235. The most interesting work is probably done at the Data & Society research institute in New York, which focuses on the social and cultural issues arising from data-centric and automated technologies. See: “What We Do.”

  236. “São Paulo.”

  237. Goodson, “No Billboards, No Outdoor Advertising?”

  238. Hoepman, “Doorbreek Monopolies Met Open Standaarden En API’s.”

  239. “General Data Protection Regulation,” Article 20.

  240. Ibid., Article 20.

  241. Claburn, “Facebook, Google, Microsoft, Twitter Make It Easier to Download Your Info and Upload to, Er, Facebook, Google, Microsoft, Twitter Etc…”

  242. “Data Transfer Project Overview and Fundamentals.”

  243. Stallman also has a radical proposal to keep our personal data safe: create laws that stop data appropriation. See: Stallman, “A Radical Proposal to Keep Your Personal Data Safe.”

  244. “What Is Free Software?”

  245. See “The Open Source Definition” for a definition of what makes a license free.

  246. Stallman, “Measures Governments Can Use to Promote Free Software, and Why It Is Their Duty to Do so.”

  247. Neil, Practical Vim.

Liberty, Technology and Democracy

This is the thesis I wrote for my BA in philosophy. I want to thank Gijs van Donselaar for his excellent supervision.

The thesis can also be downloaded as a PDF.


Somewhere in early 2014 I made the decision to quit using Google. I had already quit Facebook many years before1 but was still all-in with Google’s services. Google not only provided me with Search capabilities, it also served2 as my news reader, my online photo album, my mapping and routing service, and most importantly as my email and calendar provider. It took a lot of effort to find a new place to receive and store my email and to find a host where I could install an open source alternative search engine, news reader and photo album.

According to most measures I am worse off in the new situation: I can no longer easily find my emails, I have to invest time in maintaining all these self-hosted applications, I am probably less secure against people who want to hack into my things and I am now intimately aware of how much less convenient OpenStreetMap is in comparison to Google Maps if you have to get somewhere. Still, for some reason I am very happy having made the move. This is because without having to use Google I feel more free. I feel liberated.

How can I feel this way? How exactly was Google making me less free? From a classic liberal (and dominant) point of view I am free if I am not constrained in my options and if I am not interfered with. Isn’t it the case that there is no interference from Google in our lives? Aren’t they just a service provider whom nobody is forcing you to use? You could even argue that I have less functionality and fewer options, and so have less freedom.

This thesis explores whether a different conception of freedom —a (neo-)republican one— could explain the feeling of liberation that I had after moving away from Google. If we don’t see freedom as lack of interference, but as lack of domination, would that make it easier to take a critical look at the role of information companies like Google?

To see whether this is the case we will first take a deeper look at the thinking of neo-republicans like Quentin Skinner and Philip Pettit. Isaiah Berlin famously wrote about two concepts of liberty3: positive liberty (often seen as self-mastery) and negative liberty (the absence of interference). Skinner is inspired by the republican tradition to explicitly define a third concept of liberty. This social or political form of freedom can be characterised as not being dependent, a situation where there is no arbitrary domination. By looking closely at two critics of republican thinking —William Paley and Matthew H. Kramer— and by looking at the replies of Skinner and Pettit to these critiques we gain a more precise understanding of the differences between the concepts of negative freedom and republican freedom. Pettit and Skinner both deny that a slave —however benign their master— can ever be considered free.

Back to Google. They are not the only US West Coast information-based company having a big influence over our lives. Not too long ago the oil majors and a few big banks were at the top of the list of the biggest companies in the world. Currently the top five largest companies are Apple, Alphabet (Google’s parent company), Microsoft, Amazon and Facebook.4 These five companies show us that “the world’s most valuable resource is no longer oil, but data.”5 So, is there indeed a form of arbitrary and domineering power from these information giants over their users? In order to answer that question we look at three different ways of framing our relationship with and our dependency on technology.

Firstly we delve into Shoshana Zuboff’s concept of ‘surveillance capitalism’. She explains how the internet giants extract value from us by collecting as much data about us as possible, analysing that data with data scientists and machine learning algorithms, then making behavioural predictions on the basis of data, to finally sell those predictions on prediction markets. Next we look at the work of Evgeny Morozov and Bruce Schneier who both make an explicit analogy between our relationship to the big five and those of the peasants to the landowners during feudal times. Finally we look at some of the research that Facebook has been doing to lift the veil that hides much of their activities.

These ways of looking at technology help us take another look at the different ideals of freedom. We show how a strictly negative liberal view of freedom has trouble addressing surveillance and thus surveillance capitalism. The republican way of framing power relationships is helpful in situations where we are not aware of the potential for arbitrary control that organisations have over us. Republicanism requires a deliberative democracy, which is put under pressure by technological developments. Finally we will look at what this will likely do to our psychological state of mind.

This thesis finishes with a set of directions for solutions that can possibly be offered by republican thinking. We touch on three forms of antipower: protection through data protection legislation and encryption, regulation through antitrust, and empowerment through free and federated technology.


Understanding whether, and if so how, our current technological reality inhibits our (political) freedom requires a deeper understanding of the different conceptions of liberty. In this chapter we first look at the classic liberal concept of negative freedom as described by Berlin, and then explore republican thinking through Skinner and Pettit. We finish with two liberal critics of republican freedom and the response to that criticism. This discussion gives us the tools to take a critical look at Silicon Valley and the services it provides.

Liberal freedom: freedom from interference

In 1958 Isaiah Berlin delivered an inaugural lecture before the University of Oxford titled Two Concepts of Liberty. In it he looks at two political senses of freedom. The one which he calls the positive sense is involved in trying to answer the question “What, or who, is the source of control or interference that can determine someone to do, or be, this rather than that?”6 Whereas the negative sense is involved with the question “What is the area within which the subject — a person or a group of persons — is or should be left to do or be what he is able to do or be, without interference by other persons?”7

Berlin sees positive freedom as the ability to be one’s own master. This self-mastery or the ability to be in control or to be fully yourself is then often equated with being rational. This is exactly where Berlin saw the danger in the concept. He notes how often in history a concept of positive freedom is used to force a collective will (from a tribe, the church, a state) onto the individual in the name of their ‘real selves’, arguing that the individual doesn’t know what is good for them. Positive freedom can thus easily gain an authoritarian streak: oppression in the name of freedom.8

It is for this reason that Berlin thinks that negative liberty is the more important concept for political freedom. He writes:

I am normally said to be free to the degree to which no man or body of men interferes with my activity. Political liberty in this sense is simply the area within which a man can act unobstructed by others. If I am prevented by others from doing what I could otherwise do, I am to that degree unfree [..].9

He is very explicit that only constraints created by humans can take away our political liberty. So being free means the absence of interference. “The wider the area of non-interference the wider my freedom.”10

This way of looking at freedom has become the dominant perspective on political liberty. When we talk about freedom in the context of politics we nearly always talk about negative freedom. It is what underlies the individual liberties like freedom of speech, freedom of religion and freedom of movement. The role of the state in this perspective is clear. It is there to make sure that these individual liberties are protected and that citizens don’t coerce each other without justification. State interference can be justified if it protects individual rights, but is still a limitation of our freedom (with being in prison as the ultimate form of not being free). Where the law ends, freedom begins. The current political liberal program is mostly based on this thinking.

(Neo)-Republican freedom: freedom from domination

Both Quentin Skinner and Philip Pettit believe that Berlin completely misses a particular dimension of political freedom. Effectively saying that there is a third concept of freedom, they argue that being free means being free of arbitrary domination. They are called neo-republicans because their thinking is a continuation of classic republican ideas. To confuse matters further, Skinner prefers calling this thinking ‘neo-Roman’ as he considers Rome to be the birthplace of the republic.11

Skinner —an eminent historian— shows in Liberty before Liberalism what the republican traditions of Machiavelli, the English republicans and the American founders consist of. According to Skinner they all share a set of two assumptions. The first being that:

[Any] understanding of what it means for an individual citizen to possess or lose their liberty must be embedded within an account of what it means for a civil association to be free.12

According to these authors the natural body and the body politic are very similar in how they can forfeit their liberty. The body politic should govern itself, preferably through some representative body of the people.

Their second shared assumption is that:

[What] it means to speak of a loss of liberty in the case of a body politic must be the same as in the case of an individual person. And they go on to argue [..] that what it means for an individual person to suffer a loss of liberty is for that person to be made a slave.13

They contrast the concept of liberty with the concept of slavery. Slaves don’t lose their freedom because they are being coerced. There are enough examples of slaves who manage to avoid being coerced. The crux of the master-slave distinction is a power relationship:

A slave is [..] someone whose lack of freedom derives from the fact that they are ‘subject to the jurisdiction of someone else’ and are consequently ‘within the power’ of another person.14

This concept of ‘jurisdiction’ will be useful in our analysis further down the line. Living under an arbitrary power capable of interfering in your activities without having to consider your interests is enough to make you unfree.15

Within the republican concept of freedom, Pettit puts more focus on non-domination. According to him there is no domination without unfreedom.16 But domination and interference do need to be pulled apart from each other: we can have domination without interference (a non-interfering master) and interference without domination (a non-mastering interferer).17

For Pettit there are three aspects to a relationship of domination. The dominator has the capacity to interfere, this capacity needs to have an arbitrary basis, and it must bear on certain choices that the other is in a position to make.18 He considers acts of interference non-arbitrary when the act of power tracks the welfare of the public (or the subject) rather than the welfare of the power holder.19

Pettit considers non-domination to be both necessary and sufficient for the ideal of political freedom:

The necessity claim is that if a person is dominated in certain activities, if he or she performs those activities in a position where there are others who can interfere at their pleasure, then there is a sense in which that person is not free. [..] The sufficiency claim is that if a person is not dominated in certain activities—if they are not subject to arbitrary interference—then however much non-arbitrary interference or however much non-intentional obstruction they suffer, there is a sense in which they retain their freedom.20

Basically Pettit is biting the bullet and agreeing that within a republican concept of freedom somebody who has been convicted of a crime and is in jail can still be free (in some sense). He would argue that the law in a well-ordered republic could be considered a non-mastering interferer.21 As long as the interference is not arbitrary and is controlled by the interests and opinions of those affected, then it doesn’t represent a form of domination.22

Republican thinking runs counter to the classic liberal thinking about the law which sees it as an inhibitor of freedom. It is the difference between liberty by the law and liberty from the law.23 Republicans consider a strategy of constitutional provision as a way to achieve non-domination. A constitutional authority will not only make sure that its citizens aren’t coerced, it also needs to make sure that citizens aren’t arbitrarily dependent on the goodwill of others. Any interference that it practices must be suitably responsive to the common good.24

Critics of republican freedom

Looking at the critics of the republican ideal of freedom can help us get an even sharper perspective on the differences between freedom as non-interference and freedom as non-domination.

Paley’s objections and Pettit’s defense

In the late 18th century William Paley famously formulated three criticisms of the concept of non-domination as an ideal of liberty.25 In Republicanism Pettit summarizes and counters his arguments.26

Firstly Paley says that republicans confuse the means with the end. They “describe not so much liberty itself, as the safeguards and preservatives of liberty.”27 Pettit thinks that Paley doesn’t understand what republicans mean when they say they want to secure non-interference by taking away arbitrary power. It isn’t their goal to promote non-interference; it is their goal to protect against it by taking away the ability of the other to interfere in an arbitrary manner.28

Next Paley argues that republicans are too black and white in their perspective on freedom. Republican expressions that “speak of a free people; of a nation of slaves; which call one revolution the aera [sic] of liberty, or another the loss of it; with many expressions of a like absolute form,” are, he says, “intelligible only in a comparative sense.”29 Pettit explains that domination can actually vary in both intensity and in extent. He makes a distinction between factors that compromise liberty and factors that condition it. If you are not dominated and so your freedom is not compromised, there might still be significant limitations of your options conditioning your freedom. Your freedom as non-domination can be increased by taking away these conditioning factors.30

Paley’s final objection is that an ideal of non-domination is just too hard to accomplish. Republican ideas about liberty will “[be] unattainable in experience, inflame expectations that can never be gratified, and disturb the public content with complaints, which no wisdom or benevolence of government can remove.”31 Pettit is convinced that one reason that the ideal of non-interference became so dominant is because the ruling classes couldn’t stand the moral imperative towards equality that comes with an ideal of non-domination. The prevailing notions of the time were that employees and servants were subject to the will of their master and women were subject to the will of their father or husband.32 Pettit’s reply to Paley merits a full quotation:

The shift from freedom as non-interference to freedom as non-domination [has] two effects [..]. [It] is going to make us potentially more radical in our complaints about the ways in which social relationships are organized. And it is going to make us potentially less sceptical about the possibilities of rectifying those complaints by recourse to state action.33

This point is important to remember when we start looking at our technological society from a republican perspective.

A modern liberal criticism of the republican ideal

Current day critics of the republican ideal like Ian Carter34 and Matthew H. Kramer35 argue that a pure negative liberty theory is more capacious than the republicans say. These critics have a slightly enlarged view of negative liberty compared with, say, Hobbes, who argues that only actual interference can count as limiting freedom (“Liberty, of Freedome, signifieth (properly) the absence of Opposition; (by Opposition, I mean the externall Impediments of motion;)”36). They see freedom not only as being reduced by actual interference but also by potential interference (like coercion, threats and displays of superiority), and they see freedom as being reduced when options are being foreclosed. This means that the readiness to interfere, which according to them is what domination amounts to, reduces freedom. Thus they argue that there is no need to go beyond the theory of negative liberty.37

Central to these thinkers’ criticism of Skinner and Pettit is the claim that freedom is only negatively and proportionally affected in relation to the probability of the power actually being exercised. The threat needs to be plausible.38

Kramer describes three interesting questions around dominance which he considers to be problematic for republicans. To understand them it is important to know Kramer’s definition of freedom. He argues that “the overall freedom of each person [..] is largely determined by the range of the combinations of conjunctively exercisable opportunities that are available to him.”39

The first question is whether it matters for your freedom that you know that you are being dominated. According to Kramer, Skinner would argue that you need to have knowledge of the dominating power of the other before your freedom is limited. Kramer thinks this is too narrow and comes up with the following example:

If a man is in a room where the only door has been firmly locked by someone else, then he is unfree-to-depart irrespective of whether he knows that the door cannot be opened. Of course, he will not feel unfree unless he does apprehend that he is confined to the room; but he will be unfree even if he remains ignorant of his plight.40

So for Kramer your unfreedom is independent of your knowledge of your unfreedom.41

Secondly Kramer wonders whether it is important that the act of dominating interference is intentional. He quotes Pettit saying that non-intentional forms of obstruction can’t count as interference.42 Again Kramer thinks this is too narrow and demonstrates this with another example of people in a room. Imagine that Mark and Molly are both in a room and that Simon locks the door because he wants to forcibly confine Molly in the room. Imagine that Simon didn’t know that Mark was in the room. If we correlate intentional and non-intentional with “unfree” and “not free” then we have to conclude “that a single human act which imposes exactly the same physical constraints on two people of similar capacities has affected their unfreedom in markedly different ways.”43 Kramer thinks that this shows that the republicans have a moralized account of freedom where there is not enough attention for the (in)abilities of Mark and too much attention for the morality of Simon’s action.44

Kramer’s final question is what it means if there is a situation of dominance where interference is completely improbable. Taking into account Kramer’s definition of freedom you can see that the dominator’s superiority by itself is not a source of unfreedom, rather it is what the dominator does with its superiority. Republicans see the dominator’s superiority itself as a source of unfreedom. Kramer thinks this is a problematic perspective and uses the example of the friendly giant to make his point. Imagine a giant born in a community where he is larger, stronger, swifter and more intelligent than any of his compatriots. Imagine that if he wanted to he could gain autocratic sway over the community and that he himself is very aware of this. Imagine also that he actually loathes that idea and decides to live a lonely life in a cave in the hills nearby. According to Kramer, Pettit would call this giant a dominator even though he is not reducing the overall liberty of anybody else.45 Kramer thinks this makes no sense; he concludes:

In the very rare circumstances where relationships of domination genuinely involve extremely low probabilities of nontrivial encroachments on the freedom of subordinate people, we should not characterize the state of subordination as a state of unfreedom.46

A slave can’t be free: a republican response to their critics

In response to the criticism Skinner decides to keep the strict disconnect between the presence of unfreedom and the imposition of interference. To him liberty consists of being independent from the will of another. If you are subject to the arbitrary power of someone else, then you are no longer able to act or forbear from acting according to your own will and desires, forfeiting your liberty.47

For Skinner it isn’t necessary that the arbitrary power is ever exercised: just the potentia of the ruler turns its subjects into slaves, depriving them of their liberty.48 It is true that people who are aware of being dominated tend to have a lack of energy and initiative and can be expected to behave with servility and censor themselves, but that doesn’t make knowing about your enslaved position a necessity for losing your liberty. As Skinner writes:

[Anyone] who reflects on their own servitude will probably come to feel unfree to act or forbear from acting in certain ways. But what actually makes them unfree is the mere fact of living in subjection to arbitrary power.49

Pettit has a more formal analytical approach to answer his critics, reformulating the republican conception of freedom in the process. He does this by means of three axioms and four theorems.

The three axioms are as follows:50

  1. The reality of personal choice — The options we face are really options and we choose them at our will.
  2. The possibility of alien control — Alien control is a relationship where the first party controls what the second party does in a way that detracts from the personal choice of the controlled agent. The controller needs to be aware of the controlled as an agent subject to control, the controlled agent doesn’t need to be aware of the controller.
  3. The positionality of alien control — Alien control is a zero-sum commodity: if one gains, the other loses. It is about a relative position, not an absolute one.

From these axioms he derives four theorems defining the connection between interference and control:51

  1. Alien control may materialize with interference — Pettit has an inclusive notion of interference that covers both intentional and quasi-intentional interventions. Examples of alien control with interference include hypnosis, brainwashing, intimidation and other forms of manipulation. The alien control is realized via reduction, removal or replacement of options.
  2. Alien control may materialize without interference — Control doesn’t have to be active; it can also be virtual. It is possible for person A to control the choice of person B without any interference. For example when A is watching what B does and is ready to interfere, but only if required. This virtual control doesn’t even have to be intentional on the part of person A.
  3. Non-alien control may materialize without interference — Control is non-alien when person A controls what person B does, but person B isn’t denied the thought “I can do that” and still has the options independently available. Pettit calls co-reasoning one way in which this happens. Interestingly he notes how offers (unlike threats) are always non-alien forms of control unless they can’t be refused.
  4. Non-alien control may materialize with interference — Interference can be non-arbitrary when it is forced to track the avowed interests of the person who is being interfered with. Pettit makes it clear that this is independent from any moral criterion, so that the republican theory isn’t moralized.

Using these theorems Pettit shows that critics like Kramer ignore the most salient explanation of why coercion affects freedom of choice. Unchecked coercion doesn’t just remove options, it also replaces options.52 And Pettit’s response to the friendly giant argument is very similar to Skinner’s. Of course it can be a relief that your fear of interference can lessen if the giant decides to live in a cave, but that still won’t give you any reason for thinking that you are now less unfree than you were previously.53

In the end liberty is defined by Pettit as the absence of alien or alienating control on the part of other persons. This distinguishes the republican theory of freedom from liberal negative theories of freedom on two separable counts:

First, in taking freedom of choice to require the absence of alien control, not just the absence of interference; and second, in taking the freedom of the person to require a systematic sort of protection and empowerment against alien control over selected choices.54


We now live in an information society. More and more of our interactions are technologically intermediated. Our social interactions (through for example Facebook, WhatsApp or Gmail), our economic interactions (via the likes of eBay, Google Maps, Amazon or PayPal) and even our cultural interactions (think of the Kindle, YouTube or Spotify). This means that there is now a third party between the two parties having the interaction. Just by interacting with each other and with the world we are creating data streams which can be captured by those third parties.

The prevalence of technological intermediation is altering the existing power relationships in society. This chapter will show how private companies are taking center stage and are starting to control the way we live.55

Tech’s Frightful Five

In 2006 the world’s five largest companies (by market cap) were Exxon Mobil, General Electric, Microsoft, Citigroup and Bank of America. By April 2017 that list had changed significantly and looked like this: Apple, Alphabet (Google),56 Microsoft, Amazon and Facebook.57 The Economist recently wrote an article about the dominance of these five companies. They collectively made more than 25 billion US dollars in profit in the first quarter of 2017 alone. Amazon manages to capture half of every dollar spent online in the United States.58 With over 2 billion monthly active users, Facebook’s user base is now larger than the population of any country in the world.59 Apple, Google and Microsoft can also call themselves billion-customer global businesses.60

Nowadays these five companies are often described together and in the context of our increased dependence on them. Farhad Manjoo writes in the New York Times about the Frightful Five61 and the role they play in his life: “We are, all of us, in inescapable thrall to one of the handful of American technology companies that now dominate much of the global economy”. Manjoo then plays a game in which he decides in which order he would abandon the Frightful Five. He decides he would abandon Amazon last, because as he writes:

Amazon has become, for my family, more than a mere store. It is my confessor, my keeper of lists, a provider of food and culture, an entertainer and educator and handmaiden to my children.62

You could argue that nobody is forcing you to make use of the services of these five companies. And it is true that you could easily live your life without a smartphone and without being a member of some ‘social’ network, but your non-participation will come at an increasingly high social cost. Jason Ditzian writes how he can no longer make use of the car sharing service that he has been a member of for years if he continues to refuse to create a Facebook account63 and Sander Pleij beautifully describes how he tries to avoid using Facebook but has to capitulate for WhatsApp (owned by Facebook) because his editors at Vrij Nederland, the parents at his children’s school and his rugby club all use the tool to communicate.64 I personally will not forget the time I was waiting all alone at the gym with my sports bag, only to learn that the basketball game had been cancelled (“Didn’t you read the WhatsApp message?”).

Currently the cost of opting out is mostly just awkwardness; soon it will be ostracism.

Surveillance capitalism

How did these companies from Silicon Valley gain their dominance? Shoshana Zuboff is one of the first academic authors to get a clear grasp of the fact that the global architecture of computer intermediation leads to a new and mostly uncontested expression of power (she christens that power ‘Big Other’). In a recent article she describes ‘surveillance capitalism’ as the emergent logic of accumulation in the networked sphere.65

According to Zuboff each era has a dominant logic of accumulation. Mass production-based capitalism, which held sway for most of the 20th century, made way for financial capitalism by the end of the century. Zuboff attempts to illuminate a new logic of accumulation, one that is becoming dominant in today’s networked spaces: surveillance capitalism. Her primary lens for doing that is Google, because it is widely considered to be the pioneer of using big data.66 Her explanation of surveillance capitalism is best understood as a four step process.

The first step is the accumulation and capturing of as much data as possible. Zuboff mentions five sources: data from computer-mediated economic transactions, data from billions of embedded sensors, data from corporate and government databases (often sold by data brokers), data from private and public surveillance cameras and finally user-generated data created by people using services like Gmail, YouTube and most importantly Google’s search engine. This last category contains an interesting feedback loop: a search engine gets better when more people use it, leading to more people using it because it is better.67 Zuboff writes about Google’s hunger for data:

What matters is quantity not quality. Another way of saying this is that Google is ‘formally indifferent’ to what its users say or do, as long as they say it and do it in ways that Google can capture and convert into data.68

This data is ‘extracted’ from the populations who are using Google services. It is important to note that there is an absence of structural reciprocities between Google and its users. This is different from earlier corporations who were always deeply interdependent with the populations they served. Because Google’s clients are advertisers (and not its users) this interdependency is not present.69

The second step is to have data scientists analyse the extracted data (the ‘surveillance assets’) using methodologies like predictive analytics, reality mining and pattern-of-life analysis. Machine learning algorithms are also a new way to find patterns in the data.70

The third step is to use this analysis to create predictions of behavioral patterns. This is what underlies personalised technologies like Google Now, the assistant that seems to know what you need right at the moment that you need it. A mode of continuous experimentation is needed to turn the correlational patterns gleaned from the data into something that can have an immediate effect on a person’s life.71 The need for massive amounts of data to do this successfully was shown by Samsung’s admission that the English version of their personal assistant (Bixby) was delayed because of a lack of “accumulation of big data, which is key to deep learning technology [..]”72

Finally, these behavioral predictions are sold on prediction markets. Currently Google’s main prediction market is built around advertising (in the first quarter of 2017 Alphabet had 20.3 billion US dollars in revenue, with 18 billion US dollars coming from advertising, close to 90%73), but there are many behavioral patterns other than buying intent that could be sold, for example locational behavior or health-related behavior.74

For Zuboff these processes reconfigure the structure of power. There is no longer a centralised power of mass society (usually symbolized as Big Brother); it has been replaced by “distributed opportunities for observation, interpretation, communication, influence, prediction, and ultimately modification of the totality of action.”75 There is no escaping Big Other, with dire consequences:

What is accumulated here is not only surveillance assets and capital, but also rights. This occurs through a unique assemblage of business processes that operate outside the auspices of legitimate democratic mechanisms or the traditional market pressures of consumer reciprocity and choice. It is accomplished through a form of unilateral declaration that most closely resembles the social relations of a pre-modern absolutist authority. In the context of this new market form that I call surveillance capitalism, hyperscale becomes a profoundly anti-democratic threat.

Surveillance capitalism thus qualifies as a new logic of accumulation with a new politics and social relations that replaces contracts, the rule of law, and social trust with the sovereignty of Big Other. It imposes a privately administered compliance regime of rewards and punishments that is sustained by a unilateral redistribution of rights. Big Other exists in the absence of legitimate authority and is largely free from detection or sanction. In this sense Big Other may be described as an automated coup from above.76

Feudalism 2.0

What is the best way to characterize our relationship to the big five technology firms? In his book Data and Goliath Bruce Schneier uses a metaphor:

Our relationship with many of the Internet companies we rely on is not a traditional company–customer relationship. That’s primarily because we’re not customers. We’re products those companies sell to their real customers. The relationship is more feudal than commercial. The companies are analogous to feudal lords, and we are their vassals, peasants, and—on a bad day—serfs. We are tenant farmers for these companies, working on their land by producing data that they in turn sell for profit.77

Schneier is aware that it is just a metaphor78 but he does see us pledging allegiance to Google (with Google Calendar, Google Docs, a Gmail account and an Android phone) or to Apple (iMacs, iPhones, iPads and a backup of everything in the iCloud). “We might prefer one feudal lord to the others. We might distribute our allegiance among several of these companies, or studiously avoid a particular one we don’t like. Regardless, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.”79

Evgeny Morozov chooses the same metaphor to describe the dominance of both Google and Facebook. He considers it “quite likely that Google, Facebook and the rest will eventually run the basic infrastructure on which the world functions” and warns us of a “hyper-modern form of feudalism, whereby those of us caught up in their infrastructure will have to pay [..] for access to anything with a screen or a button.”80

It is already the case that before you are able to use any of the services of companies like Google or Facebook (pledging your allegiance, so to speak) you have to agree to their terms of service. By giving your consent you effectively step into their jurisdiction. The terms are not negotiable; it is a matter of take it or leave it. Google’s terms of service contain policies like: “Google keeps your searches and other identifiable user information for an undefined period of time”, “Google can use your content for all their existing and future services”, “Google can share your personal information with other parties” and “Google may stop providing services to you at any time.”81

Facebook’s research

Facebook has a research department which is constantly running experiments to explore how a change in its services leads to a change in the behavior of its users.82 Facebook conveniently believes that its users, because they have consented to Facebook’s data policy, do not need to give explicit consent to participate in the research. Facebook’s data scientists occasionally publish scientific papers with their findings. Usually what Facebook thinks it has learned from its research is different from the main lessons that the rest of the world takes from it. Let us look at three examples.

In 2012 Facebook researchers published an article in Nature titled A 61-million-person experiment in social influence and political mobilization.83 In it they delivered political mobilisation messages to 61 million users during the 2010 congressional elections. Facebook found out that the messages could directly influence political self-expression and real world voting behavior and that the effect of social transmission on real world voting was greater than the effect of the messages. We found out that delivering just a single extra message in the news feed of 61 million people “increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes.”84 If it wanted to, Facebook could increase voter turnout significantly.

In 2013 Facebook researchers published a paper at a conference for the advancement of artificial intelligence. It was called Self-Censorship on Facebook.85 By keeping track of what 3.9 million users typed into Facebook and then decided to delete before posting, Facebook found out that 71% of the users exhibited some form of last-minute self-censorship during the 17 days of tracking (“[The] users produced content, indicating intent to share, but ultimately decided against sharing”86) and that people with more boundaries to regulate censored more (“[Current] solutions on Facebook do not effectively prevent self-censorship caused by boundary regulation problems”87). We found out that Facebook is capable of tracking what we type even before we press send and has no qualms about looking at exactly the data that we decided against sharing after all. We also learned that Facebook is actively researching what inhibits us from sharing more with the platform.

And in 2014 Facebook researchers published an article titled Experimental evidence of massive-scale emotional contagion through social networks in the Proceedings of the National Academy of Sciences (PNAS).88 For this research they manipulated the news feed of 689,003 people for a week to show either more positive emotional content from their friends or more negative emotional content from their friends. Facebook found out that massive-scale emotional contagion could happen in social networks: “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.”89 It also conveniently disproved a criticism that is often aimed at Facebook: that positive posts by friends on Facebook affect us negatively. We found out that Facebook can manipulate our emotions at will. This particular paper led to a lot of backlash for Facebook. The editors of PNAS expressed their concern about whether the participants in the study had properly opted in before they were made to feel less positive90 and journalists found out that the subjects likely included children between 13 and 18 years old and that Facebook had only updated its terms of service to include ‘research’ as one of the ways it can use the data of its users after the research had already been done.91

Facebook’s day-to-day manipulation goals are much more mundane than this research might make you think. They are mostly interested in manufacturing habitual use of their service.92 More time spent on the platform is more money earned for Facebook. So we can safely assume that Facebook is currently not actively trying to get people to vote, not storing the texts that people have backspaced and not trying to induce particular emotions in people. Moreover, Facebook manipulated its (unwilling) participants only for short periods of time, and the participants were only a tiny percentage of the 2 billion users that could be manipulated.

Here Facebook has been chosen as an example, but we have to assume that the other information giants have similar potential powers for alien control.93 There are two important things to note about examples like these. Firstly, each of these three examples shows much more potential for interference than actual interference. Secondly, we often don’t realise that this potential for interference is present.


There isn’t much academic work that uses a republican lens to look at the way corporate technology shapes our society and our democracy. We will look at two articles that apply a republican framework to surveillance and privacy respectively, and see what we can learn by translating their argumentation to the world of the frightful five.

How freedom as noninterference doesn’t address surveillance (capitalism)

J. Matthew Hoye and Jeffrey Monaghan use neo-republicanism to give a normative critique of surveillance in relation to freedom.94 Even though their focus is mostly on government surveillance,95 some of their argumentation can help us form a republican perspective on surveillance capitalism and its consequences.

For instance, they argue that “regarding surveillance the neo-republican concept of freedom does not suffer the same conceptual impediments as liberalism.”96 Hoye and Monaghan are convinced that a liberal critique of surveillance, rooted in a privacy argument that tries to balance state-protected civil liberties with state intrusion, can’t address a broader conceptualization of surveillance as “a governing rationality — or governmentality — for the entire spectrum of social conduct.”97 The focus on the balancing act between individual rights and state interference then leaves space for the state to circumvent the critique of interference by declaring “that information is being collected, stored, distributed, and analysed, but interference is kept to a minimum.”98

A similar dynamic is taking place when looking at the role of the information giants. The discourse about these companies is usually framed from a perspective of individuals making the free choice to either consent to the terms of these services or to abstain from their use. Who isn’t free in this framing? The systemic power imbalance does not get addressed.

What if we don’t know about the potential for arbitrary control?

Andrew Roberts gives us a republican account of the value of privacy.99 He is convinced that the republican concept of domination can provide a solid foundation to account for the value of privacy. Liberals and republicans do not differ much in their perspective on privacy as a protection against interference from others and as a prerequisite for leading an autonomous life.100 The two accounts start to diverge when a person is not aware of their loss of privacy.

To illustrate this, Roberts uses the example of somebody who writes potentially embarrassing information in their personal diary. Imagine that a second person takes the information in this diary without the writer’s consent and shares it with a third person. From the perspective of freedom as the absence of interference it is quite difficult to label this situation as a loss of freedom for the writer. As long as the diarist is aware of the loss of their privacy, a liberal can explain the harm in terms of positive freedom. Roberts quotes Beate Rössler, who argues that privacy is also valuable because control over your self-presentation is an intrinsic part of your self-understanding as an autonomous individual.101 Somebody sharing your information against your wishes erodes this control. But what if the person with the diary doesn’t know that their privacy was breached? In that case a liberal perspective will not be able to argue for a loss of freedom, but a republican perspective can. As Roberts writes:

While liberals are generally concerned about the effect that a loss of privacy will have on the autonomy of the subject, the focus of republican concern will be any unchecked inequality in power that is created by such a loss. Republicans will say the loss of privacy we suffer when others watch or acquire information about us is harmful to the extent that it provides others with power to interfere in our decisions that we do not control – the power to remove, replace or misrepresent options that would be available to us had we not suffered a loss of privacy. This harm arises whether or not we are aware that others are watching or acquiring information about us.102

Now it becomes clear why it is so hard to criticize technology giants like Google and Facebook. From our dominant liberal perspective on the world we can only see a problem when these companies actually interfere in our lives, and can only argue against their surveillance capitalist methodology to the extent that we are aware of the fact that they are using it. While looking at Facebook’s research experiments we saw that the potential for interference is much bigger than the actual interference and that we are very much unaware of this potential. Our language around autonomy makes us blind to the power for arbitrary control that these companies have. We aren’t free, but we barely know it.

The death of deliberation

Fulfilling a republican political ideal of freedom and making sure that state interference involves as little arbitrariness as possible demands a particular organisation of the state: a constitutional democracy.103

Pettit lists a set of constitutional requirements that help to diminish arbitrariness. The instruments of the state need to be non-manipulable through making sure there is an empire of law rather than an empire of men, through dispersion of legal powers and through making the law resistant to the majority will.104

Any system of law leaves the decision-making power in the hands of certain public authorities. It is therefore important to make sure that these decisions are made in a way that rules out arbitrary power. Pettit writes: “The promotion of freedom as non-domination requires, therefore, that something be done to ensure that public decision-making tracks the interests and the ideas of those citizens whom it affects.”105 Traditionally that tracking of the interests is assured through some form of collectivised consent. This isn’t enough for Pettit as it is obvious to him that certain decisions may have majority support while representing very arbitrary interference in the lives of minorities. According to Pettit “non-arbitrariness requires not so much consent as contestability.”106 And this contestability needs to be effective. This requires a certain democratic profile, one that is contestatory rather than consensual. Pettit says that for this to be the case three conditions need to be satisfied: there should be a basis for contestation, there should be a channel available by which decisions may be contested and a suitable forum should exist for hearing contestations.107

The basis for contestation should be debate-based rather than bargain-based: decisions should be made in a deliberative way. The democracy can’t just be deliberative; it also needs to be inclusive, with all stakeholder groupings being represented. The democracy will also need to respond appropriately to any contestations raised against decisions.108

We can now see how our technological predicament is democratically problematic in two distinct ways. The first is that the frightful five are starting to have a level of governance over our lives that is very similar to state interference, but without any of the constitutionally democratic controls. A company like Facebook, for example, has put a lot of effort into its corporate structure to make sure it is an empire of a single man (founder Mark Zuckerberg), rather than an empire of law.109 And while European legislation forces the companies that want to collect and use our data into getting our unambiguous and freely given consent,110 there doesn’t seem to be any way to seriously contest the decisions that these companies make about our lives.111

The second problem is that these companies are eroding deliberative democracy itself. This is because of the level of personalisation that is done by these platforms to try and capture our attention. Maciej Cegłowski writes:

The feeds people are shown on these sites are highly personal. What you see in your feed is algorithmically tailored to your identity and your interaction history with the site. No one else gets the same view. This has troubling implications for democracy, because it moves political communication that used to be public into a private space. [..] Obviously, in this situation whoever controls the algorithms has great power. Decisions like what is promoted to the top of a news feed can swing elections. Small changes in UI can drive big changes in user behavior. There are no democratic checks or controls on this power, and the people who exercise it are trying to pretend it doesn’t exist.112

Robin Celikates calls this “the outsourcing of self-determination — the reduction of democratic control to editorial control of norms authored by others.”113 It is impossible to be a free citizen in such a situation.114

An abject condition of torpor and sluggishness

One argument that republicans often make for pursuing freedom from domination concerns what it does to a people when they are subjected to masters with arbitrary power. This is a worry about the psychological impact. Skinner, for example, writes about the concerns of Sallust and Tacitus:

When a whole nation is inhibited from exercising its highest talents and virtues, these qualities will begin to atrophy and the people will gradually sink into an abject condition of torpor and sluggishness.115

Servitude inevitably breeds servility. Skinner points to the classical belief that we can only expect greatness from people who live in truly free states. You only have to look at the European peasantry or the subjects of the Sultan at Constantinople to see that they “have become so discouraged and dispirited by the experience of living under arbitrary power that they have become totally supine and base, and nothing can now be expected of them.”116

After reading these descriptions of what a lack of (republican) freedom can do to people, and after assessing the current direction our technological world is moving in, it becomes easy to see how the 2008 movie WALL·E117 has cutting predictive qualities. As Rod Dreher writes about the future in which the movie is situated:

Every possible need of its inhabitants is taken care of by high technology. [..] They are constantly entertained, and fed by junk food. And they all look happy. They have been thoroughly infantilized [..] and have grown completely dependent on the BNL Corporation, the massive company that, it appears, became the government back on Earth, and whose priorities —sell crap to consumers, and make them totally dependent on their own desires— led to the catastrophe on Earth. BNL is totalitarian, but it’s the softest totalitarianism imaginable: they’ve taken over by fulfilling every desire of the populace, a populace that (apparently) came to think of politics as chiefly a matter of ordering the polis around the telos of satisfying human desires.118

Frederick Douglass wrote in 1855 that to “make a contented slave, you must make a thoughtless one.”119 Is the way Google turns us into consumption serfs the 21st-century manifestation of the contented slave? In a world that is now again rapidly becoming less equal in economic terms,120 we can’t afford to stick with only a liberal perspective on freedom; we have to start working explicitly towards republican freedom: freedom from domination, freedom from arbitrary control.

Conclusion: the need for antipower

So what are we to do? Pettit argues that from a republican point of view three broadly different, but consistent, strategies come to mind:

We may compensate for imbalances by giving the powerless protection against the resources of the powerful, by regulating the use that the powerful make of their resources, and by giving the powerless new, empowering resources of their own. We may consider the introduction of protective, regulatory, and empowering institutions.121

Pettit sees a role not only for formal state initiatives; he also thinks that other forms of organisation like trade unions, consumer movements, rights organizations, women’s groups, civil liberties groups and even competitive market forces have a crucial role to play in increasing the range of undominated freedom.122

Though much more work will need to be done, we can already attempt a speculative sketch of how these strategies could be applied in a technologically intermediated world.

Protection: Europe’s California effect and encryption

Pettit has the most faith in this particular antipower: “The protection of the individual is mainly ensured in our society by the institutions of a nonthreatening defense system and a nonvoluntaristic regime of law.”123 These laws will then have to constitute a rule of law by being general, transparent, nonretroactive and coherent.124 So how does the law try to regulate the data giants?

Currently there seem to be two approaches to regulating the collection and use of data.125 The first, prevalent in the United States, focuses on how the data is used by the organisations that collect it. It focuses on reducing the harm that is done, for example through self-regulatory Fair Information Practices or by creating protection through consumer law. The second approach, more European, focuses on fundamental human rights and thus doesn’t just look at the use of the data, but already starts regulating at the collection phase. This second approach is behind Europe’s soon-to-be-introduced General Data Protection Regulation (GDPR), which aims to protect the right of natural persons to the protection of their personal data.126

In a globalized world we have to worry that companies will simply move their operations to the countries with the fewest rules around data: a race to the bottom. This is traditionally called the “Delaware Effect” (named after what happened with corporate chartering requirements in the US, which are lowest in Delaware). In Trading Up, David Vogel shows that there can be another regulatory effect, a race to the top. Protective regulations in particular can move in the direction of stricter regulatory standards. Vogel calls this the “California Effect”, after the way in which California’s stricter emission standards for cars helped raise the federal minimum requirements. He thinks that this effect can also take place between national legislations as long as the right market and political forces are in play.127 It does seem to be the case that Europe’s stricter regulatory framework around data could lead to a California effect. That is mainly because Europe is a big enough market for creators of information services to make changes to their products in order to keep access to the European market. As a way of protecting citizens the GDPR already functions as an aspirational piece of law for other countries.128

All the legislation around data and information focuses on either reducing the harm done (a very liberal point of view) or on protecting the fundamental human right to privacy (a slightly more republican way of looking at the world). Unfortunately neither approach directly addresses the tremendous power imbalance, leaving opportunities for arbitrary control and thus domination.

Another way of protecting people against the power of the information giants is a technological one: stimulating and using Privacy Enhancing Technologies (PETs). Encryption is the most important PET.129 It makes (communications) data available only to the persons holding the key. The Snowden disclosures led to a push to get more of our communications encrypted, making it harder for the secret services to intercept the traffic.130 But well-designed encryption and data minimisation schemes can also help immunise us against corporate domination. Compare for example what the chat app Signal knows about its users (just the phone number, the account creation date and the last connection date, nothing else131) with what apps like Google Allo and WhatsApp know (occasionally the contents of the messages, the complete social graph of their users, IP addresses, location data, connection times and more).132 It is obvious that services like Signal provide much more protection as antipower than the services provided by the frightful five.

Regulation: dispersion of power through antitrust

When making sure that the resources of the powerful are regulated, the focus is usually on those who are in government. This is why we have a set of rule-of-law constraints like regular elections and limited tenure, democratic discussions and the separation of powers.133 Pettit warns that those who are in economically privileged positions can also dominate others. This requires a different form of regulation. One type of measure he mentions is measures against monopoly power.134 Classic (neo)liberal thinking doesn’t like government interference in private corporate affairs. From a republican point of view antitrust measures are less problematic. First of all because republicanism doesn’t consider regulatory interference by the state to be necessarily as bad as the private interference against which it guards, but mostly because antitrust manages to secure a benefit that is more important than noninterference: non-domination.135

Initially governments were very hesitant to apply antitrust measures to the information monopolies, but in the last couple of years there has been an increasing understanding that something needs to be done to try and disperse the power.136 Even The Economist, which has argued in the past that breaking up the tech giants was too drastic an action, now has cause for concern: “Internet companies’ control of data gives them enormous power. Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy’ [..] A new approach is needed.”137 It has two ideas that could help with this new approach.138

Firstly, it is important to realise that monopolies are created through acquisition. When regulators currently assess whether a merger is acceptable they mostly look at size. They will need to start looking at a firm’s data assets to see how the deal will impact the freedom of consumers. The fact that companies are willing to spend billions of dollars on the acquisition of companies with barely any revenue (Facebook, for example, buying WhatsApp for $16 billion139) clearly shows the mechanism of incumbent companies buying nascent threats.

Secondly, it will be essential to loosen the grip that these companies have over the data of their users. One way of doing this could be to increase transparency. A more radical approach would be to force companies like Google and Facebook to open their data vaults and turn their data into public infrastructure. This is what Evgeny Morozov argues for:

All of the nation’s data [..] could accrue to a national data fund, co-owned by all citizens (or, in the case of a pan-European fund, by Europeans). Whoever wants to build new services on top of that data would need to do so in a competitive, heavily regulated environment while paying a corresponding share of their profits for using it.140

Jonathan Taplin adds a third way of regulating. This is to remove the “safe harbor” clause from the Digital Millennium Copyright Act. This clause defines the platforms as mere conduits and limits their liability when it comes to copyright. He argues that this is what allows services like Facebook and Google’s YouTube to free ride on the content which is produced by others. Taplin clearly sees the relationship between monopolies and governance: “Woodrow Wilson was right when he said in 1913, ‘If monopoly persists, monopoly will always sit at the helm of the government.’ We ignore his words at our peril.”141

Empowerment: free and federated technology

When Pettit writes about empowerment as the third antipower, he mainly means welfare-state initiatives and refers to things like universal access to education, services like public transportation and communication, and measures for the vulnerable like social security, medical care or legal aid.142 Could these types of initiatives have technological equivalents?

Software usually comes with a license prohibiting you from copying it and sharing it with others. Online services come with terms that you have to accept before you get to use the product. There is one category of software that doesn’t come with these types of limitations and even explicitly promotes freedom: free software.143 Free software guarantees everyone equal rights to the program through a license that explicitly gives the user the following four freedoms:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.144

This way of looking at software can radically decrease the dependence on any particular company or even actor and therefore can truly enhance (republican) freedom.

When free software is used as a service on the web, we can still easily fall into a dependency trap. If we were all communicating and sharing knowledge through the same service provider, even one that uses free software, then that provider would still have a level of arbitrary control. It is therefore important that these technologies are implemented in a decentralised and federated manner. Email is the canonical example of a standards-based technology that can be implemented by any party (you can run your own mail server, use a web host or use a dedicated web-based email service) and still allows for interoperability. Multiple free software projects attempt to do the same in domains like social networking, voice communications, file sharing or (personal) publishing.145 These usually allow identity to federate over multiple instances of the same software, increasing your independence from any one particular service provider.146

The state can promote the use of free software in different ways. Richard Stallman, the founding father of free software, argues that the state has a moral obligation to do so and introduces a practical set of measures. First of all, the state should have a clear educational policy of teaching students only the use of free software rather than the use of any proprietary alternatives. Next, the state can advance the use of free software by never requiring non-free software, by distributing only free software and by making use of free formats and protocols. It should also achieve computational sovereignty by migrating to free software and by truly controlling its own computers. Finally, the state should encourage the development of free software and discourage the development of unfree software.147


“Alphabet Announces First Quarter 2017 Results.” Alphabet Investor Relations, April 2017.

Berlin, Isaiah. “Two Concepts of Liberty.” In Liberty: Incorporating ’Four Essays on Liberty’, edited by Henry Hardy, 166–217. Oxford: Oxford University Press, 2002.

Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Nature 489, no. 7415 (September 2012): 295–98. doi:10.1038/nature11421.

Carter, Ian. “How Are Power and Unfreedom Related?” In Republicanism and Political Theory, edited by Cecile Laborde and John Maynor, 58–82. Malden: Blackwell Publishing, 2009.

Cegłowski, Maciej. “Build a Better Monster: Morality, Machine Learning, and Mass Surveillance.” Idle Words, April 2017.

Celikates, Robin. “Freedom as Non-Arbitrariness or as Democratic Self-Rule? A Critique of Contemporary Republicanism.” In To Be Unfree: Republicanism and Unfreedom in History, Literature and Philosophy, 37–54. Bielefeld: transcript Verlag, 2014.

“Commission Fines Facebook €110 Million for Providing Misleading Information About WhatsApp Takeover.” European Commission Press Releases, May 2017.

“Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving Illegal Advantage to Own Comparison Shopping Service.” European Commission Press Releases, June 2017.

Das, Sauvik, and Adam Kramer. “Self-Censorship on Facebook.” In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media, 120–27, 2013.

De Zwart, Hans. “Why I Have Deleted My Facebook Account.” Medium Massage, March 2011.

Ditzian, Jason. “Facebook Goes Full ‘Black Mirror’: How Facebook Is Making Membership a Prerequisite to Everyday Existence.” The Bold Italic, March 2017.

“Doing Business with Argentina Just Got Easier.” TrustArc Blog, January 2017.

Douglass, Frederick. My Bondage and My Freedom. Newhaven: Yale University Press, 2014.

Dreher, Rod. “‘Wall-E’: Aristotelian, Crunchy Con.” Crunchy Con, July 2008.

“Economics & Computation.” Facebook Research. Accessed July 6, 2017.

Economist, The. “The World’s Most Valuable Resource Is No Longer Oil, but Data.” The Economist, May 2017.

“Encrypt All The Things.” Accessed July 22, 2017.

Epstein, Robert, and Ronald E. Robertson. “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.” Proceedings of the National Academy of Sciences 112, no. 33 (August 2015): E4512–E4521. doi:10.1073/pnas.1419828112.

Eyal, Nir. Hooked: How to Build Habit-Forming Products. New York: Penguin, 2014.

“Facebook Reports First Quarter 2016 Results and Announces Proposal for New Class of Stock.” Facebook Investor Relations, April 2016.

“Facebook to Acquire WhatsApp.” Facebook Newsroom, February 2014.

Finley, Klint. “Encrypted Web Traffic More Than Doubles After NSA Revelations.” WIRED, May 2014.

“Grand Jury Subpoena for Signal User Data, Eastern District of Virginia.” Open Whisper Systems, October 2016.

Hill, Kashmir. “Facebook Added ’Research’ To User Agreement 4 Months After Emotion Manipulation Study.” Forbes, June 2014.

Hobbes, Thomas. Leviathan. Edited by Richard Tuck. Cambridge: Cambridge University Press, 1996.

Hoye, J. Matthew, and Jeffrey Monaghan. “Surveillance, Freedom and the Republic.” European Journal of Political Theory, October 2015, 1474885115608783. doi:10.1177/1474885115608783.

“Human Computer Interaction & UX.” Facebook Research. Accessed July 6, 2017.

Kramer, A. D. I., J. E. Guillory, and J. T. Hancock. “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” Proceedings of the National Academy of Sciences 111, no. 24 (June 2014): 8788–90. doi:10.1073/pnas.1320040111.

Kramer, Matthew H. “Liberty and Domination.” In Republicanism and Political Theory, edited by Cecile Laborde and John Maynor, 31–58. Malden: Blackwell Publishing, 2009.

Laborde, Cecile, and John Maynor. “The Republican Contribution to Contemporary Political Theory.” In Republicanism and Political Theory, edited by Cecile Laborde and John Maynor, 1–28. Malden: Blackwell Publishing, 2009.

Lee, Micah. “Battle of the Secure Messaging Apps: How Signal Beats WhatsApp.” The Intercept, June 2016.

Manjoo, Farhad. “Tech’s Frightful Five: They’ve Got Us.” The New York Times, May 2017.

Morozov, Evgeny. “Tech Titans Are Busy Privatising Our Data.” The Guardian, April 2016.

———. “To Tackle Google’s Power, Regulators Have to Go After Its Ownership of Data.” The Guardian, July 2017.

Nowak, Mike, and Guillermo Spiller. “Two Billion People Coming Together on Facebook.” Facebook Newsroom, June 2017.

Page, Larry. “G Is for Google.” Official Google Blog, August 2015.

Paley, William. The Principles of Moral and Political Philosophy. Indianapolis: Liberty Fund, 2002.

Pettit, Philip. “Freedom as Antipower.” Ethics 106, no. 3 (1996): 576–604.

———. “Freedom in the Market.” Politics, Philosophy & Economics 5, no. 2 (June 2006): 131–49. doi:10.1177/1470594X06064218.

———. “Republican Freedom: Three Axioms, Four Theorems.” In Republicanism and Political Theory, edited by Cecile Laborde and John Maynor, 102–30. Malden: Blackwell Publishing, 2009.

———. Republicanism: A Theory of Freedom and Government. Oxford: Oxford University Press, 1997.

Piketty, Thomas. Capital in the Twenty-First Century. Cambridge: Harvard University Press, 2014.

Pleij, Sander. “Facebookisme: Het Nieuwe Totalitaire Bewind.” Vrij Nederland, April 2017.

Ramo, Joshua Cooper. “For Apple, Facebook and Amazon, ’Network Power’ Is the Key to Success.” Fortune, July 2016.

“Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” April 2016.

Roberts, Andrew. “A Republican Account of the Value of Privacy.” European Journal of Political Theory 14, no. 3 (July 2015): 320–44. doi:10.1177/1474885114533262.

Sax, Marijn. “Big Data: Finders Keepers, Losers Weepers?” Ethics and Information Technology 18, no. 1 (March 2016): 25–31. doi:10.1007/s10676-016-9394-0.

Schneier, Bruce. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York: W. W. Norton & Company, 2015.

Shin, Ji-hye. “Bixby’s English Version Delayed Due to Big Data Issue.” The Korea Herald, July 2017.

Skinner, Quentin. “A Third Concept of Liberty.” Proceedings of the British Academy 117 (2002): 237–68.

———. “Freedom as the Absence of Arbitrary Power.” In Republicanism and Political Theory, edited by Cecile Laborde and John Maynor, 83–101. Malden: Blackwell Publishing, 2009.

———. Liberty Before Liberalism. Cambridge: Cambridge University Press, 1998.

Stallman, Richard. “Measures Governments Can Use to Promote Free Software, And Why It Is Their Duty to Do so.” GNU Project – Free Software Foundation. Accessed July 19, 2017.

Stanton, Andrew. “WALL·E,” 2008.

Taplin, Jonathan. “Is It Time to Break Up Google?” The New York Times, April 2017.

“Terms of Service;Didn’t Read: Google.” Terms of Service;Didn’t Read. Accessed July 6, 2017.

Van Hoboken, Joris. “From Collection to Use in Privacy Regulation? A Forward-Looking Comparison of European and US Frameworks for Personal Data Processing.” In Exploring the Boundaries of Big Data, 231–59. WRR-Verkenningen 32. The Hague/Amsterdam: WRR/Amsterdam University Press, 2016.

Verma, Inder M. “Editorial Expression of Concern: Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” Proceedings of the National Academy of Sciences 111, no. 29 (July 2014): 10779–9. doi:10.1073/pnas.1412469111.

Vogel, David. Trading Up: Consumer and Environmental Regulation in a Global Economy. Cambridge: Harvard University Press, 2009.

“What Is Free Software?” GNU Project – Free Software Foundation. Accessed July 19, 2017.

Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1 (March 2015): 75–89. doi:10.1057/jit.2015.5.


  1. De Zwart, “Why I Have Deleted My Facebook Account.”

  2. A highly inappropriate word in this context as we will come to learn later.

  3. ‘Liberty’ and ‘freedom’ will be used as interchangeable concepts in this text.

  4. Taplin, “Is It Time to Break Up Google?”

  5. Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data.”

  6. Berlin, “Two Concepts of Liberty,” 169.

  7. Ibid., 169.

  8. Ibid., 179–80.

  9. Ibid., 169.

  10. Ibid., 170.

  11. For ease of reading I will often write ‘republican’ when talking about neo-republican thinking.

  12. Skinner, Liberty Before Liberalism, 23.

  13. Ibid., 36.

  14. Ibid., 41.

  15. Skinner, “A Third Concept of Liberty,” 247–48.

  16. Pettit, Republicanism, 5.

  17. Ibid., 23.

  18. Ibid., 52.

  19. Ibid., 56.

  20. Ibid., 26.

  21. Ibid., 31.

  22. Ibid., 35.

  23. Ibid., 39.

  24. Skinner, Liberty Before Liberalism, 119; and Pettit, Republicanism, 67–68.

  25. Paley, The Principles of Moral and Political Philosophy.

  26. Pettit, Republicanism, 73–78.

  27. Paley, The Principles of Moral and Political Philosophy, 315.

  28. Pettit, Republicanism, 73–74.

  29. Paley, The Principles of Moral and Political Philosophy, 312.

  30. Pettit, Republicanism, 75–76.

  31. Paley, The Principles of Moral and Political Philosophy, 315.

  32. Pettit, Republicanism, 48.

  33. Ibid., 78.

  34. Carter, “How Are Power and Unfreedom Related?”

  35. Kramer, “Liberty and Domination.”

  36. Hobbes, Leviathan, 145.

  37. Laborde and Maynor, “The Republican Contribution to Contemporary Political Theory,” 5–6.

  38. Ibid., 6.

  39. Kramer, “Liberty and Domination,” 34.

  40. Ibid., 39.

  41. Ibid., 39.

  42. Ibid., 40.

  43. Ibid., 41.

  44. Ibid., 41.

  45. Ibid., 47.

  46. Ibid., 49.

  47. Skinner, “Freedom as the Absence of Arbitrary Power,” 89.

  48. Ibid., 90.

  49. Ibid., 93–94.

  50. Pettit, “Republican Freedom,” 104–10.

  51. Ibid., 110–18.

  52. Ibid., 122.

  53. Ibid., 124–25.

  54. Ibid., 104.

  55. This chapter embodies a very Western-European and Northern-American view of the world. I am aware of that. For argument’s sake let’s assume I am writing about the lives of average people in Amsterdam in the Netherlands.

  56. In October 2015 Google underwent a corporate restructuring, creating Alphabet as a new public holding company with Google as one of its subsidiaries (incidentally replacing the nonsensical “Don’t be evil” motto with the slightly improved “Do the right thing”). For ease of understanding I will continue to refer to both Alphabet and Google as “Google”. For more information on the restructuring, see: Page, “G Is for Google.”

  57. Taplin, “Is It Time to Break Up Google?”

  58. Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data.”

  59. Nowak and Spiller, “Two Billion People Coming Together on Facebook.”

  60. Ramo, “For Apple, Facebook and Amazon, ’Network Power’ Is the Key to Success.”

  61. There isn’t a common name for these five companies yet. They are also called “the internet giants” (for obvious reasons) or “the stacks” (for their ability to create integrated ecosystems).

  62. Manjoo, “Tech’s Frightful Five.”

  63. Ditzian, “Facebook Goes Full ‘Black Mirror’.”

  64. Pleij, “Facebookisme.”

  65. Zuboff, “Big Other,” 75.

  66. Ibid., 77.

  67. Ibid., 78–79.

  68. Ibid., 79.

  69. Ibid., 80.

  70. Ibid., 80–81.

  71. Ibid., 83–85.

  72. Shin, “Bixby’s English Version Delayed Due to Big Data Issue.”

  73. “Alphabet Announces First Quarter 2017 Results.”

  74. You could make an ethical argument that these companies aren’t justified in selling these insights, see for example: Sax, “Big Data.”

  75. Zuboff, “Big Other,” 82.

  76. Ibid., 83.

  77. Schneier, Data and Goliath, 58.

  78. I am deeply uncomfortable comparing our current situation, living in a technologically intermediated society, with serfdom, let alone with slavery. However, I do believe that there are similar mechanisms of dependence and control leading to arbitrary power. Structurally we can compare them; in their consequences they are of course incomparable.

  79. Ibid., 58.

  80. Morozov, “Tech Titans Are Busy Privatising Our Data.”

  81. “Terms of Service;Didn’t Read.”

  82. “Economics & Computation”; “Human Computer Interaction & UX.”

  83. Bond et al., “A 61-Million-Person Experiment in Social Influence and Political Mobilization.”

  84. Ibid., 297.

  85. Das and Kramer, “Self-Censorship on Facebook.”

  86. Ibid., 122.

  87. Ibid., 127.

  88. Kramer, Guillory, and Hancock, “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.”

  89. Ibid., 8788.

  90. Verma, “Editorial Expression of Concern.”

  91. Hill, “Facebook Added ’Research’ To User Agreement 4 Months After Emotion Manipulation Study.”

  92. Through the four phases of the “Hook Model”: trigger, action, variable reward, and investment. See: Eyal, Hooked.

  93. For example with Google there is what the researchers call the “search engine manipulation effect”, see: Epstein and Robertson, “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.”

  94. Hoye and Monaghan, “Surveillance, Freedom and the Republic.”

  95. There is a lot to say about the powers that governments are accruing through their use of technology and data and how that impacts a republican conception of freedom, but this is outside the scope of this thesis.

  96. Ibid., 3.

  97. Ibid., 4.

  98. Ibid., 11.

  99. Roberts, “A Republican Account of the Value of Privacy.”

  100. Ibid., 328–29.

  101. Ibid., 333.

  102. Ibid., 335.

  103. Pettit, Republicanism, chap. 6.

  104. Ibid., 172–83.

  105. Ibid., 184.

  106. Ibid., 184–85.

  107. Ibid., 186–87.

  108. Ibid., 200.

  109. “Facebook Reports First Quarter 2016 Results and Announces Proposal for New Class of Stock.”

  110. “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” recital 32.

  111. It is also possible to raise some serious doubts as to whether any of the consent given to these information giants is truly given freely.

  112. Cegłowski, “Build a Better Monster.”

  113. Celikates, “Freedom as Non-Arbitrariness or as Democratic Self-Rule? A Critique of Contemporary Republicanism,” 50.

  114. It has to be said that this new publication and distribution method, often described as the ‘filter bubble’, is of course also deeply problematic from a liberal point of view: It is an example of actual interference.

  115. Skinner, “A Third Concept of Liberty,” 258.

  116. Ibid., 261.

  117. Stanton, “WALL·E.”

  118. Dreher, “‘Wall-E’.”

  119. Douglass, My Bondage and My Freedom, 254.

  120. Piketty, Capital in the Twenty-First Century, 571.

  121. Pettit, “Freedom as Antipower,” 589–90.

  122. Ibid., 592.

  123. Ibid., 590.

  124. Ibid., 590.

  125. Van Hoboken, “From Collection to Use in Privacy Regulation? A Forward-Looking Comparison of European and US Frameworks for Personal Data Processing.”

  126. “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation).”

  127. Vogel, Trading Up, 259–60.

  128. See for example: “Doing Business with Argentina Just Got Easier.”

  129. Schneier, Data and Goliath, 215.

  130. See for example Finley, “Encrypted Web Traffic More Than Doubles After NSA Revelations”; and “Encrypt All The Things.”

  131. “Grand Jury Subpoena for Signal User Data, Eastern District of Virginia.”

  132. Lee, “Battle of the Secure Messaging Apps.”

  133. Pettit, “Freedom as Antipower,” 590–91.

  134. Ibid., 591.

  135. Pettit, “Freedom in the Market,” 145.

  136. The EU, for example, has recently handed out fines to Facebook for giving misleading information when acquiring WhatsApp and to Google for abusing its power with their Google shopping results (see: “Commission Fines Facebook €110 Million for Providing Misleading Information About WhatsApp Takeover,” “Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving Illegal Advantage to Own Comparison Shopping Service.”).

  137. Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data.”

  138. Ibid.

  139. “Facebook to Acquire WhatsApp.”

  140. Morozov, “To Tackle Google’s Power, Regulators Have to Go After Its Ownership of Data.”

  141. Taplin, “Is It Time to Break Up Google?”

  142. Pettit, “Freedom as Antipower,” 591–92.

  143. ‘Free’ here refers to liberty not to price, or as is usually said: “Free as in free speech, not as in free beer”.

  144. “What Is Free Software?”

  145. Projects like GNU social, FreedomBox, Nextcloud or IndieWeb.

  146. Looking at free and federated software as an antipower is based on the assumption that people have access to the internet. Universal access to the internet would probably be one of the things that Pettit would now put under his heading of access to communication.

  147. Stallman, “Measures Governments Can Use to Promote Free Software, And Why It Is Their Duty to Do so.”

Draconian anti-terrorism measures turn us into scared and isolated people

We are becoming more and more scared. Images of terror attacks influence our daily decisions. A friend of mine gets nervous when he has to travel past an airport by train, and another friend surprised me by telling me that this year he stayed home during gay pride. Several people have told me of times when they crossed the street to avoid a nervous-looking man with a Middle-Eastern appearance carrying a backpack. According to recent research by Statistics Netherlands (CBS), more than 25 percent of Dutch citizens are occasionally scared of becoming a victim of a terror attack in the Netherlands.

During the past years, a number of terrifying attacks have taken place in Western Europe. From a rational point of view, the chances of dying in such an attack are negligible: vastly smaller than those of dying in a traffic accident. But it feels different. The apparent randomness, and landmark locations like London Bridge, make us feel that it might as well have been us who were the victims.

It is understandable that politicians talk tough after a terror attack, especially since the legitimacy of the government, tasked as it is with taking care of its citizens, is at stake. "Enough is enough", said British Prime Minister Theresa May after the third attack on British soil within a few months. According to her, internet companies can no longer be sanctuaries for extremist content, the police must be given more extensive powers, and punishments for terrorism must be made more severe. Action is being taken in the Netherlands as well: the Senate approved a bill that gives the secret services a dragnet surveillance power, allowing them in the near future to eavesdrop on large numbers of innocent citizens. These measures, and the usual call for vigilance, appear to be aimed at reducing the symptoms instead of solving the problem. Everyone understands that it is impossible to always prevent someone from driving into a crowd. Paradoxically, these measures claim victims of their own: innocent citizens are accused of crimes they did not commit, and we restrict our own liberties.

Examples abound: the hipster members of a Swedish beard club who were contacted by the police because they, like ISIS, flew a black flag, or the alleged explosives in the home of a terrorism suspect that turned out to be shawarma spices.

Muslims, or rather people who look like they are from the Middle East, become suspects disproportionately often. Ahmed Mohamed, a 14-year-old American high school student, proudly brought a self-made clock to his school, only to be removed from school in handcuffs because his teacher thought it was a bomb. Before the flight home from their holidays in Paris, Faisal and Nazia Ali were removed from the plane because they were perspiring and had used the word "Allah". For each of these examples mentioned in the media, there are probably many more that don’t receive any attention.

The recently approved dragnet surveillance powers will only increase the number of false accusations. “Data mining is probably an ineffective method for preventing terror attacks”, wrote the Dutch Scientific Council for Government Policy (WRR) in their 2016 report “Big data in a free and safe society” (“Big data in een vrije en veilige samenleving”). “Because each terror attack is unique, it is nearly impossible to create an accurate profile. Combined with the small number of attacks, this results in an unusably high error rate.”
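The WRR’s point about error rates is an instance of the familiar base-rate problem. A back-of-the-envelope calculation, with numbers that are purely hypothetical and not taken from the report, shows how even a very accurate detector drowns in false positives when the behaviour it looks for is extremely rare:

```python
# Hypothetical numbers, purely to illustrate the base-rate problem the
# WRR report points at; they are not taken from the report itself.
population = 17_000_000       # roughly the Netherlands
true_positive_rate = 0.99     # detector flags 99% of actual plotters
false_positive_rate = 0.001   # flags 0.1% of innocent people
actual_plotters = 100         # assumed number of real suspects

flagged_guilty = true_positive_rate * actual_plotters
flagged_innocent = false_positive_rate * (population - actual_plotters)

# Precision: of everyone flagged, what fraction are real suspects?
precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(f"{flagged_innocent:.0f} innocent people flagged")
print(f"precision = {precision:.3%}")
```

Even with these generously optimistic assumptions, around seventeen thousand innocent people are flagged and far fewer than one in a hundred flagged people would be an actual suspect.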

If you don’t look Middle-Eastern, you might be able to convince yourself that it is better to be safe than sorry. However, the Norwegian philosopher Lars Svendsen demonstrated the short-sightedness of this argument ten years ago in his book A Philosophy of Fear. According to Svendsen, Europe lives in a culture of fear: we believe that we are more and more often exposed to increasing danger, from epidemics to terrorism. In reality we are safer than ever, but precisely for this reason we can afford to be worried about dangers that will probably never materialise. Fear is a by-product of luxury.

Meeting each other in good faith lies at the core of human relationships. We depend on each other constantly, every day: from the train engineer getting us to work to the restaurant employee serving our lunch. Without faith in other people our society would not function. Our permanent fear, however, undermines this faith. All new security measures have mistrust as their starting point. They undermine society and turn us into scared and isolated individuals. Caught in our fear, we have already become victims of terrorism.

Mistrust is also a self-fulfilling prophecy: if we avoid contact, we will never learn that the other person is not dangerous. Human interactions that require trust then become impossible, and non-standard behaviour is tolerated less and less. In this way, we limit our own freedom and the freedom of other people.

At the end of the sixteenth century, Michel de Montaigne wrote an essay about fear. On the run from war and the plague, the French statesman clearly saw the effect fear has on people. According to Montaigne, the fact that people will hang or drown themselves as a result of fear proves that being afraid is in some cases less bearable than death. Therefore man is “most afraid of fear itself”, he writes. Words of wisdom. If we really want terrorism to claim fewer victims, we must invest less in “pseudo measures” against terrorism itself, and more in measures that tackle our fear of terrorism.

This essay was published (in Dutch) as an opinion piece for the NRC newspaper. Thank you to Philip Westbroek for the translation and to -JosephB- for the photo.

Four evenings of public philosophy

A packed house at De Balie for "De Macht van Data"

Over the past two weeks I went to a number of evenings where public philosophy was practised in various ways. Below are four short impressions.

The shaky truth

Happy Chaos organised an evening in LAB11 about the shakiness of truth ("de wankele waarheid"). In three sessions they explored the ways in which journalism leaves its mark on reality and thereby shapes the truth.

The Elite Times, in room 2, addressed the question of whether parliamentary journalism is diverse enough. Thijs Broer (Vrij Nederland), Romana Abels (Trouw, which doesn’t have author pages), Kim van Keken (freelance, among others for Follow the Money) and Thomas Muntz (Investico) tried to explore whether the reporters in The Hague are too elitist.

The sports metaphors came thick and fast: journalists play swarm football, everyone chasing the same ball ("it surprised me that all the newspapers simultaneously pointed to Pechtold as the culprit"), and there is too much attention for the game and too little for the ball ("with Henri Keizer it was more about the failure of Keizer’s PR strategy than about his fraudulent dealings"). What stood out most was that everyone present, including the audience, was mostly playing inside baseball ("we regularly run into each other in the smoking room of Nieuwspoort"). The real diversity question (who is in the room, and how could it genuinely be different?) unfortunately never came up.

Spui25 on populism

At the top of the OBA, Amsterdam’s public library, Amsterdam University Press presented the most recent volume in the Elementaire Deeltjes series: Populisme. Cas Mudde, one of the two authors, gave “a short summary of a short book” that evening.

Mudde’s definition of populism is very clarifying. He defines it as a thin ideology that divides society into two opposing groups, ‘the pure people’ and ‘the corrupt elite’, and that wants politics to be based on the general will of the people. Populism is more than just a strategy for gaining power; it is also a (monistic) ideology. The two groups are homogeneous and the distinction between them is moral: everyone among the people is good and everyone in the elite is bad. Populists think that everyone among ‘the people’ shares the same values, so you can make policy that is good for everyone.

Populism can be both left-wing and right-wing, depending on its ‘host ideology’. At the moment it is more right-wing in the North and left-wing in the South, both in Europe and in the Americas. In Europe, populist parties receive about 20% of the vote, so the vast majority of people do not vote for a populist party. Populism is, however, stronger now than it has ever been. According to Mudde this is because the media structure has changed and because populist politicians have become better, particularly at using social media. The result is a polarisation of the political debate and a politicisation of certain issues (such as immigration). In opposition, populists often have a corrective function for liberal democracy, but as soon as they come to power they become a threat to it. In short: populism is an illiberal-democratic response to undemocratic liberalism.

The power of data

De Balie wanted to know how algorithms determine our lives. With De macht van Data they organised a packed evening for a packed house: three panels of three speakers each, two winners of an essay competition, two artists, and an introduction by a data scientist on top of that.

I myself took part in a panel on security, together with Rutger Rienks (predictive-policing expert at the Dutch police) and Marjolein Lanzing (philosopher and all-round hero). Can big data be deployed in a smart way to make society safer? Rutger Rienks thinks it can, and wrote the book Predictive Policing – Kansen voor veiligere toekomst about it. I myself saw quite a few risks in a police force that decides, on the basis of large quantities of data, where to deploy its capacity.

My biggest objection is that a shift takes place from tackling criminal behaviour to tackling deviant behaviour. If you stay on the toilet at Schiphol just a bit too long, the police will come and check what you are doing. With predictive policing, collecting data becomes a goal in itself for the police. That is worrying, because the police already fail to comply with the Police Data Act (Wet politiegegevens). Moreover, this kind of pattern recognition only works for crimes that contain some form of pattern, while most crime is impulsive. It sometimes seems as if efficiency is the highest good in government, when it should really be optimising for legitimacy.

Algorithms and security at De macht van data.
Photo: Jan Boeve / De Balie

Felix & Sofie: Objects of all countries… unite!

In one of the nicest small venues in Amsterdam, that of Perdu, Felix & Sofie organised a programme on object-oriented philosophy in the context of the climate and our living environment, under the title Objecten aller landen… Verenigt u! ("Objects of all countries… unite!"): a search for political imagination. Lieke Marsman read from her book Het tegenovergestelde van een mens and Huub Dijstelbloem delivered an accessible synthesis of Sloterdijk and Latour. But my favourite speaker of the evening was Lisa Doeland: she genuinely made me think, several times over.

Doeland is fascinated by waste; she is a self-described ‘wastophile’ (‘afvalofiel’). In her philosophical research on this theme she has found that waste is first of all relative: one man’s trash is another man’s treasure. Much waste theory treats waste as matter out of place, but it is not necessarily an ordering problem of place; it is much more matter out of time, or perhaps even matter out of time scales. When we throw things away, we deny their essential relativity. Waste, according to Doeland, is a break with ambiguity. I therefore kept wondering how Doeland would look at Marie "KonMari" Kondo, who, coming from Shintoism, has a very particular relationship with objects and with discarding them, but one without ambiguity.

Magnificently present in his absence was the object-oriented ontologist Timothy Morton. His book Dark Ecology is now on my reading list. The evening also reminded me of the days I spent in the British Museum viewing a history of the world in 100 objects.

Hobbes and the Problem of Sour Grapes

Hobbes's Leviathan


Certain theories of freedom have difficulty dealing with the problem of ‘sour grapes’: the idea that you can make yourself free by changing what you want when you run into limitations (inspired by the fox in Aesop’s fables who, upon finding out that the grapes could not be reached, decided they must be sour). This paper first explores in which (theoretical) situations this problem of preference adaptation pops up: which perspectives on liberty open the door to this particular way of liberating yourself? It then argues that a definition of freedom which allows for a ‘contented slave’ does not align with our common understanding of the concept of liberty. The paper next shows how the classic interpretation of Hobbes’s ideas about deliberation forming the will, and his concept of freedom as nonfrustration, make him particularly vulnerable to the issue of preference adaptation and seem to leave him no other choice than to bite the bullet. Finally, the paper explores whether Hobbes’s concept of liberty can be interpreted in a way that escapes the ‘sour grapes’ trap while keeping the rest of his political project alive.

This paper can also be downloaded as a PDF.


A famished fox saw some clusters of ripe black grapes hanging from a trellised vine. She resorted to all her tricks to get at them, but wearied herself in vain, for she could not reach them. At last she turned away, hiding her disappointment and saying: "The Grapes are sour, and not ripe as I thought."1

As soon as this famished fox from Aesop’s fables realised that she wouldn’t be able to get to the grapes, she changed what she wanted and convinced herself that she wasn’t interested in eating grapes after all, as the grapes were sour. Aesop’s fable about the fox is the origin of the concept of ‘sour grapes’, which is used whenever somebody disparages that which they can’t have. Jon Elster calls this process of changing our preferences on the basis of the constraints we encounter "adaptive preference formation".2

Philosophical theories that define freedom as the absence of external constraints on that which you want to do (e.g. you are free if, when you want to leave the room, the door is indeed open) often have a hard time dealing with liberation through preference adaptation. When confronted with the image of the contented slave, one who cannot imagine wishing for anything other than the indentured situation they are in, they would have to call this person free.

Even though preference adaptation clearly is a natural psychological phenomenon and sometimes even touted as a path to happiness, the image of the contented slave also shows that a theory of freedom that allows for adapting preferences to make oneself free does not align well with our common intuitive perception of liberty.

Thomas Hobbes likely falls into the ‘sour grapes’ trap. In Leviathan he defines freedom as the absence of external opposition3 and a free man as somebody who is not hindered in doing that which he wants to do.4 On Hobbes’s view of freedom, you can thus liberate yourself by changing your will.

This paper concludes by seeing if and how a different interpretation of Hobbes’s thinking about liberty or a small concession on Hobbes’s part could align him with one or more of the ways of escaping the preference adaptation problem.

1. The Problem of Preference Adaptation

In school, whenever I had to do something like memorize the periodic table, my father would say the key thing to doing boring tasks is to think about not so much what you’re doing but the importance of why you are doing it. Though when I asked him if slavery wouldn’t have been less psychologically damaging if they’d thought of it as "gardening", I got a vicious beating that would’ve made Kunta Kinte wince.5

In “Sour Grapes–Utilitarianism and the Genesis of Wants”, Jon Elster delineates the problem of adaptive preference formation by comparing it with other mechanisms of preference change that are closely related to it and are often confused with it.6

According to Elster we need to distinguish adaptive preference formation from the change of preferences that can come about through learning or experience: the former is reversible, whereas the latter isn’t. Adaptive preferences are the effect of a limited set of options and not the cause. They are endogenous to a person and can’t come from the deliberate manipulation of wants by other people. Nor can the changed preferences be the result of deliberate character planning (as in the Stoic or Buddhist traditions), as character planning is intentional rather than causal and usually upgrades the accessible options, whereas sour grapes downgrades the inaccessible options. Finally, they need to be distinguished from wishful thinking and other rationalisations: wishful thinking shapes the perception of the situation instead of the evaluation of the situation.

In psychology adaptive preference formation is a very real phenomenon. Psychologists call it cognitive dissonance reduction: striving for internal consistency when you hold contradictory beliefs. There is even some evidence that this feature of our cognitive make-up developed quite early from an evolutionary perspective, as nonhuman primates also exhibit decision rationalisation. A 2007 Yale study titled “The Origins of Cognitive Dissonance”7 showed that both four-year-old children and capuchin monkeys will downgrade how much they desire something after they were unable to obtain that particular thing at an earlier time.

We can all recognise the very human (not to say primate) trait that the unavailability of something we initially want changes our perception of how much we want it. When the person we are infatuated with tells us that it is never going to happen, we can suddenly see all the character traits that would have prevented the relationship from ever working anyway. And when you are living in a tiny apartment without the resources to get something bigger, it is easy to think of all the reasons why having a large house is mostly a burden. Attempting to reduce cognitive dissonance by adapting your preferences is probably a mentally healthy exercise and will likely lead to an increase in happiness. But it would be strange to say that it can also lead to an increase in liberty.

As soon as you define liberty as having the freedom to do what you want or to satisfy your desires, you run into the problem of adaptive preference formation. Isaiah Berlin states this in a clear and concise way:

If degrees of freedom were a function of the satisfaction of desires, I could increase freedom as effectively by eliminating desires as by satisfying them: I could render men (including myself) free by conditioning them into losing the original desires which I have decided not to satisfy. Instead of resisting or removing the pressures that bear down upon me, I can ‘internalise’ them.8

Berlin then writes about the slave Epictetus who, by reducing his desires, managed to become freer than his master.

Preference adaptation is not just a philosophical problem. It is something that at least some slaves actually did. When Tocqueville traveled through the United States in the early 1830s he wasn’t sure whether he should call it a proof of God’s mercy or a proof of God’s wrath that:

The negro, who is plunged in this abyss of evils, scarcely feels his own calamitous situation. Violence made him a slave, and the habit of servitude gives him the thoughts and desires of a slave; he admires his tyrants more than he hates them, and finds his joy and his pride in the servile imitation of those who oppress him: his understanding is degraded to the level of his soul.9

It would be preposterous to say that an early 19th-century slave on a plantation in the Southern US, however contented, could be free. Clearly, changing your desires cannot be a way to liberate yourself. There must be more to freedom than not being frustrated in your wants and desires.

2. Hobbes’s Deliberations on Liberty

In his Leviathan, Thomas Hobbes sets out to describe, starting from first principles, the ‘Artificiall Man’, or the ‘Body Politique’, that is the commonwealth. In doing this, he laid the foundation both for much of Western political philosophy (in particular the field of ‘social contract theory’)10 and for liberal thought.11

Having lived through multiple civil wars, Hobbes was convinced that the natural condition of man is a war of everyone against everyone. To escape this dreaded predicament he makes the argument that we should covenant with each other and hand over the authority to an absolute and undivided sovereign. This is the only way that we can live secure lives.

Hobbes wrote his Leviathan in reaction to the defeat and execution of Charles I in 1649. He was working on his book De corpore at the time, was shocked by what happened to the king, and felt forced to postpone that work to write Leviathan and “fight on behalf of all kings”.12 The prevailing idea at the time was that only a republic could provide true freedom and that living in a monarchy is like living in servitude, if not like living in slavery. If Hobbes were to defend the monarchy he would have to come up with a conception of liberty that would not be affected by the choice for a particular political system.

He managed to accomplish this by separating the liberty to act in a particular way from the power to perform the action involved. In the classic interpretation of Hobbes, he has a purely external (and negative) perspective on liberty and can’t see how internal limitations can affect freedom:

Liberty, of Freedome, signifieth (properly) the absence of Opposition; (by Opposition, I mean the externall Impediments of motion;) [..] For whatsoever is so tyed, or environed, as it cannot move, but within a certain space, which space is determined by the opposition of some externall body, we say it hath not Liberty to go further.13

And he distinguishes freedom from having the power to act:

But when the impediment of motion, is in the constitution of the thing it selfe, we use not to say, it wants the Liberty; but the Power to move; as when a stone lyeth stil, or a man is fastned to his bed by sicknesse.14

Just to make it absolutely clear that it is only external impediments that can put limits on liberty, Hobbes writes about the sailor who very willingly throws his goods into the sea to save himself. He considers that a free action. This leads him to the following definition of a free man:

A Free-Man, is he, that in those things, which by his strength and wit he is able to do, is not hindred to doe what he has a will to.15

This is obviously a very limited definition of freedom. You can’t lose your freedom because you are scared to do something (not even when somebody threatens you with a weapon), and you can’t lose your freedom through being dominated. Hobbes even thinks it is an abuse of the concept of freedom to apply it to anything that isn’t a physical body: only things that are subject to motion can be hindered. To put it simply: you can literally decrease the liberty of a prisoner by making his jail cell smaller. To be free does not require you to have a choice.

There is another way that Hobbes talks about liberty in Leviathan. This has to do with the act of deliberating. He makes an etymological error and suggests that to deliberate comes from de-liberate or to make unfree (the word actually comes from librare, to weigh)16:

[It] is called Deliberation; because it is a putting an end to the Liberty we had of doing, or omitting, according to our own Appetite, or Aversion. [..] Every Deliberation is then sayd to End, when that whereof they Deliberate, is either done, or thought impossible; because till then wee retain the liberty of doing, or omitting, according to our Appetite, or Aversion. In Deliberation, the last Appetite, or Aversion, immediately adhaering to the action, or to the ommission thereof, is that wee call the WILL; the Act, (not the faculty,) of Willing.17

According to Hobbes the will comes at the end of the deliberation process, when the different passions have had their say. The will is then the same thing as the intention to act.

If you combine both of these conceptions of liberty, you arrive at the very counterintuitive idea that in Hobbes’s view only stupid or irrational people can be unfree. Who else would form the intention to do something that they can’t do? If you are rational, then you will adapt your preferences to your situation. So you can be free even in jail, as long as you make sure that you don’t want to go anywhere. A classic example of sour grapes.

3. Attempting to Save Hobbes

‘But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.’18

Before we discuss whether Hobbes can be saved from his self-induced sour grapes situation, we first need to ask whether he would want to be saved. Even though Hobbes himself acted as if his definition of a free man was completely obvious (“this proper, and generally received meaning of the word”19), his perspective was actually quite controversial, even in his time. Skinner calls the contention that a free-man is simply someone who is physically unimpeded from exercising their powers at will “sensationally polemical”,20 and according to Pettit, Hobbes’s contemporaries thought his account of freedom was “strange to the point of being barely credible” and his definitions “so at variance with common usage that his readers were often deeply exasperated.”21

So it could be argued that Hobbes would be more than happy to bite the sour grapes bullet. His way of looking at freedom served a particular purpose: to show that there can be freedom without independence, that it is possible to live in a monarchy and still be free. However, it must also be said that Hobbes has a very realistic (and empirical) perspective on human nature. His whole project exists so that people can not only be secure but also lead meaningful and productive lives. It is hard to imagine that Hobbes would simply accept the paradox that only the stupid and irrational can be unfree. It is therefore worthwhile to see if there is a way of interpreting his thinking on freedom that solves the sour grapes problem while not implying that you can only have liberty in a free state.

Generally there are at least three potential solutions to the problem of preference adaptation: (I) aligning what somebody wants to do with what they ought to do, (II) enlarging a negative concept of freedom to include not just being free to do what you want to do, but also whatever you might want to do, or (III) requiring that your tastes are shaped by yourself rather than by outside agents.22 The first (Rousseauian) solution is too normative to mesh well with Hobbes’s thinking, but the two latter solutions hold the potential to help Hobbes out.

The second (Berlinian) solution initially seems impossible to align with Hobbes’s distaste for the republican point of view. Hobbes railed against the “Democraticall writers” of his time who were of the opinion that those who live in a popular common-wealth enjoy liberty, while those who live in a monarchy are slaves.23 To defend the concept of liberty inside a monarchy he needed to make clear that being free of subjection to arbitrary power isn’t a necessary condition of being free. He does that by saying that to be a free-man is to be free from being externally impeded. Skinner summarises this as follows:

The contrast he draws between himself and the theorists of republican liberty is [..] that, whereas they take it to be a necessary condition of being a free-man that we should be free from the possibility of arbitrary interference, he treats it as a sufficient condition that we should be free from interference as a matter of fact. [..] Hobbes is denying that the mere fact of living in dependence on the will of others plays any part in limiting the freedom of the free-man.24

Hobbes then makes it clear that you always have the liberty to not obey the laws if you want. Obeying the law is in that sense a voluntary act. The threat of not being protected by a sovereign can’t be seen as limiting your freedom. The fear of what would happen if you disobey the sovereign doesn’t impede your liberty. "Feare, and Liberty are consistent" as Hobbes writes.25

This particular way of reasoning against republicanism does not preclude Hobbes from making a small concession to Berlin by expanding his concept of liberty from pure freedom of action to one that includes freedom of choice. We know from his philosophical exchange with Bishop Bramhall that Hobbes explicitly did not make this concession:

[A person] may deliberate of that which is impossible for him to do, as in the example he alleges of him that deliberates whether he shall play at tennis, not knowing that the door of the tennis-court is shut against him; yet it is no impediment to him that the door is shut till he have a will to play, which he has not till he has done deliberating whether he shall play or not.26

But the fact is that he could have made this concession without losing the distinction between external limitations and internal limitations (like fear), and so without losing the argumentative ammunition he needs to defend a monarchy.

The third way of escaping the problem of sour grapes takes a more positive approach to liberty and says that freedom requires autonomy. This approach can potentially help Hobbes too. Is it possible to find this more positive approach to freedom in Hobbes’s writing?

In Liberty, Rationality, and Agency in Hobbes’s Leviathan27 David van Mill argues against the traditional interpretation of Hobbes’s concept of liberty as a purely negative freedom and replaces it with what he calls Hobbes’s “extended” theory of freedom: the idea that Hobbes discusses many other conditions of freedom besides the absence of external impediments.28

Hobbes is usually seen as discussing freedom only as the lack of external impediments, but Van Mill points to quite a few places in Leviathan where Hobbes seems to realise that there can also be internal impediments to liberty. For example, when Hobbes writes that children, fools and madmen are not obliged by the law,29 he links freedom to responsibility and rationality, and thus introduces internal considerations into the question of liberty.30 Or when Hobbes writes:

The Liberty of a Subject, lyeth therefore only in those things, which in regulating their actions, the Soveraign hath praetermitted: such as is the Liberty to buy, and sell, and otherwise contract with one another; to choose their own aboad, their own diet, their own trade of life, and institute their children as they themselves think fit; & the like.31

Van Mill then writes that this statement "means that liberty exists where the law is silent and that within this realm, freedom includes the liberty to choose."32 In cases like these, according to Van Mill:

Hobbes is using the term liberty in a more conventional sense than his strict definition allows for because he is talking about civil liberties rather than whether one is being impeded or not by physical external barriers. [..] The liberty of the subject actually has very little to do with the absence of external obstacles. What Hobbes is really concerned with is not unimpeded movement, but “a right or liberty of action.” [..] Clearly Hobbes thinks that civil society limits absolute freedom, but that this is necessary for a more worthwhile bounded liberty.33

Van Mill then argues that Hobbes was primarily interested in promoting the development of rational individuals as a necessary precondition for a society at peace, so that we might live autonomous lives. It might seem difficult to show that Hobbes was concerned with autonomy. He is mostly depicted as somebody who saw humans as survival machines, pursuing immediate gratification, with reason being the slave of the passions. Van Mill tackles this problem by focusing on Hobbes’s thoughts on rational agency.

In the introduction to Leviathan Hobbes already makes it clear that he thinks that humans as a species are rational: “Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man.”34 He is convinced that humans can only understand themselves through rational introspection. Hobbes makes a distinction between “Trayne of Thoughts Unguided” and “Trayne of Thoughts Regulated”35 and wouldn’t have done so if he didn’t want to argue that the latter is the preferred version. Just from the title of chapter 8, ‘Of the Vertues commonly called Intellectuall’, it is evident that Hobbes values intellectual virtues. In that chapter he also makes it clear that passions that are unguided and out of control are a form of madness.36 Van Mill writes that these “passages all point to the conclusion that Hobbes thought that unguided and untempered passions are inconsistent with rational action.”37

To live a contented life you would need to achieve a balance between the passions so that you can reason your way to the best course of action. As Hobbes says in chapter 8:

[Without] Steddinesse, and Direction to some End, a great Fancy is one kind of Madnesse; such as they have, that entring into any discourse, are snatched from their purpose, by everything that comes in their thoughts [..] Which kind of folly, I know no particular name for.38

Van Mill comments: “Perhaps lack of agency is an adequate name for such a condition,”39 and goes on to suggest that “Hobbes believes we must rationally order our passions, thoughts, desires and actions in order to live a fulfilling life,”40 and that having to choose between our passions means that we must display some of the attributes of autonomy. Thus Van Mill would argue that Hobbes’s perspective on our human nature obviates the sour grapes problem.


This paper first distinguished preference adaptation (sour grapes) from other mechanisms of preference change like wishful thinking and deliberate character planning. Preference adaptation is a common psychological phenomenon and likely a mentally healthy exercise. However, even though it might make you happy, it is hard to argue that it makes you free.

The clearest example of this is the nearly paradoxical figure of the contented slave. If freedom consists in being able to do what you desire, then slaves could free themselves by desiring nothing more than the lives they are already living.

Hobbes, in defense of a monarchical system of governance, defines the concept of liberty in Leviathan in a strictly external and negative fashion. Internal limitations (like fear) can’t affect freedom; it is only the absence of external opposition that makes you free. Hobbes also looks at the deliberation process and defines the will as the intention to act: the last appetite or aversion right before the action.

This makes Hobbes particularly vulnerable to the sour grapes problem and its extended version: the idea that only idiots and irrational people form the intention to do something for which there are external limitations. Rational people would only act in ways that are congruent with their options.

Assuming that Hobbes wouldn’t just bite the bullet (or eat the sour grapes if you will) there are three classic escapes: taking a normative approach, expanding liberty to include freedom of choice, and requiring autonomy. This paper explored whether Hobbes’s writing could be interpreted or slightly adapted to be aligned with the latter two options.

It was first made clear that Hobbes could have slightly expanded his concept of freedom so that not only external impediments to motion (to acting) but also external impediments to choice count as limiting liberty. Doing so would still have allowed him to argue against the republicans and for a monarchy.

Finally this paper explored Van Mill’s analysis of Hobbes’s concerns with autonomy. Van Mill makes a strong case for extending the Hobbesian concept of freedom to also include internal constraints and opportunities. By showing how Hobbes embraces the concept of agency, he shows how we can rationally give up a little bit of freedom to be able to self-realise, lead autonomous lives and avoid the sour grapes trap.


Aesop. Aesop’s Fables. Translated by George Fyler Townsend, 2008.
Beatty, Paul. The Sellout. London: Oneworld Publications, 2016.
Berlin, Isaiah. Liberty: Incorporating ’Four Essays on Liberty’. Edited by Henry Hardy. Oxford: Oxford University Press, 2002.
Egan, Louisa C., Laurie R. Santos, and Paul Bloom. “The Origins of Cognitive Dissonance: Evidence from Children and Monkeys.” Psychological Science 18, no. 11 (November 2007): 978–83. doi:10.1111/j.1467-9280.2007.02012.x.
Elster, Jon. “Sour Grapes–Utilitarianism and the Genesis of Wants.” In Utilitarianism and Beyond, edited by Amartya Kumar Sen and Bernard Arthur Owen Williams, 219–38. Cambridge: Cambridge University Press, 1982.
Gaus, Gerald, Shane D. Courtland, and David Schmidtz. “Liberalism.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2015. Stanford: Metaphysics Research Lab, Stanford University, 2015.
Hobbes, Thomas. Leviathan. Edited by Richard Tuck. Cambridge: Cambridge University Press, 1996.
Hobbes, Thomas, and John Bramhall. Hobbes and Bramhall on Liberty and Necessity. Edited by Vere Chappell. Cambridge: Cambridge University Press, 1999.
Huxley, Aldous. Brave New World. London: Flamingo, 1994.
Lloyd, Sharon A., and Susanne Sreedhar. “Hobbes’s Moral and Political Philosophy.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2014. Stanford: Metaphysics Research Lab, Stanford University, 2014.
Mill, David van. Liberty, Rationality, and Agency in Hobbes’s Leviathan. Albany: State University of New York Press, 2001.
Parijs, Philippe van. Real Freedom for All: What (If Anything) Can Justify Capitalism? Oxford: Clarendon Press, 1997.
Pettit, Philip. “Liberty and Leviathan.” Politics, Philosophy & Economics 4, no. 1 (February 2005): 131–51. doi:10.1177/1470594X05049439.
Skinner, Quentin. Hobbes and Republican Liberty. Cambridge: Cambridge University Press, 2008.
Tocqueville, Alexis de. Democracy in America — Volume 1. Translated by Henry Reeve, 2006.

  1. Aesop, Aesop’s Fables.

  2. Elster, “Sour Grapes–Utilitarianism and the Genesis of Wants,” 219.

  3. Hobbes, Leviathan, 145.

  4. Ibid., 146.

  5. Beatty, The Sellout, 106.

  6. Elster, “Sour Grapes–Utilitarianism and the Genesis of Wants,” 220-226.

  7. Egan, Santos, and Bloom, “The Origins of Cognitive Dissonance.”

  8. Berlin, Liberty, 31.

  9. Tocqueville, Democracy in America — Volume 1, chapter XVIII.

  10. Lloyd and Sreedhar, “Hobbes’s Moral and Political Philosophy.”

  11. Gaus, Courtland, and Schmidtz, “Liberalism.”

  12. Skinner, Hobbes and Republican Liberty, 125–26.

  13. Hobbes, Leviathan, 145.

  14. Ibid., 146.

  15. Ibid.

  16. Pettit, “Liberty and Leviathan,” 133.

  17. Hobbes, Leviathan, 44.

  18. Huxley, Brave New World, 219.

  19. Hobbes, Leviathan, 146.

  20. Skinner, Hobbes and Republican Liberty, 151.

  21. Pettit, “Liberty and Leviathan,” 132.

  22. Parijs, Real Freedom for All, 18-19.

  23. Hobbes, Leviathan, 226.

  24. Skinner, Hobbes and Republican Liberty, 154-155.

  25. Hobbes, Leviathan, 146.

  26. Hobbes and Bramhall, Hobbes and Bramhall on Liberty and Necessity, 81.

  27. Mill, Liberty, Rationality, and Agency in Hobbes’s Leviathan.

  28. Ibid., 48.

  29. Hobbes, Leviathan, 187.

  30. Mill, Liberty, Rationality, and Agency in Hobbes’s Leviathan, 57-58.

  31. Hobbes, Leviathan, 148.

  32. Mill, Liberty, Rationality, and Agency in Hobbes’s Leviathan, 59.

  33. Ibid., 70.

  34. Hobbes, Leviathan, 9.

  35. Ibid., 20-21.

  36. Ibid., 54.

  37. Mill, Liberty, Rationality, and Agency in Hobbes’s Leviathan, 86.

  38. Hobbes, Leviathan, 51.

  39. Mill, Liberty, Rationality, and Agency in Hobbes’s Leviathan, 94.

  40. Ibid.