A Neorepublican Perspective on Automated Decision‐making

In September 2024, I started working on a PhD after securing funding from NWO. This is the introduction to the proposal that I submitted to them. You can read the full proposal here (PDF).


If someone applied for welfare at the municipality of Amsterdam between April and July 2023, chances are that their application was given a score for the risk that it was fraudulent. Based on this score, enforcement professionals could investigate the application further. The score would have been produced by a machine learning algorithm trained on previous welfare applications.

This type of statistical decision-making has found its way into many aspects of our lives. When you buy insurance, apply for credit, or cross a border, you are being profiled. Your particular situation is generalized to match you with people who share a set of statistically relevant characteristics. Machine learning has encouraged the adoption of these methods by making it possible to derive decision rules from large datasets.
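
To make this concrete, here is a minimal sketch of what such risk-score profiling can look like in code. It is purely illustrative: the synthetic data, the choice of scikit-learn's GradientBoostingClassifier, and the flagging rule are my own assumptions, not details of Amsterdam's actual system.

```python
# Purely illustrative sketch of risk-score profiling: train a classifier on
# past cases and use its predicted probability as a "fraud risk" score.
# The features, model choice and flagging rule are assumptions for
# illustration, not details of any real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic "previous applications": each row describes an applicant with a
# few numeric features; the label says whether the case was marked fraudulent.
X_past = rng.normal(size=(1000, 4))        # e.g. age, household size, income, ...
y_past = rng.integers(0, 2, size=1000)     # historical fraud labels (synthetic)

model = GradientBoostingClassifier().fit(X_past, y_past)

# Score a new batch of applications and flag the highest-risk ones for review.
X_new = rng.normal(size=(50, 4))
risk_scores = model.predict_proba(X_new)[:, 1]
flagged = np.argsort(risk_scores)[::-1][:5]  # top 5 scores go to an investigator
print("Applications flagged for manual investigation:", flagged)
```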

An important question about profiling in general is whether it is biased against particular groups. Are all groups treated equally? Because profiling decisions are data-driven, this bias can be measured. That possibility has led to a flurry of activity in a young field of computer science: fairness in machine learning. The field has produced essentially three mutually exclusive statistical definitions of ‘fairness’. None of them is sufficient to claim fairness in a sense that aligns with our moral intuitions, as each still allows for blatantly unfair practices. Despite their shortcomings, these narrow definitions have been central in the debate about the legitimacy of machine learning-based decision-making, usually without consideration of other possible normative ideals.
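
As an illustration of what such statistical definitions actually measure, the sketch below reports per-group rates for one common reading of the three criteria (demographic parity, equalized odds, and calibration/predictive parity). That reading, the synthetic data, and the helper function are my own illustration, not part of the proposal; impossibility results show that, outside trivial cases, the three criteria cannot all hold at once.

```python
# Illustrative per-group report for three commonly cited statistical fairness
# criteria, given binary predictions, true labels and a group attribute.
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    for g in np.unique(group):
        m = group == g
        selected = y_pred[m] == 1
        positives = y_true[m] == 1
        negatives = y_true[m] == 0
        print(f"group {g}:")
        print(f"  selection rate (demographic parity):  {selected.mean():.2f}")
        print(f"  true positive rate (equalized odds):  {y_pred[m][positives].mean():.2f}")
        print(f"  false positive rate (equalized odds): {y_pred[m][negatives].mean():.2f}")
        print(f"  precision / PPV (calibration-like):   {y_true[m][selected].mean():.2f}")

# Tiny synthetic example with two groups.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)
group_fairness_report(y_true, y_pred, group)
```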

For example, the municipality of Amsterdam used only one specific statistical perspective on fairness when they checked their welfare fraud profiling tool for bias. In doing so, they ignored other possible fairness criteria and implicitly took fairness to mean simply the absence of unlawful discrimination. Their internal discussion and the resulting political debate then focused on this one particular form of bias, without considering the legitimacy of the (automated) profiling decisions or how they might affect the power relations between the affected parties. This limited perspective is the natural consequence of the currently skewed debate: technical jargon and legal considerations are at the forefront, while ethics and political philosophy lag behind.

The neorepublican theoretical framework developed by (among others) Pettit and Lovett is a political philosophy with vast potential to fill this gap. Its conception of political freedom as nondomination (nobody having arbitrary or unchecked control over the choices of another) serves as fruitful ground for public philosophy. For example, neorepublicanism has made explicit recommendations on how controls around state surveillance should be institutionalized, in a way that a more classical liberal idea of freedom as noninterference cannot. More recently, ‘digital domination’ has become a powerful lens for critically examining big tech and holding its power to account.

Neorepublicanism pays special and unique attention to reining in any potential abuse of power and arbitrary control. That focus forces us to “rethink issues of legitimacy and democracy, welfare and justice, public policy and institutional design”. Neorepublicanism can intervene forcefully on the issue of the legitimacy of machine learning-based profiling, and the concept of fairness in this context can be part of that rethink too. The main research question therefore becomes:

How can the neorepublican framework inform normative and institutional thinking about the profiling of individuals by machine learning algorithms?


I’ll make sure to share my progress on this blog.