What exploitation is

The forthcoming article “What exploitation is” by Benjamin Ferguson, Peter Hans Matthews, David Ronayne, and Roberto Veneziani is summarized by the author(s) below.

What does it mean to say someone is being exploited? The word is often used in debates about sweatshops, migration, or gig work, but philosophers and social scientists have long disagreed about its precise meaning. Some argue that exploitation is about unfair outcomes—when one side gains much more than the other. Others think it is about power—when one party can disproportionately dictate terms. Some emphasize disrespect or bad luck. But until now, nobody had systematically asked how experts and lay subjects actually apply the concept.

Our study set out to do just that. We surveyed more than 2,000 people—around 550 professional philosophers and 1,500 members of the public. Each person read short scenarios, or vignettes, about everyday transactions (buying and selling mugs), in which we varied key features such as unequal payoffs, market power, unmet basic needs, prior injustice, or disrespectful attitudes. Participants then rated how exploitative each scenario was on a scale from 0 (“not at all”) to 100 (“maximally”). In total, we collected over 23,000 ratings.

The results were striking. First, exploitation is not an empty label: people clearly distinguish exploitative from non-exploitative interactions. In baseline scenarios where both parties benefitted equally and no one had special power, the vast majority rated them as “not at all exploitative.”

Second, both inequality and power matter. When one side gained more, people judged the scenario more exploitative. The same was true when one side had monopoly power. But the real force came when inequality and power were combined: people judged those scenarios as more exploitative than the sum of the two effects taken separately. In other words, subjects are most likely to apply the label of exploitation when unfair gains and unequal power reinforce one another.

Third, certain background conditions amplify judgments. Exploitation is seen as especially severe when power stems from an injustice. By contrast, disrespectful attitudes or sheer bad luck were much weaker drivers.

Finally, experts and laypeople largely agreed. Both groups of subjects displayed a shared understanding of what makes an interaction exploitative. While philosophers emphasized power slightly more, and the public put more weight on unequal outcomes and disrespect, the similarities between the groups far outweighed the differences.

These findings matter beyond academic theory. They suggest that public concerns about sweatshops, predatory loans, or migrant work are not simply about inequality or coercion alone, but about their interaction—power exercised to secure unfair advantage, often against a backdrop of injustice. The results challenge narrow theories that treat exploitation as either purely distributive or purely about domination, and instead support hybrid accounts that capture both.

By mapping the ordinary meaning of exploitation, our study provides a common foundation for future debates in ethics, politics, and policy. If lawmakers, activists, and employers want to take exploitation seriously, they must attend not just to unequal outcomes or to power imbalances, but to the ways these combine—especially when they leave people with unmet needs or result from prior injustice.

About the Author(s): Benjamin Ferguson is a Professor of Philosophy at the University of Warwick and the director of Warwick’s Philosophy, Politics, and Economics program, Peter Hans Matthews is the Charles A. Dana Professor of Economics at Middlebury and Distinguished Visiting Professor at Aalto University in Helsinki, Finland and the Helsinki Graduate School of Economics, David Ronayne is an Assistant Professor of Economics at the European School of Management and Technology (ESMT) Berlin, and Roberto Veneziani is a Professor in Economics at the School of Economics and Finance, Queen Mary University of London. Their research “What exploitation is” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Using large language models to analyze political texts through natural language understanding

The forthcoming article “Using large language models to analyze political texts through natural language understanding” by Kenneth Benoit, Scott De Marchi, Conor Laver, Michael Laver, and Jinshuai Ma is summarized by the author(s) below.

LLMs Can Read and Locate Policy Positions from Political Texts Better Than Experts 

For decades, political scientists have faced a frustrating trade-off when analysing political texts. We could recruit human experts to read documents for meaning, capturing nuance and intensity, but this approach is prohibitively expensive and doesn’t scale. Alternatively, we could use automated “text-as-data” methods that count words and identify patterns, but these remain blind to what texts actually mean. 

Large language models (LLMs) have broken this impasse. 

In our study, we developed protocols for using LLMs to estimate political parties’ policy positions from their manifestos. Rather than treating manifestos as bags of words to be counted, we asked LLMs to read each document holistically, summarise what it says about key policy issues, and then score those positions on defined scales, much as a human expert would. 
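To make the protocol concrete, here is a minimal sketch of a two-step “summarise, then score” procedure for one manifesto and one policy dimension, using the OpenAI Python client; the model name, prompt wording, and scale anchors are illustrative placeholders, not the exact instruments used in the paper.

    # Hedged sketch: a "summarise, then score" pass over one manifesto.
    # The model name, prompts, and 0-10 anchors are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def score_manifesto(manifesto_text: str, dimension: str, anchors: str) -> dict:
        # Step 1: read the document holistically and summarise its position.
        summary = ask(
            f"Read the following party manifesto and summarise, in one short "
            f"paragraph, its position on {dimension}.\n\n{manifesto_text}"
        )
        # Step 2: score the summarised position on a defined scale,
        # much as a human expert survey respondent would.
        score = ask(
            f"On a 0-10 scale where {anchors}, score this position summary. "
            f"Reply with the number only.\n\n{summary}"
        )
        return {"dimension": dimension, "summary": summary, "score": float(score)}

    # Hypothetical usage:
    # result = score_manifesto(text, "economic policy",
    #                          "0 = strongly left and 10 = strongly right")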

The results exceeded our expectations. Across six policy dimensions (economic policy, social policy, immigration, European integration, environment, and decentralisation), correlations between LLM estimates and benchmark expert surveys typically ranged from 0.87 to 0.92. This approaches the theoretical upper bound: the level of agreement we’d expect between two independent expert surveys measuring the same thing. 

Crucially, these findings are robust and replicable. When we repeated our analysis three months later using the same LLMs, results correlated above 0.95 with the original run. When we replicated using entirely different, open-weight models (DeepSeek, Llama, and Gemma), the results remained consistent. This is replication in the true scientific sense, not mere mechanical reproducibility. Like highly reliable human coders who reach the same substantive conclusions despite inevitable minor variations in individual judgements, different LLMs converge on the same estimates even though each run involves some stochastic variation. This matters enormously for scientific credibility. 

We also applied our method to coalition government agreements, documents for which no expert benchmarks exist. Here, LLM estimates significantly outperformed traditional hand-coding in conforming to theoretical predictions about where coalition policy should fall relative to member parties’ positions. 

What are the implications? LLMs offer a practical way to generate expert-quality estimates of policy positions at massive scale, in virtually any language, at minimal cost. Projects like the Manifesto Project spent decades and millions of dollars to code thousands of documents. Similar analyses can now be conducted by individual researchers in days, for hundreds, not millions of dollars. 

This doesn’t mean LLMs are perfect. On issues like decentralisation, where manifestos systematically avoid stating unpopular positions, LLM scores diverged from expert judgements. This reveals not a flaw in the method, but something interesting about how parties strategically craft their public commitments. 

The broader lesson is that LLMs, used carefully with appropriate protocols, can serve as legitimate scientific instruments for political text analysis. As these models continue to improve, their potential to democratise research and enable scholars anywhere to conduct sophisticated analyses without massive resources is transformative. 

About the Author(s): Kenneth Benoit is Dean of the School of Social Sciences and Professor of Computational Social Science, Singapore Management University, Scott De Marchi is a Professor of Political Science and Director of the Decision Science program at Duke University, Conor Laver is a Lecturer at Northeastern University, Michael Laver is an Emeritus Professor of Politics at New York University, and Jinshuai Ma is a Research Officer in Quantitative Text Analysis at the London School of Economics. Their research “Using large language models to analyze political texts through natural language understanding” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

You and whose economy? Group-based retrospection in economic voting

The forthcoming article “You and whose economy? Group-based retrospection in economic voting” by Christoffer Hentzer Dausgaard is summarized by the author below.

The economy plays a major role in elections, and decades of research suggest that voters are predominantly sociotropic, focusing on the national economy when evaluating incumbents. Yet, the dominance of sociotropic voting presents a puzzle. Citizens have diverse, often conflicting interests, and political leaders inevitably align with some groups in society over others. Ignoring these distributional conflicts seems at odds with what we know about voter behavior: that social group memberships profoundly shape how people think about politics and that voters care about group interests.

In this paper, I address this puzzle and argue that voters sanction incumbents for the economic performance of their own social in-groups, beyond the nation as a whole and their own pocketbooks. Group-level economic trends offer a more reliable signal than individual circumstances about whether the incumbent’s economic management serves group members’ interests. Importantly, I theorize that voters are especially sensitive to how their groups perform relative to the national trend: they punish incumbents when their groups fall behind a growing economy and reward them when their groups outperform a struggling one. This “group-based retrospective voting” thus introduces important limits to sociotropic voting.

Isolating the effect of group-level economic conditions is difficult. Existing studies of, e.g., local economic voting have mostly relied on observational cross-sectional comparisons that face two key challenges. First, group economic outcomes are endogenous, as incumbents may strategically favor pre-existing supporters. Second, even if this relationship were causal, it is unclear whether voters care specifically about group performance or are simply responding to their own improved finances (pocketbook voting) or using local conditions as a signal of broader national trends (sociotropic voting).

To overcome these problems, I test the theory using two complementary approaches. First, I analyze British panel survey data showing that changes in the economic performance of class and regional in-groups predict changes in incumbent support, holding sociotropic and pocketbook evaluations constant. Second, I conduct three pre-registered experiments in Denmark and the United States, randomizing true economic information about 34 different social groups. The experimental results consistently show that voters respond more strongly to economic information about their own group, especially when their group’s performance diverges from the national trend.

These findings help explain patterns of economic voting that don’t fit standard sociotropic models, such as why economically secure voters sometimes support populist movements, or why strong national growth doesn’t always translate into incumbent support. They also have implications for electoral accountability, suggesting that incumbents can build electoral support by favoring pivotal groups over national growth.

About the author: Christoffer Hentzer Dausgaard is a postdoctoral researcher in the Department of Political Science at the University of Copenhagen. Their research “You and whose economy? Group-based retrospection in economic voting” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Long-run confidence: Estimating uncertainty when using long-run multipliers

The forthcoming article “Long-run confidence: Estimating uncertainty when using long-run multipliers” by Mark David Nieman and David A. M. Peterson is summarized by the author(s) below.

Our paper tackles a longstanding problem in time series analysis: how to estimate uncertainty for the long-run effect of a predictor in a regression model that includes a lagged dependent variable. This is a pervasive challenge in political science, where time series are often short and the tests for ascertaining their properties underpowered. Conventional uncertainty estimates—essential for hypothesis testing—break down under such conditions.

We address this issue using a Bayesian estimator with a semi-informed prior that yields theoretically informed estimates of uncertainty even in short or noisy time series. We start by using a bounded, uniform prior for the estimated coefficient on the lagged DV. The semi-informed prior accommodates series of X and y with unclear dynamic properties by limiting the range of the coefficient on a lagged DV to its theoretical bounds for either stationary or integrated series. By giving equal density to the values between these bounds, however, the prior does not bias point estimates.

We then estimate the model via Markov chain Monte Carlo (MCMC). Using a sampling-based method like MCMC allows for direct estimation of the variance of the long-run multiplier, without requiring large sample sizes. This is made possible by exploiting a well-known property of MCMC methods, namely, that one can estimate and summarize the distribution of functions of parameters (e.g., ratios of coefficients) directly from the posterior distribution.
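As a minimal sketch of how this works in practice, consider the simple dynamic model y_t = α·y_{t−1} + β·x_t + ε_t, whose long-run multiplier is β/(1 − α). The PyMC code below simulates a short series, places the bounded uniform prior on the lagged-DV coefficient, and reads the full posterior of the multiplier directly off the MCMC draws; the simulated data, prior bounds, and priors on the other parameters are illustrative choices, not the paper’s exact specification.

    # Hedged sketch: bounded uniform prior on the lagged-DV coefficient,
    # with the long-run multiplier computed directly from the MCMC draws.
    # Simulated data and the other priors are illustrative choices.
    import numpy as np
    import pymc as pm
    import arviz as az

    rng = np.random.default_rng(42)
    T = 50  # a deliberately short series
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.7 * y[t - 1] + 0.5 * x[t] + rng.normal(scale=0.5)

    with pm.Model():
        # Semi-informed prior: restrict alpha to its theoretical bounds;
        # equal density within the bounds avoids biasing the point estimate.
        alpha = pm.Uniform("alpha", lower=0.0, upper=1.0)
        beta = pm.Normal("beta", mu=0.0, sigma=10.0)  # weakly informative
        sigma = pm.HalfNormal("sigma", sigma=1.0)

        mu = alpha * y[:-1] + beta * x[1:]
        pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y[1:])

        # The long-run multiplier is a function of parameters, so its full
        # posterior (and hence its uncertainty) comes straight from the
        # sampler, with no asymptotic approximation.
        pm.Deterministic("lrm", beta / (1.0 - alpha))

        idata = pm.sample(2000, tune=1000, chains=4, random_seed=42)

    print(az.summary(idata, var_names=["alpha", "beta", "lrm"]))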

Our proposed method leads to more accurate and reliable estimates of uncertainty than alternatives that rely on asymptotic assumptions that may not hold. Moreover, our framework requires minimal additional assumptions over existing approaches and is easy to estimate in most existing software. We highlight the advantages of this approach via Monte Carlo experiments and replicate several studies to show that our method clarifies long-run relationships that were inconclusive using existing techniques.

About the Author(s): Mark David Nieman is an Assistant Professor in the Department of Political Science and Trinity College at the University of Toronto, as well as an affiliate of the Data Sciences Institute, and David A. M. Peterson is the Lucken Professor of Political Science in the Department of Political Science at Iowa State University. Their research “Long-run confidence: Estimating uncertainty when using long-run multipliers” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Perversity, futility, complicity: Should democrats participate in autocratic elections?

The forthcoming article “Perversity, futility, complicity: Should democrats participate in autocratic elections?” by Zoltan Miklosi is summarized by the author below.

Multiparty, competitive elections are a hallmark of democracy. However, such elections are not unique to democracies. A growing number of countries around the world are described by political scientists as electoral autocracies. They are autocratic because they significantly curtail media freedom, weaken the independence of the judiciary, and apply the law unequally: government critics and opposition politicians are often prosecuted on frivolous grounds, while allies of the ruling party engage in large-scale corruption with impunity. But they are electoral autocracies because genuine opposition parties are allowed to compete in elections and sometimes even win. Still, autocratic elections are partially unfree and massively unfair: opposition candidates and activists often face physical and legal harassment and intimidation, while the ruling party freely uses the financial and administrative resources of the government.

Such regimes confront democrats with a dilemma. On the one hand, if they participate in autocratic elections as voters or candidates, they contribute to the false appearance of democracy and help autocrats claim democratic legitimacy. On the other hand, elections are often, though not always, the most effective tool for fostering democratic regime change, as I hope to show in the paper. Though rarely, autocrats do sometimes lose elections, as happened in Mexico in 2000, Malaysia in 2018, and Poland in 2023. If democrats decide to boycott elections, therefore, they give up what is often their best chance to defeat autocracy.

Here, I argue that democrats should usually participate, because that is often the least bad option. At the same time, I also argue that while in democracies elections are the only legitimate means of achieving a change of government, this is not so in autocracies. There, democrats are morally permitted to choose other strategies of challenging autocracy, such as boycott or resistance, and the alternatives ought to be assessed case by case in light of facts on the ground.

About the author: Zoltan Miklosi is an Associate Professor at Central European University. Their research “Perversity, futility, complicity: Should democrats participate in autocratic elections?” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

What political theory can learn from conceptual engineering: The case of “corruption”

The forthcoming article “What political theory can learn from conceptual engineering: The case of “corruption”” by Emanuela Ceva and Patrizia Pedrini is summarized by the author(s) below.

When people hear “corruption,” they picture bribes or embezzlement. But many practices that impair institutional functioning—clientelism, nepotism, or the tight coupling of politics to private money—do not reduce to simple trades of favors for cash.

Conceptual engineering offers a way to capture and assess this broader reality by deliberately refining the concepts we use. Instead of treating corruption merely as “the use of entrusted power for private gain,” recent studies in political theory have re-engineered the concept as a deficit of office accountability. In this view, officeholders exercise power under a mandate—judges to deliver impartial justice, ministers to serve the public, regulators to ensure fair competition. Corruption arises when the use of that power can no longer be justified with reference to the mandate. Relevant instances may include the misbehavior of some “bad apple,” such as a mayor appointing a cousin to a public post, as well as the flawed design of an entire system, as in campaign-finance arrangements that bind elected officials to major donors. Across such instances, corruption occurs as a break in the accountability chain that should link officeholders to their mandates—even without personal enrichment.

The accountability lens thus unifies individual wrongdoing and structural flaws. Nevertheless, its reach cannot simply be assumed. Mandates and accountability practices vary across institutional settings. In authoritarian polities, offices are often personalized and counter-powers weak; in private organizations (corporations, NGOs, sport federations), mandates are defined by institutional purposes that are not public in the democratic sense. To make the characterization of corruption relevant for such plural and complex contexts, further engineering of the concept can help through iterative specification. This means directing research efforts to build on the core idea (a deficit of office accountability) while tailoring it to the institutional context, rather than exporting a one-size-fits-all definition.

This matters analytically, normatively, and empirically. Analytically, it helps explain cases that standard definitions miss—for example, favoritism in public appointments or procurement inside an NGO where no bribe is paid, yet the use of institutional power cannot be vindicated by the mandate attached to it. Normatively, it shifts anticorruption policy from a focus on criminal sanctions to strengthening accountability practices: mutual supervision among officeholders, deliberative engagement with decision rationales, protection of whistleblowers, and enhanced responsibility in lobbying and financing. Finally, empirically, it suggests that corruption indicators need to catch up. Measures centered on visible abuses (e.g., bribery perceptions) risk understating corruption where the primary problem is systemic accountability failure. More informative metrics would track the quality of accountability practices themselves.

While conceptual engineering alone cannot settle the debate, it holds significant promise for reframing the debate around office accountability and for carving out space for the concrete methodological contribution that political theory can make to corruption studies and beyond.

About the Author(s): Emanuela Ceva is a Professor of Political Theory in the Department of Political Science and International Relations at the University of Geneva and Patrizia Pedrini is a Senior Researcher at the University of Geneva. Their research “What political theory can learn from conceptual engineering: The case of “corruption”” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Seeing like a citizen: Experimental evidence on how empowerment affects engagement with the state

The forthcoming article “Seeing like a citizen: Experimental evidence on how empowerment affects engagement with the state” by Soeren J. Henn, Laura Paler, Wilson Prichard, Cyrus Samii, and Raúl Sánchez de la Sierra is summarized by the author(s) below.

Our study from the Democratic Republic of Congo reveals a counterintuitive truth: when citizens are empowered to stand up to corrupt officials, they actually end up paying more taxes and fees to the government—not less.

We worked with households and small businesses in Kinshasa to test two approaches to citizen empowerment. Some received weekly phone consultations providing information about what they legally owed for various government services. Others were connected to a powerful civil society organization that could advocate on their behalf against predatory officials demanding bribes.

The results challenge conventional wisdom. Rather than using this newfound power to avoid the state entirely, empowered citizens—particularly those with protection—increased their formal payments to the government by about one-third. They started paying for services they had previously avoided, like electricity connections and business licenses. 

Why would protection from corruption lead to more government payments? The answer lies in understanding the vicious cycle many developing countries face. When citizens expect to be shaken down for bribes, they avoid government services altogether. They stay in the shadows, forgoing benefits like legal protections, official documents, and public services. This creates what researchers call a “low revenue, low engagement equilibrium”—the state collects little revenue and provides few services, while citizens remain disconnected and vulnerable.

By reducing the threat of extortion, the protection intervention made citizens more willing to engage with the state formally. They could access government services without fear of unlimited informal demands. The intervention was especially effective for households and for services that were highly negotiable or uncertain in price. 

This research offers hope for breaking the cycle of weak states and disengaged citizens. It suggests that strengthening civil society and empowering citizens doesn’t undermine government revenue—it can actually enhance it by bringing more people into the formal system. The path to stronger, more accountable government may start with ensuring citizens can engage with the state on fair terms.

About the Author(s): Soeren J. Henn is an Assistant Professor in Political Science at the University of Wisconsin-Madison, Laura Paler is a Provost Associate Professor in the Department of Government at American University’s School of Public Affairs, Wilson Prichard is an Associate Professor of Global Affairs and Political Science at the University of Toronto, Cyrus Samii is a Professor of Politics at New York University, and Raúl Sánchez de la Sierra is an Associate Professor at the University of Chicago Harris School of Public Policy. Their research “Seeing like a citizen: Experimental evidence on how empowerment affects engagement with the state” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

Reviewing fast or slow: A theory of summary reversal in the judicial hierarchy

The forthcoming article “Reviewing fast or slow: A theory of summary reversal in the judicial hierarchy” by Alexander V. Hirsch, Jonathan P. Kastellec, and Anthony R. Taboni is summarized by the author(s) below.

In recent years, a debate has emerged about the U.S. Supreme Court’s use of its “shadow docket,” which generally refers to cases in which the Supreme Court acts without the benefit of full briefing, oral arguments, and signed opinions. Many critics of the shadow docket have argued that the Court’s institutional performance suffers when it decides cases too rapidly. This concern has even been raised by some of the justices themselves; for example, dissenting in a 2025 shadow docket decision regarding the Trump administration’s termination of federal education grants, Justice Elena Kagan wrote, “The risk of error increases when this Court decides cases—as here—with barebones briefing, no argument, and scarce time for reflection.”

In this article we focus on one tool in the shadow docket arsenal through which the Court operates in a mode of “quick review”: summary reversal, when the Court reverses a lower court without written briefs on the merits or full arguments. Summary reversal stands in contrast to the “full review” that the Court undertakes when it holds oral arguments, deliberates over several months, and then provides full written opinions (often with concurrences and dissents).  We develop a formal model that evaluates the tradeoffs between quick review and full review in the judicial hierarchy.

The model shows how access to summary reversal creates both benefits and costs for the Supreme Court. On the benefits side, the possibility of summary reversal causes ideologically distant lower courts to comply more often; as a result, summary reversal can generate additional compliance on top of what is gained from full review. On the other hand, having summary reversal available imposes a subtle cost on the higher court (and the hierarchy as a whole): sometimes, a better-informed lower court that is ideologically aligned with the Supreme Court will choose a disposition with which neither court agrees in order to avoid the risk of being summarily reversed. This result—which we can think of as “pandering” by lower court judges—means that, somewhat counterintuitively, being able to summarily reverse lower courts can actually make the Supreme Court worse off than if it were obligated to engage in full review.

Collectively, these results have important implications for understanding the use and consequences of summary reversals by the Supreme Court, and point towards a broader theoretical understanding of the importance of the shadow docket.

About the Author(s): Alexander V. Hirsch is a Professor of Political Science at the California Institute of Technology, Jonathan P. Kastellec is a Professor in the Department of Politics at Princeton University, and Anthony R. Taboni is a post-doctoral research fellow in the Department of Government at the University of Texas at Austin. Their research “Reviewing fast or slow: A theory of summary reversal in the judicial hierarchy” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

An ecclesiastical court: Christian nationalism and perceptions of the US Supreme Court

The forthcoming article “An ecclesiastical court: Christian nationalism and perceptions of the US Supreme Court” by Miles T. Armaly, Jonathan M. King, Elizabeth A. Lane, and Jessica A. Schoenherr is summarized by the author(s) below.

In recent years, the term “Christian nationalism” – previously relegated to academic and activist circles – has entered the mainstream political lexicon. Christian nationalism is an ideology that blends the belief that the United States was founded as a Christian nation with the notion that its laws should reflect and protect Christian values and ways of life. This ideology is not simply about personal faith; it is a political vision that calls for embedding Christianity more deeply into public life and the law.

As Christian nationalism has gained traction, many argue that its influence on American institutions and politicians has grown. The U.S. Supreme Court has increasingly issued rulings that align with the values of Christian nationalists, most notably in decisions like Kennedy v. Bremerton School District (i.e., the football coach prayer case) and Dobbs v. Jackson Women’s Health Organization. Court observers have also raised concerns that certain justices are demonstrating not just a personal religious faith, but a judicial philosophy sympathetic to Christian nationalism.

With the Court’s public religious profile as a backdrop, our study investigates how Christian nationalism shapes attitudes toward the Supreme Court and its decisions. Using both observational and experimental approaches across two large, nationally representative samples, we examined three main questions:

  1. Do Christian nationalists support the Court’s decision to overturn abortion rights? 
  2. Are Christian nationalists more likely to agree with the use of religious and non-legal reasoning in Court decisions? 
  3. Does seeing a justice associated with Christian nationalist symbols increase support for religious reasoning in the law?

The answer to all three questions is yes. Observationally, individuals who score high on our measure of Christian nationalism were significantly more likely to support the Dobbs decision. They were also more likely to endorse the idea that justices should rely on religious and other non-legal decision-making factors, as opposed to strictly legal reasoning, when deciding cases.

Experimentally, we tested the effect of Christian nationalist symbols. Exposure to real-life incidents – one where Justice Alito was caught on tape agreeing that we must “return the country to a place of godliness” and another where he flew an ‘Appeal to Heaven’ flag at his vacation home – increased support for the idea that religious logic is acceptable in Supreme Court decisions, especially among those not already sympathetic to Christian nationalism. As Christian nationalism becomes more visible in American politics, it also becomes a more powerful legitimating force for a particular kind of jurisprudence that blurs the lines between church and state.

The implications are profound. If more Americans come to see the Court as aligned with a specific religious-political ideology, its perceived legitimacy may become polarized along those lines. Any skepticism about the Court’s status as a neutral arbiter of the law in a pluralistic society may deepen existing rifts. The intertwining of Christian nationalism and judicial authority, as well as the public’s reaction to that intertwining, raises urgent questions about the future of American democracy, constitutional interpretation, and the place of religion in American society.

About the Author(s): Miles T. Armaly is an Associate Professor of Political Science at the University of Mississippi, Jonathan M. King is an Assistant Professor in Political Science at the University of Georgia, Elizabeth A. Lane is an Assistant Professor of Political Science at North Carolina State University, and Jessica A. Schoenherr is an Assistant Professor in the Department of Political Science at the University of Georgia. Their research “An ecclesiastical court: Christian nationalism and perceptions of the US Supreme Court” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

The public agglomeration effect: Urban–rural divisions in government efficiency and political preferences

The forthcoming article “The public agglomeration effect: Urban–rural divisions in government efficiency and political preferences” by Theo Serlin is summarized by the author below.

Why do cities vote for the left? This pattern, which holds in almost all economically developed democracies, is puzzling, given that urban voters on average have higher incomes and so should stand to lose in relative terms from left-wing economic policies. Typical explanations, especially in the US, focus on the role of cultural issues. Rural voters are more likely to be white, Christian, and socially conservative, and vote Republican because of the parties’ stances on issues around race and abortion. While these factors are important for the urban-rural divide now, they can’t explain its chronology. In the US, the urban-rural divide emerged in the 1930s, before the Republican party was especially popular among white, Christian, or racially conservative voters.

I introduce an alternative explanation for the urban-rural divide: agglomeration effects. Much research in urban and spatial economics finds that businesses are more productive in cities. The forces behind that phenomenon apply all the more to public-sector provision. The public sector has natural economies of scale due to fixed administrative costs and nonrivalry. If the public sector is more productive in urban areas, the tradeoffs that urban and rural voters face between taxation and government provision are different. Urban voters receive more valuable government services for a given dollar of taxation. We would expect them to be more willing to accept higher taxes in return for more public services.
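To see the scale logic in a stylized example (the functional form and figures here are illustrative, not drawn from the paper): if providing a service requires a fixed administrative cost F plus a variable cost v per resident, the per-capita cost in a jurisdiction of N residents is c(N) = F/N + v, which falls as N grows. With F = $10 million and v = $100, a town of 10,000 residents pays $1,100 per head for the same service that costs a city of 1,000,000 residents just $110 per head. Urban voters therefore get more provision per tax dollar.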

If this factor explains urban-rural divides in voting, we should only observe cities voting for the left when questions about the size of government divide left from right. This fits the timing of the emergence of the urban-rural divide in the US. In 1932, the main issues dividing the parties were prohibition and the tariff, and there was no clear urban-rural divide in voting. In 1936, the New Deal had reoriented US politics around government spending; cities moved into the Democratic camp. I also develop a number of measures of government efficiency and show that counties with more productive governments moved towards the Democrats as the parties diverged on redistribution. Surveys from the era show that urban voters were more supportive of higher taxes and the New Deal, making it more plausible that the emergent urban-rural divide was driven by this mechanism.

The US is not unique in having an urban-rural political divide. In the UK, the timing of the emergence of the divide also fits with this mechanism. There was no clear relationship between urbanization and left-wing voting at the turn of the 20th century. By the 1930s, when Labour had replaced the Liberals as the main left-of-center party, urbanization strongly correlated with vote choice. In Canada, the urban-rural divide came into being in the 1960s, when the Liberals moved left and set up the single-payer healthcare system. Around the world, the left-right urban-rural divide is a feature of economically developed democracies with broadly programmatic politics, where the size of government is a major component of the left-right divide. Greater government efficiency in urban areas due to economies of scale and nonrivalry alters preferences for government spending and creates political cleavages.

About the author: Theo Serlin is a Lecturer (Assistant Professor) in the Department of Political Economy at King’s College London. Their research “The public agglomeration effect: Urban–rural divisions in government efficiency and political preferences” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.

 

The American Journal of Political Science (AJPS) is the flagship journal of the Midwest Political Science Association and is published by Wiley.