The forthcoming article “Classification algorithms and social outcomes” by Elizabeth Maggie Penn and John W. Patty is summarized by the author(s) below.
Beyond Labels: How Algorithms Really Shape Our Lives (and Our Behavior)
From credit scores and job applications to healthcare and even where police decide to patrol, algorithms influence everyone’s lives. Some see these systems as neutral tools that simply categorize, or “classify,” things: whether you’re creditworthy, say, or a flight risk. In “Classification algorithms and social outcomes,” we present a theoretical analysis of what happens when algorithms don’t just classify people but also change how people behave.
For example, imagine a student preparing for a test. The test is designed to measure college readiness, but if the student believes a high score will get them into a good college, they will likely “study for the test.” Similarly, if an algorithm is designed to detect fraud, people might commit less fraud, even if deterrence wasn’t the algorithm’s stated goal. This is the core idea: people adjust their life choices based on how they expect an algorithm to classify them and reward or punish them.
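This behavioral response can be illustrated with a toy model (our own sketch for intuition, not the paper’s formal setup): an individual chooses a costly effort level, a classifier observes effort plus Gaussian noise, and a reward is paid when the noisy score clears a cutoff. The individual then picks the effort that maximizes expected reward minus cost. The function names and parameter values here are all illustrative assumptions.

```python
from statistics import NormalDist

def pass_probability(effort, threshold, noise_sd=1.0):
    """Chance that a noisy score (effort plus Gaussian noise) clears the cutoff."""
    return 1.0 - NormalDist(mu=effort, sigma=noise_sd).cdf(threshold)

def best_effort(threshold, reward=1.0, cost_per_unit=0.25):
    """Effort level (on a coarse grid) maximizing expected reward minus effort cost."""
    grid = [i / 10 for i in range(0, 51)]  # effort from 0.0 to 5.0
    return max(grid, key=lambda e: reward * pass_probability(e, threshold) - cost_per_unit * e)

# A reachable cutoff elicits effort; an out-of-reach one discourages effort entirely.
print(best_effort(threshold=1.0))  # optimal effort well above zero
print(best_effort(threshold=3.0))  # optimal effort collapses to zero
```

Even in this stripped-down sketch, the same “accurate” rule can encourage effort for some parameter values and extinguish it for others, which previews the discouragement effect discussed below.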
This social scientific perspective on algorithmic design leads to some surprising and critical insights:
- Accuracy Isn’t Always Neutral or Fair: One might assume an algorithm designed purely to be “perfectly accurate” would be “fair.” Our analysis shows that even with highly accurate data, an accuracy-maximizing algorithm can actually worsen inequality, pushing some groups toward desired behaviors (like obeying the law) while actively discouraging those behaviors in others.
- Algorithms Might Deliberately “Introduce Noise”: Counterintuitively, an algorithm designed to be accurate might, by design, make seemingly “inaccurate” decisions or reward people probabilistically. This “noise” isn’t a flaw; it’s a strategic choice that lets the algorithm shape behavior more effectively.
- A More Punitive Designer Can Sometimes Be Better (for Individuals!): Another surprising finding is that increasing an algorithm’s payoff for punishing individuals (as with a ticketing algorithm) may actually make all of the individuals being classified better off in expectation. Because the algorithm now seeks to penalize more, it may end up making non-compliance more appealing, a trade-off that all individuals might prefer.
This research highlights that traditional ideas of “algorithmic fairness,” which often focus on statistical parity in classification errors, might be incomplete. When people’s behavior changes in response to an algorithm (a phenomenon known as “performativity”), we need new ways to think about fairness. We offer one, which we call “aligned incentives”: it is satisfied by any algorithm that offers individuals in different groups similar behavioral incentives. Finally, we believe our findings have implications for public policy in areas such as:
- Housing and Lending: Algorithms influencing who gets housing or credit can shape financial decisions.
- Hiring: AI in hiring affects not just who gets jobs, but also who applies.
- College Admissions: Algorithms here can influence student preparation and application strategies.
- Tax Audits: The IRS’s algorithms, designed to detect under-reporting, also deter fraud, showcasing the tension between accuracy and behavioral goals.
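One loose way to see why incentives can fail to be aligned, using the same toy setup as above (again, our own illustrative operationalization, not the paper’s formal definition): apply one fixed cutoff to two groups whose scores are measured with different amounts of noise, and compare how much complying improves each group’s chance of a favorable classification.

```python
from statistics import NormalDist

def compliance_lift(threshold, noise_sd):
    """How much complying (true score 1) versus not complying (true score 0)
    raises the chance of a favorable classification under a fixed cutoff."""
    def clears(mean):
        return 1.0 - NormalDist(mu=mean, sigma=noise_sd).cdf(threshold)
    return clears(1.0) - clears(0.0)

# The same rule, applied to a precisely measured group and a noisily
# measured group, produces very different incentives to comply.
print(round(compliance_lift(threshold=0.5, noise_sd=0.5), 3))
print(round(compliance_lift(threshold=0.5, noise_sd=2.0), 3))
```

Under these assumed parameters the noisily measured group gains far less from complying, so a rule that looks identical on paper offers the two groups very different behavioral incentives.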
Ultimately, we argue that, to truly understand the impact of algorithms, we must look beyond their technical definitions and consider the goals of those who design them and the resulting, sometimes unexpected, behavioral consequences for society. It’s a call to broaden our perspective on both how algorithms shape our world and how to evaluate their fairness.
About the Author(s): Elizabeth Maggie Penn is a Professor of Political Science and Quantitative Theory and Methods at Emory University, and John W. Patty is a Professor of Political Science and Data & Decision Sciences at Emory University. Their research “Classification algorithms and social outcomes” is now available in Early View and will appear in a forthcoming issue of the American Journal of Political Science.