Dialogue on Rationality

— At the Social Change Workshop for Graduate Students, a conference I ran a couple weeks ago, John Tomasi and I did a little workshop session called “Ideal Justice, Real Institutions” about the constraints that social-scientific evidence about the possibilities for social organization places on theorizing about justice. It was a good time. John and I were really just thinking out loud with the workshop participants. And it really got my juices flowing. Some further reflection led me to consider just how a moral philosopher (untainted by hallway conversations with Gordon Tullock) might react to a social scientist's misgivings about philosophical theorizing about the best society. The following (hastily composed) dialogue encapsulates a good bit of my reflection.

Comments, please!

---

Political Philosopher (PP): Behold! Here is my picture of the best social order!

Social Scientist (SS): Well, it's unrealizable. [Throws _The Calculus of Consent_ on the table.] Look, you're assuming perfect voluntary compliance/costless enforcement, and perfect alignment of incentives between agents of the state and the citizens they are supposed to serve. The social world doesn't work like that. Your scheme isn't obviously utopian, but you can't get there from here.

PP: SS, your conclusions about the untenability of my ideal society assume a conception of rationality that is both false and degrading. And you're simply missing the point. The IDEA of normative political theory is to create a picture of what society could look like if we behaved BETTER. Why should I accept the assumption that we're all going to behave badly? Nobody ever said justice would be easy.

SS: I don't think you quite understand. OK, OK. There is no homo economicus. Rational choice ain't so rational. Whatever. But I don't have to assume a strict homo economicus model of behavior to get all the problems I mention (and I could mention more). All you need is to understand that people do not at all times (if ever) deliberate over their alternatives in the guise of a citizen of the Kingdom of Ends. All I need for my conclusions is the fact that each person sometimes makes choices in the guise of, say, mother or son or employee or friend or artist or businessman, etc., and that a great many people are making choices on the basis of these kinds of practical identity at any given time. And although each person may in principle endorse your conception of justice, their motivation as individuals trying to realize their life-plans and capacities… to do the best they can for their children, to fulfill their obligations as an employee, or whatever, is often in conflict with the motivation they would need to act upon if your ideal were to be realized.

PP: OK. You sound a lot like a philosopher for a social scientist. But I still think you're assuming too much. Let me try to sound like a social scientist. Suppose we're in a society that is deeply impoverished, deeply corrupt, deeply distrusting, and also violent and dangerous. Sadly, such societies are all too common. In such a society it would be rational, in your terms, to place a very high “discount rate” on expected future benefits, because the future is so uncertain, others are so unreliable, and there is no systematic assurance that I will ever see benefits from cooperation or collective action. Even if we all badly need certain public goods to be provided, it will be “rational” for each of us to take $10 from the treasury today, thus draining the treasury, rather than leave the $10 in the treasury as part of the pool of funds that could provide us all with public goods worth many, many times $10 to each of us. Now, this description seems true enough to life. And so perhaps you economists are right and the _explanation_ here is a superduper high discount rate. But is this a moral JUSTIFICATION of the discount rate? Is this really supposed to constrain my theorizing about justice for this society? Do I have to assume right at the beginning that cooperation or collective action is impossible? The question is: SHOULD people have such a high discount rate? Wouldn't they all in fact be better off if they trusted each other more? So SHOULDN'T they be more trusting and cooperative? The point is that morality is precisely what overcomes the constraints of rationality, in your cool sense of rationality. I mean, if I think like you, it would seem that we're stuck in a sort of basin here without the momentum to get over the lip. That's WHY people need to behave morally. That's the point of my theory: to show us where we can get if we behave better.
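(An aside on PP's arithmetic. Here is a minimal sketch of the treasury case, with everything beyond the $10 invented for illustration: I assume the public good is worth some benefit to each citizen, arrives only after a delay, and is provided only if everyone leaves their money in. The point is just that with heavy discounting and low trust, taking the $10 is the "rational" move even when the good is worth many times that.)

```python
# A toy version of PP's treasury case. Everything here is an invented
# assumption for illustration: the public good is worth `benefit` to each
# citizen, arrives after `horizon` periods, and is provided only if all
# `n_citizens` leave their $10 in the treasury; `delta` is the per-period
# discount factor and `trust` is the believed chance that any one other
# citizen cooperates.

def cooperating_pays(n_citizens, benefit, delta, horizon, trust):
    """True if leaving $10 in the treasury beats taking it out today."""
    p_everyone_else_cooperates = trust ** (n_citizens - 1)
    expected_value_of_waiting = (delta ** horizon) * benefit * p_everyone_else_cooperates
    return expected_value_of_waiting > 10

# A distrustful, heavily discounting society: defection is "rational"
# even though the good is worth 20x the contribution.
print(cooperating_pays(n_citizens=50, benefit=200, delta=0.7, horizon=5, trust=0.9))    # False

# A trusting, patient society facing the very same payoffs:
print(cooperating_pays(n_citizens=50, benefit=200, delta=0.98, horizon=5, trust=0.995)) # True
```

On these invented numbers, the low-trust, impatient society defects and the high-trust, patient one cooperates. PP's "basin" is the region of parameter space where no individually rational move gets the group over the lip.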

SS: OK. I see where you're coming from. But ought implies can, right? Is there in fact some Rawls-flavored “sense of justice” that disposes people to comply with the principles of justice once they come to rationally (in your moral philosopher's sense of 'rationally') endorse them? Can they just take a leap of faith to trust and cooperation once they see the morality of it? I don't think you can just stipulate this, any more than I can just stipulate utility maximization as the principle of choice. In order to have a constructive conversation, I think we need to come up with a characterization of rationality that is not blithely descriptive. I agree that your moral theorizing should not be so tightly constrained by regularities of behavior that may be a consequence of people acting worse than they could be acting. But we need a characterization that is also not dreamily aspirational. We need to know HOW good people can be, and under what conditions they WILL be good. The experimentalists show us that people are in fact more cooperative than rational choice led us to expect. So we can't just assume defection in coordination games. And that gets us SOMEWHERE. But cooperation and trust in these games are context-sensitive. They depend on the way the game is structured (logically identical games aren't necessarily played identically), and on how people represent the games they're playing. Perhaps “morality” is a kind of lens through which to represent games, such that people seeing the game in this way will commit to the cooperative outcomes and enable larger cooperative surpluses. But then we've just pushed the question back to a cognitive problem. Given the de facto social psychology of a people, is there a psychological/cultural route to a schema of representation, say, a moral schema, that will facilitate trust and cooperation? Or are there cognitive and cultural path dependencies that limit the range of feasible representational schemas we can get to from our starting point?
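(SS's point about context-sensitivity fits a standard coordination game. The sketch below uses a stag hunt with invented payoffs: cooperating is best if you are confident the other player will cooperate, and defecting is best otherwise, so "rational" play turns entirely on beliefs, that is, on how the players represent the game.)

```python
# A stag hunt with invented payoffs. There are two equilibria (both hunt
# stag, both hunt hare), and which one "rational" players land on depends
# on what each believes about the other.

# Row player's payoff, indexed by (my_move, their_move).
PAYOFF = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 3, ("hare", "hare"): 3,
}

def best_response(p_other_hunts_stag):
    """Best move given my belief that the other player hunts stag."""
    p = p_other_hunts_stag
    ev_stag = p * PAYOFF[("stag", "stag")] + (1 - p) * PAYOFF[("stag", "hare")]
    ev_hare = p * PAYOFF[("hare", "stag")] + (1 - p) * PAYOFF[("hare", "hare")]
    return "stag" if ev_stag > ev_hare else "hare"

# Logically the same game, but different beliefs yield different play:
print(best_response(0.9))  # stag: high trust sustains cooperation
print(best_response(0.5))  # hare: low trust makes defection rational
```

With these payoffs the threshold belief is 0.75: unless each player is quite sure the other will hunt stag, hare is the best response. A moral schema, on SS's suggestion, would be whatever gets a population's beliefs past that threshold, and the open question is whether there is a cultural path to it from where the population actually starts.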

PP: Well, I'm not sure. But I'm inclined to assume that persons have a fundamental nature as moral beings that enables them to take up the moral point of view at any time. This is constitutive of our moral personhood.

SS: Well, I'm not inclined to assume this. The moral point of view strikes me as a contingent and conventional cultural achievement.

PP: I hope you're wrong.

SS: Me too.