Can You Be Wrong About How Happy You Are?

I accept a more or less functionalist account of the mind, according to which mental states are individuated by their functional role in the economy of cognition and behavior. I also believe in the possibility of what is sometimes called the “Cartesian Fallacy”: the mistaken assumption that our own mental states are transparently accessible to consciousness. Functionalism together with anti-Cartesianism about introspective access implies that we may not know what words in our language mean, even if we use them correctly, and that we may have false beliefs about what we believe.

I've brought this up before in an earlier discussion of “meta-atheism,” [pdf] roughly the idea that people may sincerely believe that they believe in God when in fact they do not. The disposition to avow a belief that P is neither necessary nor sufficient for believing that P. Actually believing P requires not only that one be generally disposed to say that one does, but also that one be disposed to make certain inferences, to behave in certain ways, and more. This raises the possibility that people may sincerely believe that they are happy or unhappy when they actually aren't.

It is hard to believe that one could make a mistake about whether one was in a state of pain, say. But happiness probably isn't like that. If happiness is a complex, partly historically and socially constructed condition composed of dispositions to experience certain basic emotions and moods in a distinctive combination, together with dispositions to have certain thoughts and to behave in certain ways, then it is pretty plausible that we could just be wrong about whether we are happy, or about how happy we are.

I don't think I want to press this view very hard, but it strikes me as a real possibility, and another reason why self-report is not the most promising technique for measuring happiness.

Analytical Egalitarianism

I've been following Bryan Caplan's posts on analytical egalitarianism with interest, and I agree with him: the point of a model is to track truth, while enforcing moral and epistemic norms is the function of institutions and culture. I'd like to point out an additional worry I have about the same bits of Sandra Peart's reply that Bryan excerpted:

Without AE, we wonder whether truth-seeking is incentive compatible…

[…]

Consider models with agents of different fixed types. Suppose a modeler proposes to pick who is in the “better” and who in the “worse” class. If the modeler can do this and policies follow from the exercise, the modeler may benefit. That’s one incentive issue. We consider rewards from both material sources and applause.

[…]

[O]nce we allow for difference to creep into the analysis, the incentives are asymmetric: the theorist gains more by showing difference than similarity.

Isn't there a weird kind of question-begging going on here? The incentive-compatibility of truth-seeking depends on how you model it. If the model recognizes heterogeneity of motivation, then one can say that some theorists will opportunistically abuse models that allow for difference, but others won't. Peart's argument for AE thus turns on assuming AE in her implicit model of the theorist's motivation. If AE assumptions about motivation are false, the moral and/or epistemic argument for assuming them anyway falls apart. That said, even assuming rational-choice homogeneity of motivation, institutions and the internalization of social norms can raise the price of defecting from truth-oriented epistemic norms, making truth-seeking incentive compatible.

In one of their papers, Peart and Levy point out the anti-AE views of Edgeworth in Mathematical Psychics, which fascinated me, as it illustrates the fallacy of thinking utilitarianism is an egalitarian doctrine in which everyone is equal because everyone's pains and pleasures count equally. What counts equally are equal units of pleasure and pain. If I remember my reading of Edgeworth correctly, his fundamental unit of analysis is the smallest discernible duration of experience. Each such moment of experience has an intensity on a pain/pleasure scale. Now, Edgeworth claimed that some races have a narrower hedonic range than others. This inequality in hedonic range can easily justify all kinds of odious injustice. As Edgeworth himself says, if you've got lamps that shine bright and lamps that shine dim, and a limited amount of energy, you'll maximize light by reserving it all for the bright lamps.
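To make the lamp logic explicit, here is a minimal formalization of the allocation argument; the notation is mine, not Edgeworth's. Suppose person $i$ converts resources into pleasure at a fixed rate $b_i$ (their hedonic “brightness”), and a total resource budget $E$ is to be divided into shares $e_i$:

\[
\max_{e_1,\dots,e_n} \; \sum_{i=1}^{n} b_i e_i
\quad \text{subject to} \quad \sum_{i=1}^{n} e_i = E, \quad e_i \ge 0.
\]

Because the objective is linear, the sum of pleasure is maximized by giving the entire budget to whoever has the largest $b_i$ and nothing to everyone else. (Under the more realistic assumption of diminishing returns, the conclusion softens but doesn't reverse: equalizing marginal pleasure across persons still directs more resources to those with greater hedonic capacity.)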

Now, as I see it, the problem isn't Edgeworth's violation of AE, but his vulgar utilitarianism. Whether hedonic capacity is in fact unequal is an empirical matter. It may be. But a satisfactory moral theory or theory of justice ought to be indifferent to this particular empirical contingency. That Edgeworth's theory isn't indifferent (indeed, that massive morally relevant consequences turn on it) refutes his normative theory. But if heterogeneity of hedonic capacity is a fact, it's a fact, and a good model of hedonic capacity will say so.