Introduction
Among my many sins, I have lied. I have told white lies and black lies, half-lies and whole lies, pathetic little lies and truly dark, evil lies. In pretty much every case, I should not have done it. This post is not an attempt to "muddy the waters about the concept of lying" and thereby to exonerate myself of anything I've ever said and done—honestly. Lies weaken friendships, unravel families, and launch wars. I am against lying.
With that out of the way: I don't think the concept of lying is as simple as it first appears. And I think that there's a lot to be learned by exploring it further.
_____
Philosophers, like the rest of us, traditionally define a lie as something along the lines of “making a statement believed to be false, with the intention of getting another to accept it as true”. Since this definition is both contestable and somewhat imprecise, various philosophers have offered alternative definitions. As always, the Stanford Encyclopedia of Philosophy has a decent summary.
All the definitions on the SEP, however, and all that I have come across from philosophers on my own, suffer from what seems to me to be a deep confusion about the idea of the “true meaning of a statement”. By unpacking what exactly a lie is, therefore, we may gain clearer intuitions about communication in general.
Furthermore, the above confusion seems to underlie much—perhaps most—of the reasoning given in support of Kantian ethics. So exploring lying may also shed some light on the reasons for disagreement between consequentialists and Kantians more generally—and even, perhaps, on the reasons why there is such widespread ongoing disagreement in moral philosophy.
1. Strategic Information Transmission
There is an insight about communication that comes naturally to mind on reading some of the relevant game theory. Consider, for example, Crawford and Sobel (1982), and the subsequent economics literature on information transmission. This is the standard setup:
There are two common-knowledge-rational people who share a common prior over each other’s utility functions and over the states of the world. Person 1 receives a private signal carrying some valuable information about the world. Then, 1 gets to send 2 a “message”. Finally, after learning what there is to learn from the message, 2 chooses an act so as to maximize her own expected utility.
The “messages”, here, are just elements of a nondescript “message space”. Perhaps it is best to think of them as numbers: Message #1, Message #2, and so on. For the messages to mean anything in 2’s ears, therefore, it must be established in advance which message 1 will send—or, since 2 doesn’t perfectly know 1’s preference-type, the probability with which 1 will send each available message—conditional on receiving each signal.
If 2’s utility function is the same as 1’s, 1 will send as precise a message as possible. That is, if the message space is at least as big as the signal space, 1 will assign a unique message to each signal he could possibly receive. That way, when 2 uses the information to maximize her own utility, she’ll be maximizing 1’s as well.
If 2’s utility function is the negative of 1’s, 1 will send as uninformative a message as possible. That is, he will assign the same message to any signal he might receive. This is because, if there were any mutual information between the message and the signal, 2 would learn something from the message, and she would use this information to increase her expected utility, which would decrease 1’s.
The interesting cases are in between. If the utility functions are similar but not identical, 1 will partition the signal space somewhat coarsely, and assign each partition element a single message. That is, he will reveal some but not all of the information he receives. The message he sends “means what it means”—that is, 2 will accurately infer from it that the signal is in the corresponding partition element p—because the act that maximizes 2’s expected utility, conditional on the signal being in p, will give 1 higher utility, given each signal in p, than the act that would maximize 2’s EU conditional on the signal being in any other partition element.
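For concreteness, here is a minimal sketch in Python of the best-known special case of this model, the “uniform-quadratic” example from Crawford and Sobel’s paper (state uniform on [0, 1], 2’s utility −(a − θ)², 1’s utility −(a − θ − b)² for some bias b), showing how the equilibrium partition coarsens as the utility functions pull apart:

```python
def cs_partition(b, N=None):
    """Equilibrium partition boundaries in the uniform-quadratic
    Crawford-Sobel model: state theta ~ U[0,1], receiver utility
    -(a - theta)^2, sender utility -(a - theta - b)^2, bias b > 0.

    Boundary points satisfy t_{i+1} = 2*t_i - t_{i-1} + 4*b, which
    gives t_i = i*t_1 + 2*i*(i-1)*b with t_0 = 0 and t_N = 1.
    """
    if N is None:
        # The most informative equilibrium uses the largest N
        # satisfying 2*N*(N-1)*b < 1.
        N = 1
        while 2 * (N + 1) * N * b < 1:
            N += 1
    t1 = (1 - 2 * N * (N - 1) * b) / N
    return [i * t1 + 2 * i * (i - 1) * b for i in range(N + 1)]

# Nearly aligned preferences: a fine partition (seven distinct messages).
print(cs_partition(0.01))
# More conflict: only two messages survive, roughly "low" and "high".
print(cs_partition(0.1))   # [0.0, 0.3, 1.0], up to float noise
# Preferences too far apart: the unique equilibrium is uninformative.
print(cs_partition(0.3))   # [0.0, 1.0]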
_____
In some sense, this is all just a long-winded way of saying that 1’s “signal-to-message” function must be incentive compatible. But it also points to a more profound insight about language in general. This insight, rather than the actual content of any theorems about strategic information transmission, is what I want to dwell on.
In most contexts, I am speaking English. That is, when my friend has a crayon and a pencil and I ask for the “crayon”, the friend accurately infers that I want the crayon, because we have somehow pre-established a “signal-to-message” function (i.e. the English language) that pairs my internal information that I want a stick of colored wax with the message “crayon” and my internal information that I want a stick of wood and graphite with the message “pencil”. But when I am in a French-speaking context, the friend will accurately infer from my request for a “crayon” that I want a pencil (since the French word for pencil is “crayon”). He can infer this because he knows that I know that if I ask for a “crayon”, a pencil is what I’ll be getting.
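One way to picture this: the “signal-to-message” function is a codebook fixed by context, and the listener decodes by inverting it. A toy sketch (the French word for a wax crayon is a stand-in; only the “crayon”-means-pencil mapping comes from the example above):

```python
# Context-dependent codebooks pairing internal signals with messages.
codebooks = {
    "English": {"stick of colored wax": "crayon",
                "stick of wood and graphite": "pencil"},
    "French":  {"stick of colored wax": "pastel",   # placeholder word
                "stick of wood and graphite": "crayon"},
}

def decode(context, message):
    """The listener inverts the pre-established codebook to recover
    the speaker's signal from the message."""
    return next(signal for signal, msg in codebooks[context].items()
                if msg == message)

print(decode("English", "crayon"))  # stick of colored wax
print(decode("French", "crayon"))   # stick of wood and graphite
```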
Likewise, in most contexts, the conversation is fully collaborative, so I make noises with my mouth that reveal what I know as fully as possible. But in somewhat adversarial contexts, I make noises that reveal only a coarse partition element of what I know. For instance, suppose someone wants feedback on her artwork, and it is plain that I want to give her some constructive feedback without enduring the awkwardness of potentially offending her. I might say “it’s incredible” if it’s incredible, and “it’s good” if it is anything less than incredible. That is, I may want to send Message #1 if I receive Signal #1, but send Message #3 whether I receive Signal #2, #3, or #4. And so, at least in principle, the artist, on receiving Message #3, will infer that the true signal was #2, #3, or #4, in proportion to their prior probabilities.
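At least in this stylized version of the feedback game, the artist’s inference is an ordinary Bayes update over the pooled signals. A minimal sketch, with hypothetical prior probabilities:

```python
# Signals 1-4 are quality levels; the critic's strategy pools
# signals 2-4 into Message #3 ("it's good") and reports Signal #1
# separately as Message #1 ("it's incredible").
priors = {1: 0.1, 2: 0.3, 3: 0.4, 4: 0.2}   # hypothetical prior over signals
strategy = {1: 1, 2: 3, 3: 3, 4: 3}          # signal -> message sent

def posterior(message):
    """P(signal | message): condition the prior on the set of signals
    that the strategy maps to this message."""
    pooled = {s: p for s, p in priors.items() if strategy[s] == message}
    total = sum(pooled.values())
    return {s: p / total for s, p in pooled.items()}

print(posterior(3))  # {2: ~0.33, 3: ~0.44, 4: ~0.22}: the priors, renormalized
```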
The point is that, in all contexts, incentive compatibility is not merely a more- or less-stringent side-constraint on the set of feasible signal-to-message functions. Incentive compatibility is what gives all messages their meaning.
To carry this observation to its conclusion: when agents have common knowledge of rationality, there is no such thing as lying. 1 may convey more information or less, but it is impossible for him to “deceive” 2. The meaning of each message m is simply the prior probability of each state, conditional on 1 sending m. And whatever message 1 sends, 2 accurately updates her beliefs in accordance with this meaning. Whatever message 1 sends, in other words, is true.
2. Example: Lying about Obamacare
For years, John McCain told his Republican constituents that he disapproved of Obamacare. Then, as fate would have it, last fall he found himself the marginal voter on a bill to repeal it. He voted not to. All the times that McCain said he disapproved of Obamacare, was he lying?
In a world of common knowledge of rationality, what happened, to simplify a bit, was this.
The common prior stipulates that, among people in McCain’s situation over the past few years, some (say 90%) of those who will send some message m disapprove of Obamacare, and some (say 10%) approve of it (but just want to blend in with the disapprovers to some extent). m may take any form; perhaps it is the sentence “I disapprove of Obamacare”. But just as “crayon” means “pencil” when I’m speaking with a French person, “I disapprove of Obamacare” literally means “there is a 90% chance I disapprove and a 10% chance I approve of Obamacare”, when someone in McCain’s situation over the past few years is speaking. Remember, the incentive-compatible distribution over signals, conditional on the message having been sent, is the message’s meaning.
Since this was the probabilistic, context-dependent meaning of the sentence “I disapprove of Obamacare”, it was literally honest for McCain to assert it.
_____
Consider the following analogy.
There are 20 balls behind a wall, 20 doors in the wall, and a crowd of people on the other side of the wall. 10 of the balls are black and 10 are white. Painted on 10 of the doors is the message “This ball is black”; painted on 10 of the doors is the message “This ball is white.” Clearly, if it is my job to arrange the balls behind the doors so that, for someone opening the door, the message on the door is accurate, I should put all 10 black balls behind the “This ball is black” doors, and all 10 white balls behind the “This ball is white” doors.
Likewise, what should I do if painted on 10 of the doors is the message “There is a 90% chance this ball is black and a 10% chance this ball is white”, and painted on 10 of the doors is the message “There is a 10% chance this ball is black and a 90% chance this ball is white”? The answer, I think, is straightforward: I should put 9 black balls and 1 white behind the first set of doors, and 1 black ball and 9 white behind the second set of doors.
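For what it’s worth, the arithmetic is easy to check in a couple of lines of Python (purely illustrative):

```python
# Arrange the balls as described above.
behind_black_label = ["black"] * 9 + ["white"] * 1   # ten "90% black" doors
behind_white_label = ["black"] * 1 + ["white"] * 9   # ten "90% white" doors

# Opening a uniformly random door from each set vindicates its label.
print(behind_black_label.count("black") / len(behind_black_label))  # 0.9
print(behind_white_label.count("white") / len(behind_white_label))  # 0.9
```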
_____
Suppose McCain had literally spent the years saying, “There is a 10% chance I like Obamacare and a 90% chance I don’t”. And suppose he regularly communicated in probabilities like this, and suppose the probabilities were well calibrated (i.e. when he said there was a 10% chance he supported something, and it was later revealed whether he supported it, 10% of the time he really did). People would call him eccentric, but they wouldn’t call him a liar.
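Calibration just means that stated probabilities match long-run frequencies, which a quick simulation (with hypothetical numbers) makes vivid:

```python
import random

random.seed(0)

# A politician who attaches an honest 90% to each "I disapprove"
# statement: in the long run, about 90% of such statements are true.
n = 10_000
true_disapprovals = sum(random.random() < 0.9 for _ in range(n))
print(true_disapprovals / n)  # ~0.9: well calibrated, hence no lie
```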
In a world of CKR, this is, implicitly, how everyone communicates all the time. So, in a world of CKR, McCain didn’t lie. He just happened to be the one off-colored ball out of ten.
3. Some Preliminary Steps Toward a Definition of Lying
Of course, assuming McCain didn’t change his mind about Obamacare at the last minute or anything, McCain did lie. What the above illustrates isn’t that lying is impossible—just that the definition of lying depends crucially on what those involved know about each other’s rationality. (Crawford, one of the authors of the Strategic Information Transmission paper mentioned above, explores this relationship in more detail in Crawford (2003).)
With that in mind, here is one first-pass attempt at a useful, precise definition of the word:
Lying is sending a different message than one would have sent in the “nearest possible world” in which common knowledge of rationality obtains.
Unfortunately, this is not quite right—it would imply that “We do not have common knowledge of rationality” is always a lie, for example. A fuzzier but less flawed definition might be:
Lying is an attempt to take advantage, through communication, of an absence of common knowledge of rationality.
I could spend some time thinking about how to make this more precise without messing up, but for now I think that that is decent enough. At any rate, I think it is an improvement on “making a statement believed to be false, with the intention of getting another to accept it as true”.
4. Kantianism
The above observations about the relationship between incentive compatibility and meaning, or between lying and CKR, do not seem to have made their way into mainstream Kantianism.
According to Kant[ianism], moral behavior consists in not performing acts that violate the “Categorical Imperative”. Kant formulates this Imperative three ways, and asserts that the formulations are equivalent: that all three rule out the same class of acts. This is a bold claim, offered with little supporting evidence; in fact, none of the formulations is very precisely posed (as the long, tortuous history of disagreement among Kant scholars testifies), and it is not obvious that any one of them is even coherent, let alone compatible with the other two, let alone equivalent to the other two. It is possible, in other words, that one criterion is a false guide to morality while the others are valid. That said, this post will limit itself to quibbling with the two most commonly cited formulations. The first is the Criterion of Universalizability: “Act only on a maxim that you can at the same time will to be universal law”. The second is the Criterion of Humanity as an End in Itself: “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.”
Since it is not clear what either of these sentences means, introductory Kant texts typically follow them with an example. And the example is usually lying.
From J. David Velleman’s “Brief Introduction to Kantian Ethics”:
Rational creatures have access to a shared perspective, from which they not only see the same things but can also see the visibility of those things to all rational creatures. Consider, for example, our capacity for arithmetic reasoning. Anyone who adds 2 and 2 sees, not just that the sum is 4, but also that anyone who added 2 and 2 would see that it's 4, and that such a person would see this, too, and so on. The facts of elementary arithmetic are thus common knowledge among all possible reasoners, in the sense that every reasoner knows them, and knows that every reasoner knows them, and knows that every reasoner knows that every reasoner knows them, and so on.
... But what does arithmetic reasoning have to do with acting for reasons? Well, suppose that the validity of reasons for acting were also visible from a perspective shared by all reasoners—by all practical reasoners, that is.
...
[L]ying violates the fundamental requirement “Act for reasons”. [Why? Because—] To lie is intentionally to tell someone a falsehood. When we tell something to someone, we act with a particular kind of communicative intention: we say or write it to him with the intention of giving him grounds for believing it. Indeed, we intend to give him grounds for belief precisely by manifesting this very communicative intention in our speech or writing. We intend that the person acquire grounds for believing what we say by recognizing that we are acting with the intention of conveying those grounds.
Now, suppose that our wanting to give someone grounds for believing something constituted sufficient reason for telling it to him, whether or not we believed it ourselves. In that case, the validity of this reason would be common knowledge among all reasoners, including him. He would therefore be able to see that, in wanting to give him grounds for believing the thing, as was manifest in our communicative action, we already had sufficient reason for telling it to him, whether or not we believed it. And if he could see that we had sufficient reason for telling it even if we ourselves didn't believe it, then our telling it would give him no grounds for believing it, either. Why should he believe what we tell him if we need no more reason for telling him than the desire, already manifest in the telling, to give him grounds for believing it? So if our wanting to give him grounds for believing something were sufficient reason for telling it to him, then telling him wouldn't accomplish the result that we wanted, and wanting that result wouldn't be a reason for telling him, after all. Wanting to convey grounds for belief can't be a sufficient reason for telling, then, because if it were, it would not be a reason at all.
From Christine Korsgaard’s “The Right to Lie”:
This gives us another way to formulate the test for treating someone as a mere means: suppose it is the case that if the other person knows what you are trying to do and has the power to stop you, then what you are trying to do cannot be what is really happening. If this is the case, the action is one that by its very nature is impossible for the other to assent to. ... This is of course not intended as a legal point: the point is that any action which depends for its nature and efficacy on the other's ignorance or powerlessness fails this test. Lying clearly falls into this category of action: it only deceives when the other does not know that it is a lie.
Sometimes it is objected that someone could assent to being lied to in advance of the actual occasion of the lie, and that in such a case the deception might still succeed. One can therefore agree to be deceived. I think it depends what circumstances are envisioned. I can certainly agree to remain uninformed about something, but this is not the same as agreeing to be deceived. For example, I could say to my doctor: "Don't tell me if I am fatally ill, even if I ask." But if I then do ask the doctor whether I am fatally ill, I cannot be certain whether she will answer me truthfully.
Korsgaard and Velleman are two of the world’s preeminent scholars of Kantian ethics.
Conclusion
I have now spoken with several self-identified Kantians about this, including two University of Chicago philosophy professors: Ben Laurence and Anubav Vasudevan. I don't think I'm just missing something obvious, since neither professor had thought about information transmission from the standard game-theoretic perspective, nor about any implications it (or other applications of game theoretic reasoning) might have for Kantianism.
This is not to say that no one who knows game theory jargon could take Kant seriously. A few people (including Douglas Hofstadter) have identified Kant’s universalizability criterion with the concept of “superrationality”, for instance.
But as far as I can tell, most defenses of Kantianism are, as written, predicated on straightforward and easily preventable mistakes that no one who knows game theory jargon would take seriously. I may be mistaken, but I expect that Velleman and Korsgaard have not read and stewed on the game theory of information transmission. Once one has, it seems obvious that you “cannot be certain whether the doctor will answer you truthfully” about your illness regardless of whether or not you have previously asked her to lie to you. Likewise, it seems obvious in this framework that the probability you should assign to being ill, conditional on the doctor telling you that you are not, depends on your past requests to the doctor—and also on everything else that might in any way affect her incentives to say “You are ill” or “You are not ill”, conditional on each state of the world (including her beliefs about your rationality). Finally, it seems that Velleman, in the second sentence of the last excerpted paragraph, confuses the notion that people are all reasoners with the notion that they have common knowledge of each other’s status as reasoners—the one condition under which there are no opportunities to “lie” (to take advantage of CKR’s absence)—and so makes the false inference that the message you actually have reason to send must be the “honest” message you would have reason to send under CKR.
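To make the doctor example concrete, here is a rough sketch of the relevant Bayes update, with all numbers hypothetical: the patient’s posterior on being ill, after hearing “you are not ill”, depends entirely on the doctor’s incentive-driven strategy, including the effect of any past request to be shielded from bad news:

```python
def p_ill_given_reassurance(prior_ill, p_reassure_if_ill):
    """P(ill | doctor says "you are not ill"), assuming the doctor
    always reassures a healthy patient."""
    joint_ill = prior_ill * p_reassure_if_ill
    joint_healthy = (1 - prior_ill) * 1.0
    return joint_ill / (joint_ill + joint_healthy)

prior = 0.10  # hypothetical prior probability of fatal illness

print(p_ill_given_reassurance(prior, 0.0))  # 0.0   -- doctor known to be candid
print(p_ill_given_reassurance(prior, 1.0))  # 0.10  -- patient asked to be shielded,
                                            #          so the message is uninformative
print(p_ill_given_reassurance(prior, 0.5))  # ~0.05 -- doctor of uncertain type
```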
In short, with the help of some game theory, perhaps moral philosophers can get a bit closer to convergence about why exactly lying is the immoral act it (at least almost always) is.