When the Scofields and Karen Armstrongs of the world talk about how the new atheists just aren’t aware of the liberal, tolerant, sativa-smoking, feminist, genderqueer god concept, my response is “I don’t believe in that motherfucker, either.” She’s just as poorly evidenced as the old-fashioned patriarchal god. She’s also not the predominant god concept impacting the African American community.
And likewise, there is only one best charity: the one that helps the most people the greatest amount per dollar. This is vague, and it is up to you to decide whether a charity that raises forty children’s marks by one letter grade for $100 helps people more or less than one that prevents one fatal case of tuberculosis per $100 or one that saves twenty acres of rainforest per $100. But you cannot abdicate the decision, or you risk ending up like the 11,000 people who accidentally decided that a pretty picture was worth more than a thousand people’s lives.
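The comparison above can be made concrete as a cost-effectiveness calculation. In this sketch, the names, figures, and especially the utility weights are hypothetical placeholders; choosing those weights is exactly the decision the paragraph says you cannot abdicate:

```python
# Sketch of comparing charities by utility per dollar. The utility_per_unit
# weights are hypothetical -- assigning them is the hard moral judgment.
charities = {
    "tutoring": {"cost": 100, "units": 40, "utility_per_unit": 1.0},      # letter-grade improvements
    "tuberculosis": {"cost": 100, "units": 1, "utility_per_unit": 50.0},  # fatal cases prevented
    "rainforest": {"cost": 100, "units": 20, "utility_per_unit": 2.0},    # acres saved
}

def utility_per_dollar(c):
    # Total utility produced per dollar donated.
    return c["units"] * c["utility_per_unit"] / c["cost"]

best = max(charities, key=lambda name: utility_per_dollar(charities[name]))
print(best)
```

With these particular (made-up) weights, preventing tuberculosis deaths comes out ahead; change the weights and the ranking changes, which is the point: the arithmetic is trivial, but the weights are a value judgment you must make explicitly.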
After my last post about how lying for political gain is actually a massive risk that often goes completely unrecognized by those who naively propose it, some people pointed out a number of related issues. One common reply was that deception in the political arena is actually par for the course, not an abnormal state of affairs - people largely either don’t care what’s true, won’t make the effort to verify claims, or can’t get the word out even if they do uncover lies.
Even if that is the case, consciously deciding to lie in order to achieve a certain goal still means betting that your willful deception won’t be revealed. It means hoping that a critical mass of people simply won’t notice or won’t care that you’ve lied to them. In politics, that is far from guaranteed, especially in the long run.
Sometimes, it may work for a while:
But not necessarily forever:
Even taking an especially cynical stance, one must acknowledge that lies (such as the purported inferiority of racial minorities, women, sexual and gender minorities, the disabled, and so on) are sometimes noticed, including large and widespread lies. With this realization, action may be taken. Political apathy, common and infinitely renewable as it may seem, is not a constant, and not all lies can be passed off without consequence.
In choosing to lie, you’re taking the risk that your deception will come to light, and potentially backfire. Further, you do not have an accurate sense of the likelihood of this happening, or of what the consequences of its revelation may be. You’re aiming for a certain outcome, but with no idea of what might happen if you were to fail, or how probable your loss is. This can scarcely be considered an informed decision on your part. As a strategy for success, it’s essentially just recklessness, worse even than a blind bet: you do not even know how much you stand to lose. And being so confident that your lie will be accepted that any possibility of loss can be ignored is not warranted by historical trends.
Others posed the familiar problem of the inquiring murderers: If you are concealing Jewish refugees in your home, and Nazis come to your door to ask if you’re hiding any Jews, would it be wrong to lie? The simplicity of this scenario makes the answer both easy and useless for any wider purpose. This is the entirety of the question: Action A (not lying) will result in the unjust deaths of innocent people (negative outcome). Action B (lying) will result in their survival (positive outcome). You pick one, the scenario is concluded, and you’re left with the results of your decision.
In an unrealistically simple hypothetical situation like this, the answer is obvious assuming you value the lives of innocent people over their deaths. You pick Action B, lie to the Nazis, and your guests survive. The simulation ends and there is nothing more to it than that. There are no wider consequences. There is nothing unanticipated.
And what happens if your lie is discovered? If the Nazis find out, the results would be no worse than if you had told them the truth in the first place. (Even with this possibility added into the mix, lying is still the superior choice: the probability of your refugees’ deaths is now only the chance that the Nazis will catch you lying, rather than a certainty of 1 had you not lied.) And if, after the war and the fall of Nazi Germany, others find out that you lied and call you to account for this? Then you tell them you were placed in a hypothetical scenario where lying was 100% certain to save innocent lives, and not lying was 100% certain to result in their deaths, with no consequences beyond that. And they tell you “Oh, okay, we understand. We would have done the same thing.”
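The expected-value reasoning in that parenthetical can be written out as simple arithmetic. This is only a sketch of the hypothetical’s stipulations; `p_caught`, the chance the lie is detected, is precisely the quantity you cannot actually know in real life:

```python
# Probability of the refugees' deaths under each action, per the
# hypothetical's stipulations (a sketch, not a real-world model).
def p_deaths_if_truthful() -> float:
    # Telling the truth is stipulated to guarantee the deaths.
    return 1.0

def p_deaths_if_lying(p_caught: float) -> float:
    # Deaths occur only if the Nazis detect the lie.
    return 1.0 * p_caught

# For any detection probability below 1, lying strictly dominates.
for p_caught in (0.0, 0.25, 0.99):
    assert p_deaths_if_lying(p_caught) < p_deaths_if_truthful()
```

Even at a 99% chance of being caught, lying is no worse than truth-telling here, which is exactly why the hypothetical is too clean to generalize from.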
If every ethical question involving honesty or dishonesty were as uncomplicated, limited in scope and consequences, and absolute in the certainty that given outcomes will result from specified actions, then the answers would always be this easy. In reality, they rarely are this simple, and neither are the answers. Some people posited scenarios such as “There are people who want to kill gay people because they think it’s a (wrong) choice”. Such hypotheticals fall under the same category as the inquiring murderers problem: unrealistically simple and useless for any practical purpose.
People make the same mistake when they envision the question at hand as actually being this basic: “If we tell people Statement A, they will hold Belief A, and this will be good for us. If we tell people Statement B, they will hold Belief B, and this will be bad for us.” With this reasoning, they conclude that we should tell people Statement A, and that’s that. No consideration is given to whether a certain belief is indeed reliably induced by exposure to a particular statement, or what other beliefs might result instead, or what precisely those “good” and “bad” results for us truly consist of, or even the actual truth of any of these statements and the possible consequences of willful dishonesty.
This oversimplified model thus fails to reflect reality in any way whatsoever, and those who use it as a guideline for how to act are neglecting to take the complexities of reality into account. I could just as well say that telling people that gay people choose their orientation gives them hope that gay people can change, and makes them less inclined to kill gay people than they otherwise would be. But have I provided any evidence that their current beliefs are leading to the alleged result, or that their holding of the proposed belief would lead to the desired result, or that they would even hold the belief we want them to if we told them this? No, and my claim would not be persuasive. Similarly unjustified claims are similarly unpersuasive.
Obviously, the same criticism applies equally whether you’ve poorly thought out the potential results of lying or telling the truth. There can be risks and uncertainties in either case. This is not as easy as a straightforward calculation of utility - it is a complex calculation of utility. But when you choose to lie, one thing is assured: You have made it the truth that you aimed to deceive people. It’s now an aspect of reality that is there for people to discover. It creates a vulnerability that telling the truth does not. Don’t forget to factor that in.
Even if we could solve these problems, there may be another one we’d then have to worry about. Let’s say we were able to create a robot that targets only combatants and that leaves no collateral damage—an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may violate a rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality. The ICRC is the only institution named as a controlling authority in IHL, so we comply with its rules. A robot that kills nearly everything it aims at could have a mortality rate approaching 100%, well over the ICRC’s 25% threshold. And this may be possible given the superhuman accuracy of machines, again assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it threatens an implicit value of a fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This notion of a fair fight comes from just-war theory, which is the basis for IHL.
We had a discussion about this in my school’s GLBTQ center earlier this week, and we all agreed that something like this shouldn’t come from someone who holds celebrity status. Her stating this sets us back decades in the fight for equality.
But her personal view of her sexuality, and why she said what she did, has to be taken into account. Cynthia Nixon has chosen not to label herself. She is attracted to both men and women; that isn’t something she chose. However, from what I understand of what she said, at this point in her life she’s chosen to just be with women - not that she chose to be attracted to them. The attraction is there, but she is actively choosing to act on only part of her sexuality.
No one can define anyone’s sexuality but them, and some choose to act on one aspect of it rather than another. However, care must be taken when stating things publicly, because not all people (especially those who are anti-LGBTQ) understand that sexuality is a very personal matter.
Whenever someone says “Even if this is true, we shouldn’t acknowledge it publicly because it would put us at a political disadvantage”, or otherwise advocates lying and dishonesty (like omitting true things) in order to achieve a certain goal, they’re implicitly presenting themselves as being omniscient, even if they may not consciously realize it. They’re assuming that they will be able to anticipate any and every change of events that may come up in the future, and that they will be able to account for it adequately. They’re acting as though the parallel virtual reality of lies and half-truths they’ve constructed is so infinitely extensible, and they themselves are so skillful at shaping and adapting it for every contingency, that it will always remain just as believable as the real world of actual truths.
But you do not have that ability. No one alive does. The people who choose to deceive others in this way are banking on not being caught, whether through their own skill or the simple inattention of anyone else listening. Such lies may pass muster on the small scale, in matters involving only a handful of people who don’t care to verify the issue at hand, or couldn’t even if they wanted to. But when scaled up, cracks inevitably emerge. Using lies to control the beliefs of a few people on a narrow topic may seem easy, but it would be an enormous mistake to assume it’s just as easy to control the beliefs of an entire globally networked society on a hotly debated social and moral question. It is difficult to articulate how significant a mistake this is. It’s almost like assuming that, because your tennis shoes are sufficient to take you to the corner store and back, they will also enable you to walk through a lava flow as wide as the moon. In the latter circumstance, you simply will not survive. Neither will your lies - they will be destroyed by reality.
If you had instead chosen to present the truth to the best of your knowledge - without reservation, without leaving out aspects you feel might be inconvenient to your goals - then, should you be wrong about something, you always have the option of admitting you were simply incorrect or misinformed or mistaken. What options does lying leave you with? Should you be caught up in your deceptions when they visibly fail to correlate with reality, you’ll either have to generate more lies to give the impression that the contradiction has been resolved, or admit to something much worse than having merely made an error in good faith. You’ll have to admit that you actively worked to deceive people, to plant false beliefs in their minds, in the pursuit of your own ends.
Adhering to the truth endows you with the benefits of correction, openness, and simple honesty. Concealing the truth by constructing falsehoods condemns you to secrecy, suspicion, further deception without end, and perpetual wariness that your lies will be exposed by something you didn’t see coming. Lying means making yourself responsible for the stewardship of an alternative reality, and for ensuring that everyone believes it is reality. Do you really think you can do that on a civilization-wide scale? You can’t. You shouldn’t even think of trying.
Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren’t inconveniences. They were safety rails on cliffs I didn’t see.
I made fundamental mistakes, and my ethics didn’t halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn’t expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.
You can’t duplicate this protective effect by trying to be clever and calculate the course of “highest utility”. The expected utility just takes into account the things you know to expect.[emphasis added -ZJ] It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.
Ethics aren’t just there to make your life difficult; they can protect you from Black Swans. A startling assertion, I know, but not one entirely irrelevant to current affairs.
And Sam Harris in “Lying”:
Lies beget other lies. Unlike statements of fact, which require no further work on our part, lies must be continually protected from collisions with reality. When you tell the truth, you have nothing to keep track of. The world itself becomes your memory, and if questions arise, you can always point others back to it. You can even reconsider certain facts and honestly change your views. And you can openly discuss your confusion, conflicts, and doubts with all comers. In this way, a commitment to the truth is naturally purifying of error.
But the liar must remember what he said, and to whom, and must take care to maintain his falsehoods in the future. This can require an extraordinary amount of work—all of which comes at the expense of authentic communication and free attention. The liar must weigh each new disclosure, whatever the source, to see whether it might damage the facade that he has built. And all these stresses accrue, whether or not anyone discovers that he has been lying.
Tell enough lies, however, and the effort required to keep your audience in the dark quickly becomes unsustainable. While you might be spared a direct accusation of dishonesty, many people will conclude, for reasons that they might be unable to pinpoint, that they cannot trust you. You will begin to seem like someone who is always dancing around the facts—because you most certainly are. Many of us have known people like this. No one ever quite confronts them, but everyone begins to treat them like creatures of fiction. Such people are often quietly shunned, for reasons they probably never understand.
To a connoisseur of normative moral theories, nothing says “outmoded and ridiculous” quite like utilitarianism. This view is so widely reviled because it has something for everyone to hate. If you love honesty, you can hate utilitarianism for telling you to lie. If you think that life is sacred, you can hate utilitarianism for telling you to kill the dying, the sick, the unborn, and even the newborn, and on top of that you can hate it for telling you in the same breath that you may not be allowed to eat meat (Singer, 1979). If you think it reasonable to provide a nice life for yourself and your family, you can hate utilitarianism for telling you to give up nearly everything you’ve got to provide for total strangers (Singer, 1972; Unger, 1996), including your own life, should a peculiar monster with a taste for human flesh have a sufficiently strong desire to eat you (Nozick, 1974). If you hate doing awful things to people, you can hate utilitarianism for telling you to kidnap people and steal their organs (Thomson, 1986). If you see the attainment of a high quality of life for all of humanity as a reasonable goal, you can hate utilitarianism for suggesting that a world full of people whose lives are barely worth living may be an even better goal (Parfit, 1984). If you love equality, you can hate utilitarianism for making the downtrodden worse off in order to make the well off even better off (Rawls, 1971). If it’s important to you that your experiences be genuine, you can hate utilitarianism for telling you that no matter how good your life is, you would be better off with your brain hooked up to a machine that gives you unnaturally pleasant artificial experiences. No matter what you value most, your values will eventually conflict with the utilitarian’s principle of greatest good and, if he has his way, be crushed by it. Utilitarianism is a philosophy that only… well, only a utilitarian could love.
Most people have subjects they’re bad at no matter what, subjects they’re good at with no effort, and subjects they can be good at if they try a lot. The difference for me was not so much like failing a math class and getting an A+ in English. I had a good attitude and I was pretty good at school, so it was more like studying every afternoon and getting a B+ or A- in a math class I kind of liked, or just showing up in English and writing something I loved to earn a good grade. When it came time to choose a major in college, I could have gone into either field, or both, depending on what I wanted to do with my life - but English is my talent, the one I’m naturally made for. Same with homosexuality.