Morality in high definition: Emotion differentiation calibrates the influence of incidental disgust on moral judgments

Think about the last time that you judged something to be morally wrong. How did you know that affair was sordid, or that wartime atrocity was atrocious? In new research, Keith Payne, John Doris, and I found evidence that the way you make a moral judgment depends on how good you are at reading your own emotions. We think that this work sheds light on how people “just know” when something is wrong.

Some philosophers think you make moral judgments by reasoning carefully about moral principles; others think you rely on emotions. Social psychologists have gathered a lot of evidence in favor of the emotions view. Even feeling disgusted by something as ethically irrelevant as smelling fart spray can make you more likely to judge something as morally wrong. These kinds of “incidental” influences are striking because they suggest that our moral judgments can be pushed around by flimsy emotional factors that have no moral relevance. From a moral point of view, these findings seem crazy -- why should a disgusting smell make you more willing to condemn a criminal?

We examined whether people can tame the impact of irrelevant emotions by becoming more aware of their emotions. People who can discern their emotional states more clearly (“I feel mostly hopeful with a hint of nostalgia”) have a better idea of what they are feeling and why they are feeling it than those who make simple emotional distinctions (“I’m good”). We expected that these emotion experts could override irrelevant emotions more effectively. We had participants complete a task in which they had to judge whether certain actions were universally morally wrong. But right before each judgment, they were flashed with images, some of which were disgusting (e.g., a cockroach) and some of which were emotionally neutral (e.g., a light bulb). Overall, the disgusting images made people more likely to judge actions as morally wrong, just like the fart-spray study mentioned above.

Yet this disgust priming of moral judgments depended on emotion differentiation. The irrelevant pictures influenced moral judgments only for people who had difficulty distinguishing their emotions. That’s good news for skilled differentiators, but are poor differentiators doomed to forever make capricious moral judgments? In a second study we trained emotion differentiation by having some participants practice differentiating their emotional reactions (like disgust, anger, and fear) to emotional images. Whereas untrained participants made moral judgments that were biased by the disgusting pictures, the trained participants did not.

We hope that these findings will illuminate the relationships between emotions and morality. It’s not that emotions always bias moral decisions. Rather, emotions can be used wisely or foolishly, depending upon the skills we bring to bear. Aristotle said that wise judgment consisted of feeling the right emotions toward the right people in the right amount. We think that paying attention to the nuances of one’s emotions can help people make moral judgments more consistent with their values and principles.

15 replies
    Aaron · 1 year ago
    Personally, I'm of the belief that like begets like...

    positive stuff in = positive stuff out.

    It's no wonder to me, then, that negativity might provoke judgment & criticism.

    I wonder how your study might evolve if you were to start showering the participants with positive images/feedback?
    Matt · 1 year ago

    Neat stuff! I can't help but wonder how many of our contemporary mores are the result of these incidental influences, or how many court cases have been affected by a foul odor entering the courtroom at the wrong time.

  • Hi Aaron, there's some great work showing that incidental positivity can lead people to make more lenient moral judgments. The same broader concern might be raised: should being amused by watching a funny stand-up comedy clip really have any influence on my moral judgments? To highlight this concern and address Matt's point, there was a recent paper which suggested that judges made more lenient decisions right after they had gone on a lunch break! These are the kinds of incidental influences that we may want to guard against, especially for important moral decisions. Presumably, emotion differentiation would help us to guard against incidental positive emotions, just like it does for incidental negative emotions; but that's an empirical question that would need to be tested.
  • Sam's Mom: So, how do I use this in court?


    We already know that dressing nicely and getting a haircut helps. Should I douse myself with lilac or Italian seasoning? I can't really be a bad person if I smell like a nice, fresh pizza, can I? If I sit there eating a salad?

    ICAM · 1 year ago

    We should not over-generalize.

    Let's take the finding that an "unpleasant smell" makes us more likely to convict a criminal (based on the same evidence) than we would be if we smelled something pleasant; or perhaps I misunderstand, and the bad smell makes us more likely to want a harsher sentence for a guilty man. We still have many unanswered questions.

    Does the (presumed) variation in my momentary sense of right-and-wrong affect my longer view of what is just, and should it? Have we "evolved" our traditional procedures of quiet, multi-person deliberation, and the common principle of "sleeping on it," for consequential decisions in recognition that at any given moment any individual might be experiencing a perturbation of his "true" judgment, which will be renormalized both by time and by testing my reasonableness against the judgment of others?

    I think it is going too far to say that it is "crazy"--or even novel and unexpected--that the quality of our judgment varies, from a variety of "irrelevant" influences, at different times. Our traditions around making important decisions seem designed to take such vacillations into account, and to minimize their impact.

  • Rextacular: Re: "Does the (presumed) variation in my momentary sense of right-and-wrong affect my longer view of what is just, and should it?"

    ICAM, this is an interesting point, and I know that some research has already begun to investigate this. To the best of my understanding, the results so far indicate that people generally act first and justify their decisions "rationally" after the fact. When forced to spend time deliberating prior to making a decision, even a short time, that decision will veer more towards the rational, or what I believe you are referring to as the "true" judgment. The picture that takes shape is quite similar to what you describe in your final paragraph, with the understanding that minimizing these perturbations still runs the risk of the effects being investigated in this paper taking hold, though, one would hope, to a lesser extent. I think it would be very interesting to see some experimental results that examine that relationship. (Does the long tempering outweigh the immediate impacts of the environment of the decision maker?)

    Also, another question pertaining to moral judgments has been piquing my interest lately, so I wonder if anyone on here has heard about (or been a part of) any experimental work that might address the following: It's understood that as learning beings, our moral judgments may change over time. As armchair philosophers, we can deliberate morality and condemn and condone as we like, but this doesn't necessarily reflect our behavior. I would be curious as to what has more effect on our moral activity: critically examining moral decisions theoretically, or actually committing moral (or immoral) actions. Would our post-reactionary justifications skew our perceptions of the morality of the action? Additionally, does long-term exposure to environments similar to those described in this paper (disgusting images, smells, etc.) alter our perceptions of morality more permanently? For instance, would living in a war zone tend to shift one's perception of morality over the long run? Perhaps providing greater justification for, or tolerance of, violent actions? Could the contrary prove true also, where long-term exposure to a positive environment has the opposite effect?

    Any insight into this would be greatly appreciated!

    ICAM · 1 year ago

    Bias: I am very suspicious of the validity of "empirical ethics."  I remember the stuff out of Princeton a few years ago, with forced-choice scenarios: to me, the researchers didn't seem to take into account that our perception of the probability (reality) of the scenario would greatly affect the accuracy with which the chosen answer would reflect the moral conditionals under which the subject was making his choice--as in, if it isn't real, then who cares?  And while fMRI gives us pretty pictures, I'm not convinced yet that it is telling us anything important qualitatively about moral choice.


    Similarly (and regarding your question), I think we may have at least as much to learn from our history of changing (collective) morality as we will from any experiment focusing on the (often idiosyncratic) moment-to-moment moral judgments of individuals.  We have seen (or have terrific documentation of) changes in the moral position on slavery; on women's suffrage and other rights; on civil rights for racial minorities; and on gay marriage.  We have issues such as gun rights/gun control where the societal moral position is molten right now.  These are wonderful areas to look at.


    As you suggest, it would make sense if certain environmental factors (war, famine, etc.) affect at least the individual's (and perhaps the society's) "immediate" sense of justice and good.  I'm not so sure that it affects the "perfect world" moral sense--what ethical path we would choose if we could "afford" to do that.  The impact of economics--the (accurately?) perceived costs and benefits of certain decisions--is clearly active in ethics, though the balance among which costs will be borne predominantly by "society" at large and which by classes of persons--or specified individuals--seems a non-economic, normative decision that is at the very core of a society's identity.

  • ICAM, you seem skeptical of moral psychology so I'm going to address your points first. First, I agree that we shouldn't overgeneralize. There are lots of studies that look at incidental emotion biases of moral judgment, but less work has been done in real-world settings (though I mentioned the courtroom study). As with all science, more work needs to be done to expand the scope of where these effects are thought to hold. Just because many studies show that irrelevant emotions can bias moral judgments does not mean that such a bias will necessarily happen the next time you or someone else goes to make a moral decision. That said, one of the goals of science, in addition to generalizing across situations, is to pursue what *can* happen: in this case, we see that in many cases it is quite possible for incidental emotions to bias moral judgments without people realizing it. That doesn't mean it always happens. But the fact that it *can* happen in some situations, fairly regularly, means that we may want to have a healthy skepticism about the source of our moral intuitions. We will always have unanswered questions about the scope of scientific findings, but that doesn't make the initial findings any less real. And that's what makes science interesting: we have to go out and test our intuitive assumptions against reality.

    Your point about institutional "safeguards" against intuitive biases is very interesting and needs to be studied further. We cannot assume outright that such safeguards are effective without empirical testing; but it's a very intriguing speculation that such checks developed organically as a response to potential biases.

    About the merits of "experimental ethics": although it is fair to critique specific methods, such as forced-choice dilemmas, that is but one method (and not the one I used), and such critiques should not be taken to imply that the field itself lacks value. As scientists, we refine our methods in light of critiques in order to seek the truth--this self-correction is the hallmark of good science. I disagree that the existing literature fails to tell us anything of qualitative importance about moral choice, but perhaps you can elaborate your concerns.

    I do agree with your point that much of moral psychology focuses on "in the moment" moral judgments. This is primarily due to standard psychology paradigms, which focus on short-term judgments and decisions. Studying such moments can be very important, because influences on those decisions can stem as much from the situation around us as from our personal life history. Although there can be idiosyncrasies in such situations, the studies I've described suggest systematic trends in behavior due to biasing influences.

    You are right that we need to look at moral development over the long term to see how life-changing processes like moral conversion might take place on a psychological level. I'd recommend looking at work by Paul Rozin on how emotions are recruited once we "moralize" something; for instance, how moral vegetarians often begin to feel disgust toward meat after they have morally condemned it. But this is a fascinating topic to explore further, and thank you for your insightful suggestions!

    Rextacular, you raise a lot of great questions. You aptly describe Haidt's (2001) social intuitionist model, in which reasoning often comes after our emotional reactions, and usually comes up with one-sided justifications to support those emotional reactions (kind of like a lawyer for his/her client). As for whether moral judgments or moral behavior has more effect on our "moral activity", I think that both are important to understand within the larger framework of the psychology of human morality. Emotions tend to motivate behavior, so we may expect moral judgments and behavior to line up much of the time. Interesting scientific cases arise where moral judgment and moral behavior don't line up; for example, incarcerated clinical psychopaths often can tell you which actions are morally right and wrong, but then they behave in morally atrocious ways. Finally, your long-term exposure question is also fascinating. Although many in the field have speculated that long-term exposure to hostile environments--such as war zones--might shift moral intuitions in a substantial way, this awaits more systematic empirical testing.

    Finally, Sam's Mom, I think you may want to try baked goods. There's a classic study by Alice Isen which found that people who had smelled baked goods nearby were more likely to be pro-social toward someone in need of help, compared to those who had not. In general, knowing that incidental cues of good or bad feelings can bias other people's judgments suggests that you might use these cues strategically. Advertisers and lawyers certainly know this, and anecdotally seem to use such incidental influences to sway consumer behavior and jury decision-making. Personally, my morality depends on the smell of freshly ground coffee beans.

    ICAM · 1 year ago

    Re: "We cannot assume outright that such safeguards are effective without empirical testing; but it's a very intriguing speculation that such checks developed organically as a response to potential biases."

    Thanks; the speculation reflects back to the work of David Sloan Wilson: his central thesis that societies, especially when they exhibit consistent or "well-preserved" social features, have evolved those features over time and seen them preserved by multi-level selection among competing societies. To the extent that we believe it makes "evolutionary sense" to assume that such features convey a societal benefit, we perhaps then likewise have reason to assume, for now, that consistent social structures (like calm, extended, multi-person deliberations) do "work."

    I am "skeptical" of everything.  Just as I would not wish to infer from an in vitro test that a chemical will have similar in vivo (or "in homo") activity, so I would be cautious about inferring what these interrogative experiments on indivduals tell us about "morality"--which I feel is definitionally a social phenomenon.  I am sure morality is connected to the individual "senses" of fairness and outrage, but I think the connection may be as distantly related as the individual's reaction to pain is related to a modern pain clinic.

    It may well be that, as we study "morality" scientifically, we will need to develop a vocabulary that distinguishes more precisely between an individual's (actionless) moral sense and the more involved moral justification for choice and action; and also the process of tempering and modifying those by interaction with other moral beings, whether in the flesh or abstractly via societal norms and laws.

  • Lilah J.: I understand the overall concept: if something makes us feel "icky", it leads us to pass moral judgments more harshly. A couple of news channels are pretty good at this. I was watching one where the image on the screen was of abused animals and the words scrolling across the screen were about some neutral action President Obama had taken. I remember thinking it was a sort of subliminal messaging that Obama abuses puppies. I have no doubt that was the purpose of putting those two items together. Advertising also uses a lot of this. It is good to see these tactics analyzed.

  • ICAM, you raise two more interesting points. Evolutionary considerations surely inform modern social psychology theory. However, just because a societal feature worked well over evolutionary time doesn't mean it is necessarily well-adapted to modern contexts. There could also be dangers in extended multi-person deliberations; for instance, social psychologists have studied lapses in group decision-making such as groupthink, where pressures to conform to a majority consensus can lead to influence by less relevant information at the expense of more reasoned, logical analysis. Much work in social psychology suggests that deliberation may be less efficacious than our intuitions would lead us to believe--rather than serving as an objective evaluator of evidence and facts, it's more often meant to convince others, and ourselves, of our pre-existing emotional intuitions.

    I agree with you that morality needs to be studied more in its social context, in interaction with other human beings. Some of the older studies of incidental positivity and helping behavior did do this: for instance, smelling baked goods makes you more likely to intervene when a stranger needs help. But many moral judgment paradigms look at participants in isolation, making a moral judgment when not in the literal presence of others. It's possible that the imagined presence of others may be playing some role; merely imagining what others may think about your private moral judgment may have a normative influence on you. We need more systematic examinations of moral judgments in public vs. private contexts to see if there are psychologically meaningful differences between the two.

    But just because a moral judgment is private does not mean it is actionless; to the degree that emotions are critically involved in moral judgments, and emotions motivate behavior, we may expect moral judgments to motivate corresponding moral actions in most populations (excluding clinical populations, such as psychopaths, in which judgment and behavior dissociate).

    Lilah J., you raise a good example. Plenty of political campaigns use incidental disgust cues to imply moral character flaws of opponents. And the danger is that even if we become aware of these manipulation attempts, we often do not know how to correct, or how much to correct, for these influences.

    ICAM · 1 year ago

    Thanks for your response.

    You are of course right that group and solo moral deliberations have their distinct advantages and disadvantages. I merely meant to point out that the details of the individual's moment-to-moment process may correlate only weakly with the more time-stable, social phenomenon that is often meant by "morality."

    I think we all note the difference between deciding a moral question within a given set of values and the very different event of choosing a (personal) set of values by which to make those decisions.  It seems to me that the latter certainly involves emotion (and perhaps some additional, consciously inaccessible processes that may not be best called "emotion"), but it is unclear to me that emotion is the primary mover of that process, rather than an important input.

    I think an emotion-as-driver model carries a corollary assumption that morality is based on how humans "feel" (emotionally); and such a statement is at odds with the known evanescence and malleability (over minutes or days) of emotion, with the relative stability (over decades individually, and longer societally) of moral standards, and with the sense that some moral standards (against, for example, the murder of innocents) are so constant and imperative that they bring up the idea of "objective" morality.

    Some of these questions are even difficult to frame, as we start looking into our intuitions about our intuitions!  My pet question is, in what manner will fMRI rewrite Kant?

  • Jon: I do not think that we should accept these priming studies at face value. This particular study reported two results of fairly complex analyses (especially the first), which were barely statistically significant. The published report leaves out many relevant details. In general, experiments on incidental effects of emotion have been difficult to replicate. Before we draw such broad conclusions as many commentators seem to draw, let us make sure that these effects are real.
  • ICAM, very interesting points, especially about the tension between emotional influences on morality and its seeming objectivity. One response is that although moral standards seem relatively stable over time, their application in concrete instances might vary as a function of transient emotional states. Another, perhaps more unsettling possibility is that the visceral, compelling nature of emotional experiences may be the very psychological force that makes a given moral assessment seem objectively true. There is some great philosophical work on this question by J.L. Mackie, Richard Joyce, and Jesse Prinz. Looking into intuitions about our intuitions--our metacognition--is a fascinating line of inquiry that is just now getting started in moral psychology, but could have really important implications for moral philosophy (e.g., for debates over moral realism, moral internalism, and generally what the features of human morality are posited to be). I share your curiosity about what neuroscience implies for moral philosophy--to the degree that moral theories have empirical content, they are subject to scrutiny from scientists who study these questions--and think the field should give greater consideration to the kinds of moral questions you raise (e.g., choosing moral values to live by, generally the domain of virtue ethics).

    Jon, thanks for your post. Given the general audience of this forum, it is probably not the best venue for hashing out the statistics of two-way interactions (which are reported in the paper), but what's probably more important is that the paper showed the same pattern of results across two studies (three if you count the pilot study) using the same priming procedure to present irrelevant emotional cues. And for what it's worth, within-subjects priming tasks like this actually show very consistent effects of irrelevant emotions on judgments (see work using the affect misattribution procedure, on which my priming task was based--Payne, Cheng, Govorun, & Stewart, 2005, in Journal of Personality and Social Psychology; for a meta-analysis of the predictive and convergent validity of sequential priming tasks like this, see Cameron, Brown-Iannuzzi, & Payne, 2012, in Personality and Social Psychology Review). Replication is obviously very important, and it would be nice to see a meta-analysis of emotion priming of moral judgments to get a clearer sense of whether there's an extensive backlog of unpublished studies that fail to reliably produce these effects.

    ICAM · 1 year ago
    Re: "Another, perhaps more unsettling possibility is that the visceral, compelling nature of emotional experiences may be the very psychological force that makes a given moral assessment seem objectively true."

    A great point. I think there is a parallel point to be made about the "objective truths" of science. While the methods of science are well known, so too are its logical problems: when to accept a repeatedly observed phenomenon as an "objective fact", and when to accept a theory that has survived attempts at disproof as "objectively true", are interesting questions for both the individual and the group.

    The logical posture of the "pure" scientist is eternal doubt, yet it is not at all uncommon for scientists to become "convinced" that their theory is right...and I suspect that such conviction may well be an emotional state, or at least heavily informed by emotion. So too perhaps for moral judgments--and maybe doubly so when the scientist believes his or her work has ethical meaning.

    Thanks for the references.
