Negative Utilitarianism FAQ


ABSTRACT

"Negative" views in population ethics are views that don't place any (strong) importance on adding new happy individuals to the world. This FAQ covers both negative preference utilitarianism (NPU) and negative hedonistic utilitarianism (NHU).

Part 1 explains the terminology and what the views are about, and thus includes some meta-points.

Part 2 provides arguments for and addresses objections to negative utilitarianism (NU). “Making people happy/well/content or goal-satisfied” – instead of “making happy/blissed-out or goal-satisfied people”, at the cost of miserable or goal-unsatisfied ones – seems to make a lot of intuitive and theoretical sense. This part also includes some meta-points.

Part 3 discusses the practical implications of NU. In particular, it explains why NUs too have an interest in global stability and artificial-intelligence (AI) risk reduction, i.e. why it would be very bad for negative utilitarians to attempt to reduce the total amount of suffering by means of increasing the risk of human extinction.

Comments on this draft are highly appreciated!



TABLE OF CONTENTS

1. TERMINOLOGY

1.1 What is NPU?

1.2 What is NHU?

1.3 Why is it called “negative” utilitarianism?

1.4 Do negative utilitarians believe that “nothing is good”?

1.5 Does negative utilitarianism solve ethics?

2. THE CASE FOR NU

2.1 Why NU? Can you please indicate why anyone would believe that?

2.2 I’m starting to understand the reasons behind NIPU. Let’s go to the other one, NHU. What does it state and why would I find it appealing?

2.3 What is negative-leaning utilitarianism?

2.4 What is lexical NU?

3. PRACTICAL IMPLICATIONS

3.1 Are the priorities of NIPUs different from the ones of NHUs?

3.2 Should NUs try to increase extinction risk?

3.3 Which interventions should NUs be pursuing now?

3.4 If NU becomes more popular, should we be worried that people naively attempt to destroy the world, even though this would be antithetical to the goal of reducing expected suffering?

3.5 How should other value systems react to NU becoming a big topic?



1. TERMINOLOGY

1.1 What is NPU?

Negative preference utilitarianism is the view that what makes the world a better place/what’s morally important is the minimization of thwarted preferences. A preference is given by something you want to achieve, or something you want to be the case. For instance, most people have a preference that their close relatives are happy rather than miserable. One plausible form of NPU is negative ideal preference utilitarianism (NIPU): An ideal preference is given by something you would want to achieve upon reflection, if you had all the knowledge relevant to your decision situation. For instance, someone might have a stated preference against the existence of homosexual behavior, but if this person had accurate beliefs about the world (including about their own intuitions and what they imply), this preference would likely disappear and therefore wouldn’t be part of that person’s set of ideal preferences. (There is no unique way of specifying which knowledge you want to allow to influence your current stated preferences, though.) Your set of preferences makes up your goal in life, so NIPU is about trying to help everyone achieve their “true” goal in life.

1.2 What is NHU?

Negative hedonistic utilitarianism focuses on feelings or experiences, not preferences, which may be unrelated to the former. It claims that what’s important is the minimization of suffering, i.e. of experiences that are unpleasant or bothersome in any way. It’s thus about making sure that if consciousness exists, it’s well and not bothered by anything it contains. (The words “unpleasant” and “bothersome” tend to trivialize the ghastliness of serious suffering, though.) Negative utilitarians arrive at their position in two distinct ways: Lexical negative utilitarians view experiences in terms of the often-used pleasure-suffering axis, but unlike classical utilitarians, they believe that the suffering-part of this axis counts for infinitely more. Negative utilitarians who subscribe to Buddhist axiology (see below), on the other hand, only think in terms of one axis, which can be described as “whether the immediate, internal evaluation of an experience is in any way bothersome or not”. This approach places intense pleasure on the same evaluative footing as meditative tranquillity. This FAQ will predominantly focus on the Buddhist version of negative utilitarianism, which is what NHU refers to from here on.

1.3 Why is it called “negative” utilitarianism?

The term “negative utilitarianism” was originally used to describe a principle similar to NHU that was introduced by Karl Popper in 1945. Popper was not a negative utilitarian according to the above definition, though, because he only intended the principle as a heuristic for policy-making. “NU” here either refers to NHU or is used as an umbrella term for both NHU and NPU. The minimizing feature is what distinguishes NU from other forms of utilitarianism, where, in addition to reducing suffering/thwarted preferences, it is also considered ethically important to create more of “what is good”, i.e. (intense) pleasure or preference-satisfaction.
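To make the contrast precise, here is one minimal formalization (the notation is illustrative, not the authors’ own): write s_i(w) for the suffering (or preference frustration) and h_i(w) for the happiness (or preference satisfaction) of individual i in world-state w. Classical utilitarianism (CU) and NU then rank world-states by

V_CU(w) = Σ_i (h_i(w) − s_i(w))        versus        V_NU(w) = − Σ_i s_i(w)

Both values are to be maximized; the NU value function simply assigns no positive weight to h_i, which is why every suffering-free world-state ties for optimal (cf. section 1.4).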

1.4 Do negative utilitarians believe that “nothing is good”?

What do we mean by “good” in the first place? Happiness is “good” or “optimal” for NUs in virtue of being a state free of any suffering. The important difference is that happiness/pleasure is not the only world-state that NUs consider ethically unproblematic. The label “negative” creates an unfortunate framing: While it is correct that NU is only focused on minimizing suffering (as opposed to classical utilitarianism, which is about maximizing happiness and minimizing suffering, and thus considers inanimate world-states ethically problematic in some sense), negative utilitarians believe that “good” world-states, i.e. world-states that they consider ideal, are all the world-states that do not include suffering, or do not include thwarted preferences. The less suffering/preference frustration world-states contain, the better they are. Happiness is “good” or optimal for NUs in this (weak) sense, and so are meditation and subjectively-fine-muzak-and-potatoes and dreamless sleep and non-existence. (More on muzak and potatoes in section 2.2.2.) While classical hedonistic utilitarianism is concerned only with pleasure (intensity and quantity), NHU is about happiness/contentment in a broader sense.

1.5 Does negative utilitarianism solve ethics?

If it’s supposed to be universally compelling and action-guiding, then ethics is not something that can be “solved”. (Moral realists would disagree with this, and if they are right, then negative utilitarianism would indeed be a candidate view for solving ethics.) At the beginning of any discussion about ethics, we first need to specify how we are going to use terms like “good” or “moral”. If we specify them to mean e.g. “what I (upon a certain kind of reflection) want to achieve”, or “what makes the world a better place (for others/everyone in it)”, then negative utilitarianism can be a plausible answer to these questions. (Although, again, the definition of “better” won’t be universally compelling, so the two questions are not entirely separate.)

2. THE CASE FOR NU

2.1 Why NU? Can you please indicate why anyone would believe that?

A general point first: Many common objections to “NU” are actually only objections to its hedonistic version or to hedonism in general. NPU, or NIPU more specifically, is arguably the population-ethical view that is most intuitive (or rather: least counter-intuitive) from a common-sense perspective. In order to illustrate the aspects of a negative view on population ethics, it makes epistemic sense to start out with NIPU, where people are least likely to have confounding objections. The following reasons or intuitions may favor NIPU:

1) Classical preference utilitarianism would advocate the creation of new preferences in order to fulfill them. Here we might have an intuition like: Ethics is about solving problems, not about creating solved problems (= satisfied preferences) where there would otherwise have been none. Similarly, we might think that ethics is about making people happy, not making happy people. NHU/NPU are the most coherent theories incorporating something like: “Let’s ensure that everyone who is ever going to exist is happy”. They are the only theories on which, if you exist and are within moral agents’ sphere of influence, you don’t need to fear that horrible things will happen to you, unless this is necessary to prevent even worse suffering/problems. On all other theories, suffering may be inflicted on you, or you may be left alone with your suffering, because there are other things to do than abolishing greater suffering.

2) The main utilitarian alternatives to NU imply that, in a world where the average life is miserable, we should add as many just slightly less miserable lives as possible (average utilitarianism), or that we should, prima facie, create arbitrarily high numbers of arbitrarily miserable lives in order to bring a sufficient number of lives into existence that are just barely “worth living” (classical utilitarianism; this is the so-called Very Repugnant Conclusion). The latter example contains confounding variables (aggregation of harm, non-harm deontology, etc.) that are also present in some of the absurd conclusions for NU; nevertheless, these examples show that “Why would anyone believe this?” (as an initial reaction to NU) is something one could just as well say about its alternatives. NU, and arguably NIPU especially, may well turn out to be the view with the least repugnant implications.

2.1.1 Does NIPU imply that there’s nothing bad about destroying the world?

NIPU is actually in line with the intuition that there is something very bad about destroying the world. What is bad about it according to NIPU is that people have strong intrinsic or instrumental preferences against extinction, all of which would be thwarted by it. Extinction would only be unproblematic if no existing beings had preferences against it – in which case it’s unclear that anything counterintuitive would remain! Otherwise, extinction counts as a great evil and would only be justified on the grounds of preventing an even greater evil.

2.1.2 If people are killed, they have no preferences anymore, so there would be no thwarted preferences either. Wouldn’t NIPU imply that killing people is good?

There is an important sense in which a preference to go on living doesn’t just disappear when we die, but is actually violated. How can the preference-framework account for this? There are several methods one could use:

2.1.3 To those who use Method 1 above: If we find the graves of an ancient civilization whose inscriptions suggest that these people wanted their remains to be sent to the moon, we should put efforts into granting their wishes?

“Silly” preferences like these wouldn’t survive an idealization procedure, as they would likely be based on false beliefs.

2.1.4 The space of all possible preferences is large. What if, for whatever reason, there were people who keep such preferences even after an idealization-procedure?

Well, then NIPU (in one version at least, based on Method 1 above) would say that it’s important to send their remains to the moon. But this point is not unique to negative preference utilitarianism; it applies to all forms of preference utilitarianism. What makes such preferences “silly” may just be the fact that we don’t happen to have them (upon reflection). We could not pass the “silly” judgment if we ourselves happened to be programmed to (upon reflection) really care about our remains being sent to the moon. In other words: If your brain were wired the way the brains of this hypothetical dead civilization were, you would find it quite important, too! It is a category error to assume that your own values are fundamentally less arbitrary.

If you still find this conclusion unpalatable, consider hedonistic/experiential utilitarianism instead. But bear in mind that your acceptance of the hedonistic view depends on you having an ideal preference favoring it – you could not accept it and act in accordance with it if you were wired differently. Also, it seems that you would (rationally) want others to treat you ideal-preferencistically, not hedonistically: If you’re ignorant about your ideal preference, or if the slightest doubt remains about hedonism (i.e. about your ideal preference favoring hedonism), then you win in any case with preferencist treatment. With hedonistic treatment, on the other hand, you might lose big time.

2.1.5 Back to destroying the world: Doesn’t NIPU still imply that extinction would be best, because if there are going to be a lot of people in the future, their combined unsatisfied preferences will outweigh the preferences thwarted by extinction?

With this premise, NIPU would indeed imply this (in theory! For an analysis of the practical implications of both NHU and NPU, see part 3 of this FAQ, which lists reasons why NUs too have a strong interest in global stability, and thus why it would be very bad according to their values to increase existential risk). However, counting this as a significant argument against NIPU would be getting things backwards: The conclusion depends heavily on empirical circumstances (which our intuition is e.g. scope-insensitive about). If we lived in a world where comparatively more people exist presently than will exist in the future, or if the life-quality of future people improves sufficiently until there are no or virtually no thwarted preferences, then extinction would be deemed worse (assuming that currently existing people care about there being future people). For every view on population ethics that places disvalue on something, we can imagine empirical situations where extinction would be better than the alternatives, simply by imagining that the future in expectation contains a sufficient amount of the bad stuff.
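To spell out this dependence on empirical circumstances, the comparison can be written as a single inequality (again with illustrative notation): let T_ext be the sum of (ideal) preferences that extinction would thwart, and E[T_fut] the expected sum of preferences that would be thwarted over the entire future otherwise. NIPU then favors extinction only if

E[T_fut] > T_ext

Both sides are empirical quantities: improving future life-quality pushes E[T_fut] down, while widespread preferences for there being future people push T_ext up – so the inequality can easily fail.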

2.1.6 But isn’t it counterintuitive in theory that creating beings with 99% satisfied preferences is negative?

“Negative” here simply means that non-creation would have been (slightly) better. Yes, this is plausibly quite counterintuitive. It is worth noting, however, that there are formal proofs (“impossibility theorems”) showing that no coherent population ethical theory exists that does not violate at least one very strong commonly held intuition.

2.1.7 So are we doomed? What is your general methodology for thinking about ethics?

The impossibility results are one reason why counterintuitiveness alone cannot be a decisive argument against any moral theory. We should investigate the reasons why something seems counterintuitive to us, and once we understand them, decide whether we consider these reasons to be “biases”, or whether they expose an unacceptable problem in our assumptions. Counterintuitiveness is a probabilistic indicator that we have incorporated axioms that we, upon reflection, would consider unacceptable, but as long as we don’t yet know what exactly we find counterintuitive, we should be careful about abandoning views too early, especially in light of the impossibility theorems.

So the questions are: which view is the least counterintuitive, and/or where is the counterintuitiveness based on confounders rather than problems inherent in our assumptions? Compared to alternatives like the Very Repugnant Conclusion (see section 2.1, item #2), the conclusion here – 99% preference-satisfaction being slightly worse than non-creation – seems to be similarly absurd. In any case, it is worth thinking more about the reasons behind each view, and why we might find them counterintuitive. “Repugnant” conclusions are implied by all views (cf. impossibility theorems), so a particular counterintuitive conclusion likely isn’t a good reason to immediately dismiss the corresponding view.

2.1.8 OK, tell me how it’s not unacceptable that creating beings with 99% satisfied preferences (i.e. nearly perfect lives, much better than the happiest humans currently alive!) would be negative?

Let’s consider potential confounders. Would it make a difference whether the world is 99% ideal for every being living in it, or whether we are opting to play a lottery that 99% of the time produces a world full of perfect lives, and 1% of the time produces a world full of misery? Utilitarians are risk-neutral and place no intrinsic value on how experiences are distributed among individuals, so there should be no difference in theory. However, the second framing will likely elicit more objections. Granted, these objections might stem from some of the same intuitions that lead non-utilitarians to reject utilitarianism (e.g. risk-aversion or prioritarian intuitions). It nevertheless seems unlikely that this is all there is to it, because even people who accept the utilitarian torture-over-dustspecks conclusion might still have the impression that the two cases are relevantly different. We seem to have the intuition that short periods of suffering are not that bad, all else being equal (i.e. no compensating effects due to memories/future prospects being different), if they are part of an otherwise very good life that as a whole is “worth living”. This intuition can be questioned, because how the life goes as a whole changes nothing for the consciousness-moments that are in misery. Given a reductionist view on personal identity – a “person” is just a grouping of consciousness-moments according to some degree of spatio-temporal proximity and/or causal dependence and/or qualitative similarity and/or memory referencing – it would be discriminatory to treat consciousness-moments differently merely because they belong to a particular person-cluster.
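For concreteness, here is the expected-value calculation behind the claimed equivalence (a sketch with illustrative numbers): compare a world in which each of n beings has 1% preference frustration with a lottery that yields n fully satisfied beings with 99% probability and n fully frustrated beings with 1% probability. Measuring frustration per being on a 0-to-1 scale, the totals are

n × 0.01 = 0.01n        versus        0.99 × 0 + 0.01 × (n × 1) = 0.01n

The expected amount of frustration is identical (and expected satisfaction likewise comes out at 0.99n in both cases), so a risk-neutral, additive view must treat the two cases alike; any intuitive difference between them therefore has to come from somewhere else, e.g. risk-aversion or person-affecting intuitions.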

If instead of considering a case with 99% preference-satisfaction per newly created being, we consider a case where we create a paradise with 99% likelihood and a horrible world with 1% likelihood, then suddenly the intuition that this would be a good thing to do might become less clear. Risk-aversion is of course a strong potential confounder in this second case, but we don’t need to contrast a likely paradise with an unlikely hell; it suffices to contrast the likely paradise with an unlikely outcome that contains slightly more suffering than happiness per person. If the option is to either create nothing or go for this lottery, the intuition that “it’s not worth the risk” would likely not be (solely) due to risk-aversion, but rather due to the view/intuition that creating happy people is OK but not morally imperative (the existence of inanimate pieces of matter not posing a moral problem), and that it is morally imperative not to create miserable people. If it’s intuitive to think that non-existence poses no problem – because not being born doesn’t frustrate anyone’s goals or create any inconvenience to anyone – then why is it counterintuitive that a 99% happy and 1% unhappy life poses a slight problem?

2.1.9 What about prior-existence preference utilitarianism, the view that creating new people/preference-bundles is neutral as long as their lives are worth living, and negative otherwise? Wouldn’t this be even more in accordance with common sense?

This is roughly the view Peter Singer introduced in Practical Ethics (Christoph Fehige and possibly others have published on similar positions), and at first glance, it seems intuitive indeed. There are two major problems with it, however:

1) As soon as we try to specify what is meant by “worth living”, we run into difficulties. Is there an objective way to assess whether a life is worth living, other than that the creature itself (upon reflection) prefers to go on existing? If not, if we ultimately need to rely on this self-assessment, would we be willing to let an evil scientist engineer beings living in misery with a very strong preference for continued existence (which surely are possibilities in mind-space)?

2) Even if the above problem can be fixed: The prior-existence view implies intransitivity. Suppose someone has the option to create one of two possible beings, A or B. A will live a perfect life; B will live a life that is worth living but overall still pretty bad. Because both lives are worth living, creating either counts as “neutral” – A is as good as non-creation, and so is B, even though A’s life is clearly better than B’s, which breaks transitivity. So if the already existing person for whatever reason has the slightest preference for creating person B, then the ethical choice would be to create B. Because all lives worth living are treated the same, as “neutral”, the difference between a good life and a merely decent life vanishes, at least in all cases of deciding about future people. This conclusion is likely unacceptable for most people.

On top of this, the prior-existence view is quite ad hoc and obviously derived as a patchwork solution to problems that other population-ethical views face. If possible, it would be much more impressive, for whatever that’s worth (it depends on whether we care about this sort of thing when choosing our values), if a view without unacceptable conclusions could be found whose axioms make sense from the very outset.

2.2 I’m starting to understand the reasons behind NIPU. Let’s go to the other one, NHU. What does it state and why would I find it appealing?

The traditional view when it comes to evaluating experiences is hedonism, with a symmetrical pleasure-pain axis: pleasure is what is good/important to create; suffering is what is bad/important to avoid. However, NUs attribute no ethical importance to the pleasure-part of the axis. This comes down to a fundamentally different approach to what matters (“axiology”). Negative utilitarians, instead of thinking in terms of the traditional hedonistic axiology, see the world in terms of Buddhist axiology, where what matters is not the degree or intensity of pleasure, but rather the absence of anything bothersome – i.e., the immediate, internal evaluation of a consciousness-moment as being “subjectively perfect”.
[Disclaimer: Any references to “Buddhist axiology” and explanations thereof are taken from a forthcoming paper on the topic (Gloor & Mannino), with the authors’ permission.]

2.2.1 Why do negative utilitarians favor Buddhist axiology, and what does it say exactly?

Buddhist axiology deals well with some of the potential problems that standard welfarist axiology faces. Inherent in welfarist axiology is the premise that a pleasurable but not maximally pleasurable experience is somehow problematic, i.e. that it is ethically important to replace such a state by a more intense one (potentially at the cost of suffering). This implies that all experiences that aren’t as pleasurable as possible become “tarnished” in their evaluation, even if they feel perfectly fine in themselves. This component of comparing everything to maximal pleasure seems to be an underlying reason why many people find the repugnant conclusion(s) of classical utilitarianism unacceptable: Exponentially increasing one being’s suffering in order to super-exponentially increase another being’s pleasure seems morally frivolous, because pleasure of any degree is perfectly fine, and there’s no need whatsoever for anything to change (unless the pleasure is regularly interrupted by bothersome cravings for more diverse or qualitatively higher pleasures – in which case higher pleasure-intensity is indeed important).

By contrast, Buddhist axiology is all about momentary internal evaluation, the “inside view”, where a conscious state is only non-optimal or problematic if this is directly experienced, not if the state doesn’t match up in some comparison we make from the outside. According to Buddhist axiology, all states that are subjectively free of anything bothersome, where there is no desire whatsoever for the moment to go by or change, are considered perfect.

Again, let’s bear in mind that words like “bothersome” tend to trivialize the seriousness of (intense) suffering – in Brian Tomasik’s words:

[...] when I see or imagine extreme suffering – such as being eaten alive or fried to death in a brazen bull – it seems overwhelmingly apparent that preventing such experiences is the most important thing in the world, and nothing else can compare. This intuition seems clear enough to most of us when we imagine the suffering happening nearby. If someone was being tortured in a way that could be prevented in the room next door, few of us would hesitate to stop whatever we were doing and go help. But when distance and uncertainty stand in the way, this intuition fades, and people become preoccupied with goals like ensuring interesting, complex, and awesome futures.
And more on the overwhelming horror of suffering:
Take this one: "Turkish girl, 16, buried alive 'for talking to boys'.". [...] Imagine yourself as this girl, trying to claw your way out from the dirt. As you breathe, dirt fills your nose and mouth. You cough and choke. It becomes hard to get enough air. You claw more, but the dirt is too much to budge. Another deep breath; it's not enough. After some time, you feel the sting of carbon dioxide in your blood. Your heart races, and your mind screams. You try to breathe once more. Choke, cough. The sting of carbon dioxide is like a knife throughout your body. It cuts stronger, stronger; it seems it can't get any worse, yet it does. And ... the remainder is too painful to imagine. This experience is unremittingly awful; it is not compensated by other person-moments enjoying themselves (see Appendix).

2.2.2 What’s the NU response to Toby Ord’s Indifference Argument?

Ord's argument runs as follows:

Suppose there were a world that consisted of a thriving utopia, filled with love, excitement, and joy of the highest degree, with no trace of suffering. One day this world is at threat of losing almost all of its happiness. If this threat comes to pass, almost all the happiness of this world will be extinguished. To borrow from Parfit's memorable example, they will be reduced to a state where their only mild pleasures will be listening to muzak and eating potatoes. You alone have the power to decide whether this threat comes to pass. As an Absolute Negative Utilitarian, you are indifferent between these outcomes, so you decide arbitrarily to have it lose almost all of its overflowing happiness and be reduced to the world of muzak and potatoes.

Ord concludes that the example speaks for itself; he considers the outcome permitted by NU “catastrophically worse for everyone”. But why? For whom and when would there be a catastrophe if “muzak and potatoes” were experienced as completely fine and if there were nothing whatsoever that bothered anyone about it? Also, boredom, a potential confounder, would of course be excluded in such a scenario. Imagine a world of people listening to muzak and eating potatoes where everyone is always enjoying themselves. Everything is perfectly subjectively fine for everyone at all times – no one thinks or feels that anything is missing or that their life could be better in any way (in any other case, NUs wouldn’t be indifferent between the two outcomes). Now what is the problem? “Muzak and potatoes” may seem like a catastrophe to us because we have an aversion to living in such a world. And we are probably projecting this feeling, which is based on our experiences in the current world, onto the imagined scenario – which is something one should try to avoid in a thought experiment. “Muzak and potatoes” would be disagreeable, but only if it were (about to) happen to us who are thinking about it. People living in a “muzak and potatoes” world can be perfectly fine at all times, and they are the ones the thought experiment should be about.

2.2.3 But seriously, imagine you can either eat potatoes or pizza, isn’t it obvious that the latter experience is better (feel free to substitute a different example if you don’t love pizza)?

What do you mean by “better”? That most people would prefer it or develop stronger cravings for it? Sure. Comparing the experience of eating pizza to eating potatoes, I’m likely to prefer the former. My craving for the pizza is greater than my desire for eating potatoes, and it is probably accurate that the pizza-experience is of greater pleasure-intensity. However, if we are going to conclude from this that what is more pleasurable is automatically better (and that the worse states are in need of improvement!), Buddhist intuitions will object: We have been looking at it from the wrong perspective! We have been comparing, from the outside, two different states and our current cravings for being in one or the other. Why not instead look at what the states are like from the inside? Assuming that, when eating the potatoes, I forget everything else around me and am fully enjoying the experience, with no desire whatsoever for my experiential content to change, then why should anyone conclude that something about the state needs improvement? For me, in the very moment, it does not! And that is all that matters to me in that moment. If someone does not forget everything else around her and wishes for things to be better (and correspondingly isn’t perfectly fine right now), then NU will agree that the state needs improvement.

Buddhist axiology disagrees with John Stuart Mill, who believed that the happiness of a pig is somehow worse than the happiness of Socrates. It is understandable that to Mill – one of the smartest humans to ever have walked the Earth – the thought of being incapable of philosophical reasoning felt like a catastrophe. However, the pig itself doesn't notice that anything is lacking; nothing about being a pig bothers it in any way. Standard hedonistic welfare axiology disagrees with Mill’s “unhappy human > happy pig” opinion as well, but Buddhist axiology would criticize it on the grounds that it doesn’t take the “inside view” argument far enough.

2.2.4 I’m not convinced; the idea that a hedonically neutral state could be just as good as happiness seems crazy! Am I missing something?

It might depend on what you envision by “hedonically neutral state”. In the context of everyday life, there are almost always things that (ever so slightly) bother us: uncomfortable shoes, thirst, hunger, mild headaches, boredom, itches, worries about how to achieve our goals, longing for better times... When our brain is flooded with pleasure, we temporarily become unaware of all these negative aspects; we’re temporarily freed of all bothersome components of our experience. Pleasure-flooding is by no means necessary to achieve a conscious state that’s completely free of anything bothersome, completely content. But with our current brains in the current world, pleasure-flooding is the usual way to sweep all suffering away and attain contentment. This may lead us to view pleasure as the symmetrical counterpart to suffering, and to the view that (intense) pleasure, at the expense of all other suffering-free world-states, is what we’re really after and what matters to us. However, there are also (currently rare) conscious states devoid of any suffering that aren’t necessarily pleasurable but still totally fine – examples include some meditative states, or flow-states where one is completely absorbed in some activity, with time flying and a very low level of self-awareness. According to Buddhist axiology, all conscious states of non-suffering, including such “hedonically neutral” states, deserve the label “happiness”.

2.2.5 What about composite states where I experience both happiness and suffering at the same time?

It seems that in our overall conscious evaluation, happiness either overshines the aversive component, in which case we’re good, or the aversive component still sticks out and bothers us. Sometimes there is perhaps room for conscious control, depending on what we focus on. This wouldn’t mean that the momentary happiness is outweighing the suffering. It would simply mean that there is no suffering in the first place, due to happiness occupying the entire attentional (and thus experiential) field.

2.2.6 How do NUs define suffering?

To suffer means to want your current conscious state to end or change. “Wanting” not in a reflective/System 2/abstract-preference-way, but rather in a more immediate sense that usually isn’t subject to conscious control. Things like cravings, boredom or itches count as suffering as well, albeit as extremely mild suffering compared to e.g. severe depression or being burnt alive.

According to this definition, someone experiencing pain asymbolia is not suffering. If a mind isn’t bothered at all by the pain-flavor it’s experiencing, if it doesn’t recognize any urgency to act, then there is no moral urgency to act on its behalf.

2.2.7 According to hedonistic NU, would it be a good thing if the universe was painlessly obliterated?

In theory: yes. Just as it would be totally unproblematic if no consciousness-moment had ever existed. (By contrast, NIPU can justify a strong moral distinction between non-existence and no-longer-existence.) However, it wouldn’t be the only way that suffering could be ended, as the abolition of suffering through science would be an equally good outcome for NUs, and a much better outcome for all other value systems. Good outcomes are most likely to happen if there’s a strong consensus.

2.2.8 You said a painless end to the universe would be one ideal option. What is wrong with you?

Nothing! People have wondered whether NUs tend to be depressed or are psychologically weird, e.g. incapable of experiencing real pleasure. As far as we can tell, none of that would be a remotely plausible explanation of what is going on. Some NUs may be depressed (more so than other utilitarians?), but there are also NUs with very high hedonic set-points to whom NU is simply very philosophically appealing.

Isn’t it a plausible intuition that non-existence cannot be a problem, because no one is bothered by it? (Reasoning of this sort has a long tradition in philosophy, going back to antiquity, and the same holds for the definition of “happiness” in terms of the absence of suffering, of anything that consciously bothers us.) It is perhaps only when we contemplate the matter from the (heavily biased) perspective of already existing, biologically evolved beings with a strong System 1 drive towards continued life, that we may find the idea abhorrent.

Confusion about “personal identity” (see 2.1.8 above and 2.2.10/2.2.11 below) also fuels irrational fear of non-existence. There can be no “deep fact” about me_now being “the same person” as me_tomorrow; there are only consciousness-moments with varying degrees of spatio-temporal proximity, causal dependence, qualitative similarity and memory referencing, based on which we arbitrarily group some together (“same person”).

Finally, again, the repugnant conclusions of alternative views seem at least as counter-intuitive, especially as compared to NIPU.

2.2.9 What about human values being complex?

NIPU is totally in line with that. It says: Minimize the number of unfulfilled terminal values, whatever their content. As was roughly argued above, this makes a lot of sense as the definition of “helping others” or “altruism”: It expresses how we ourselves would want to be treated and incorporates the view that value isn’t to be found free-floating out there, but is always conditional on there being someone with terminal values. This raises the question why “empty stars” should pose an altruistic problem. It’s not that they contain terminal values in need of help, problems to be solved. Sure, one can claim complexity of value for oneself, be only a partial altruist and want to modify empty stars with some other, selfish part of one’s terminal value set. This is not at all how the situation is usually construed, though – and it’s unclear why.

Against NHU, and HU more generally, the complexity of value objection does apply. The way it is commonly brought up, however, doesn’t seem very useful. It functions as a discussion stopper: “You’re wrong, here’s a link.” To the extent that the complexity point is uncontroversial, it seems to be a descriptive one, and as such doesn’t have any normative force by itself. Human moral intuitions are complex, which does not necessarily imply that terminal values need to be complex as well. Intuitions are System 1, whereas a terminal value is an idealized abstraction that System 2 “imposes” over System 1.

If your approach to thinking about your goals is that you want to incorporate every intuition you have, then NHU isn’t for you. (Although NIPU may very well be.) However, if you’re open to seriously questioning your intuitions and abandoning some of them in favor of other intuitions that you consider more fundamental, then no theoretical argument can be made to the effect that a complex intuitional starting point needs to output complex terminal values.

A terminal value is most likely not something you can read out simply from having a supercomputer attached to your intuitions. We first need to define what would count as a legitimate extrapolation procedure, and which changes we would reject as a failure of goal-preservation. This step, again, depends on what your terminal (meta-)value is. Without specifying it, no extrapolation procedure can get off the ground. It seems that, at the very bottom of it, we simply have to choose what sorts of intuitions, arguments and axioms we want to count, and which ones we’d be ready to discard.

If someone sticks to the strong (considered) intuition that they value their own happiness in the sense of Buddhist axiology, and in addition wants to apply this concern impartially/altruistically to all sentient beings (instead of just to “their own” future consciousness-moments), then this may very well be strong enough to override differing intuitions, and it would imply NHU.

2.2.10 Doesn’t everyone constantly accept suffering in order to be happy at a later time, e.g. when people go through a painful workout at the gym?

This poses no problem for NIPU: If certain suffering/happiness trades correspond to one’s ideal preference, and if one’s preference frustration would be greater without them, then NIPU commands the trades.

As for NHU: First of all, it’s not so clear that suffering/happiness trades are what’s actually happening. Think of a situation where you don’t go to the gym: You’ll feel awful about not going, you might get bored, you’ll have less willpower in the future, you might feel lazy and worse about your body. It seems that whenever we are motivated enough to actively do something, we’re motivated by something (some craving/desire) that would cause us discomfort if we didn’t act upon it.

Moreover, the intuition that there is an exchange rate between suffering and happiness seems much stronger in decision-situations that are construed as “egoistic”. When you place people in a decision-situation they explicitly recognize as altruistic, they are much less willing to inflict or accept suffering in order to turn rocks into pleasure, or to turn meditating monks into orgasmic monks – which suggests that trades in the “egoistic” case may be biased by a confused conception of “personal identity”. For instance, while people will say that it is good to walk barefoot over hot sand in order to enjoy the cool ocean afterwards (here again: what would the realistic counterfactual look like? – probably some quite unpleasant cravings), they are usually significantly less willing, when operating a control board for artificial consciousness, to simulate some hot-sand pain over here in order to produce some ocean pleasure over there.

2.2.11 This example can go both ways. Why isn’t the intuition in the other-regarding case the one that’s biased?

One could indeed argue that as person-moments making “egoistic” hedonic trades, we are more directly and better informed about the nature of the conscious states we are choosing between (due to immediate experience and memory), and that therefore we should adapt our judgment in the altruistic case to the judgment in the “egoistic” case, not vice versa.

But this reply is unconvincing: Our own experience (and memory) also informs us about the nature of the conscious states we’re choosing between in the case we clearly recognize as altruistic. It’s not true that we’re less informed about them in this case. But we’re more informed about the fact that the decision is indeed an other-regarding one. When interested in altruistic decision-making, we should therefore rely more on intuitions about cases we clearly recognize as other-regarding. And if it turns out that the case we thought of as self-regarding is actually other-regarding, too, then we should adapt our judgment in the former case to the judgment in the latter case.

Also note that NIPU, again, allows for some (intuitive?) twists: Returning to the artificial-consciousness control board from the previous question, a pre-existing simulation could have a preference against the hot-sand experience and a stronger preference for bringing an ocean-experience simulation into existence, in which case NIPU commands doing so.

2.2.12 Don’t we know (from experience and memory) that happiness and suffering are on one scale and that there is an objective exchange rate between them?

If we know this, then we shouldn’t hesitate at all about making such trades in the cases we clearly recognize as other-regarding. But many do in fact, upon reflection, hesitate to make even Omelas-type trades, where the existence of happy billions can be achieved at the cost of one miserable life. (And many people do hesitate even without having egalitarian, prioritarian or non-consequentialist intuitions.)

We don’t seem to know that there’s just one hedonic scale. Many people, upon introspection, are inclined to strongly disagree. Our current pleasure/pain-intensity ranges may be negatively skewed for evolutionary reasons, but this doesn’t carry much force for people who are able to experience the most intense current pleasures and milder current pains and are convinced that there’s an asymmetry. Also, if there were just one hedonic scale (analogous e.g. to the temperature scale), then we should expect the “zero” point to be arbitrarily placeable – but it is uncontroversial that this is not the case.

More theoretically, how much suffering someone accepts for a given amount of happiness is a function of how strongly the person desires the happiness in question. Your exchange rate will differ depending on many variables, and there seems to be no clear foundational perspective from which we could consider some of these variables to be biased and others “objective”.

Imagine you’re lying in bed in the morning and notice with relief that it is two hours before your alarm clock will go off. You’re half-asleep and super-comfortable, with perfect temperature in the room. Now someone offers you a pill that would give you two hours of a very intensely pleasurable experience, with the caveat that after the two hours, you will forget everything that happened. There is a second caveat, too: The pill is at the other end of the house, so you’d have to mentally force yourself to get up and walk through the entire house. What if you decide to just stay in bed because the current state, although not intensely pleasurable, seems perfect and all that you want for the time being? Would this be irrational? It seems that you would only get up if you started to develop a strong anticipatory curiosity for a great feeling. If this happens, your counterfactual changes, too: You’re no longer looking forward to comfortable half-sleep; instead you’d be faced with cravings and a strong sense of not wanting to regret something.

Do we just need to accurately visualize what the happiness will feel like, and then we’ll know how much suffering to accept for it at most? Although this cannot be ruled out completely given our currently limited knowledge about the (philosophy and science of) mind and consciousness, it strongly seems that different minds would react very differently, and that there wouldn’t be one “objective” exchange rate across mind-space. And even if there were, it might be horrific according to what we prefer. (This is analogous to a meta-ethical objection to the “divine command” theory of ethics: Divine commands might be horrific according to what we prefer.)

2.2.13 Aren't exchange rates between different kinds of suffering arbitrary, too?

Maybe they are arbitrary too, but by only considering a one-dimensional axiology, you at least don’t get two layers of arbitrariness on top of each other.

But maybe they’re not. Maybe pleasures of differing intensities can be non-arbitrarily intensity-compared, and the same goes for sufferings of differing intensities, but there’s no non-arbitrary bridging principle – because moral importance/urgency lies on the side of suffering. It is hard to make the case for the intrinsic importance/urgency of something being different (e.g. pleasure-intensity being higher) if the thing itself (i.e. the lower pleasure-intensity consciousness-moment) does not display any respective striving.

2.2.14 Doesn’t it speak against NU that it is such a minority position that it hasn’t even been discussed, let alone endorsed by anyone?

The premise of this question is quite inaccurate, especially as regards the core intuitions behind formal NU. The intuition that the badness of suffering doesn’t compare to the supposed badness of inanimate matter (as non-pleasure) seems very common, and the same goes for the view that contentment is what matters, not pleasure-intensity.

There are nearly 1.5 billion Buddhists and Hindus, and while Buddhism is less explicit and less consequentialist than negative utilitarianism, the basic (though not uniform) Buddhist view on how pleasure and suffering are valued is very similar to negative utilitarianism; Hinduism contains some similar views. Ancient Western philosophers such as Epicurus and some Stoics proposed definitions of “happiness” in terms of the absence of suffering. Arthur Schopenhauer is a classic example of philosophical pessimism, emphasizing the enormity of the world’s suffering. The story of the fictional city of Omelas showcases the principle that it’s wrong for some to suffer in order to create greater pleasure for others. Ursula K. Le Guin’s idea for the story came from William James, who in turn was inspired by Fyodor Dostoyevsky. The theme appears in countless other places as well, such as the following passage from The Plague by Albert Camus: "For who would dare to assert that eternal happiness can compensate for a single moment's human suffering?"

In contemporary philosophy, David Benatar has argued for the pleasure/pain asymmetry. Prioritarianism, maximin ethics and related principles put extra weight on suffering compared with happiness. David Pearce is a current proponent of formal (lexical) NU. Dan Geinster outlined the position “anti-hurt”, which is essentially identical to what this FAQ covers under Buddhist axiology. In 2014, the Swiss philosopher Bruno Contestabile published a paper on Negative Utilitarianism and Buddhist Intuition, in which he also uses the term “Buddhist axiology” (independently of any influence of the EA movement in Switzerland). Richard Ryder has developed a theory he calls “painism”. Jonathan Leighton has developed a practical “negative utilitarianism plus”. Christoph Fehige, Roger Chao and Fabian Fricke have published academic defenses of variants of negative utilitarianism, focusing on theoretical reasons as well as on the fact that the widespread rejection of utilitarianism seems to stem significantly from people having classical (hedonistic) utilitarianism in mind: A focus on pleasure-intensity strikes us as frivolous; pain/pleasure trades seem illegitimate (unless favored by an individual preference); and we want to avoid a duty to stack the universe with (happy) people, and classical utilitarianism’s Repugnant Conclusions specifically.

Generally speaking, suffering reduction is widely recognized as the foremost principle in ethics: If there's one ethical principle that most people agree on, it's the importance of reducing suffering. It seems to be a widespread intuition that there's something particularly morally urgent about suffering.

Sure: Many of the people advocating “suffering priority” views but rejecting utilitarianism could probably be hard-pressed and forced to give up or modify some of their claims. But it’s not at all clear that they would, upon reflection, settle for something other than formal NU. The system we’ve described here seems to be attractive to many people and in fact able to result in unusually satisfactory and stable ethical “reflective equilibria”: If you agree with the sentiment (probably apocryphally attributed to Mark Twain) that "I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it”; or if you do fear death, but think the case of not being born is very different because no preferences/values are then being violated; or if you think that happiness as contentment – as opposed to happiness as pleasure-intensity – is what’s ethically important; then you are quite likely to find that formal NIPU or NHU is the most coherent expression of your normative views on altruism.

2.2.15 Where can I learn more about the arguments for NU sketched here?

Email nu.faq.feedback@gmail.com for access to drafts elaborating on NHU and NIPU.

2.3 What is negative-leaning utilitarianism?

Most commonly, negative-leaning utilitarianism (NLU) refers to classical hedonistic utilitarianism with an exchange rate that accepts an atypically low amount of suffering for a given amount of happiness. If the exchange rate is sufficiently strongly opposed to suffering, NLU becomes indistinguishable from absolute NU in its practical application. NLU can also result from moral uncertainty over negative utilitarianism and forms of classical utilitarianism (with differing exchange rates).
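One minimal way to write this down (illustrative notation, continuing that of section 1.3): NLU ranks world-states by a weighted sum

V_NLU(w) = Σ_i h_i(w) − k · Σ_i s_i(w),        with suffering-weight k > 1

Classical utilitarianism is the special case k = 1, and as k grows very large, NLU’s practical recommendations converge on those of absolute NU.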

2.4 What is lexical NU?

It’s the view that suffering is lexically more important than happiness: reducing suffering is of moral importance, whereas creating pleasure is good but not morally required. Lexical NUs thus care intrinsically about pleasure and would welcome the creation of new happy beings in cases where the amount of suffering is unaffected, but in practice, their focus is only on suffering (the same as for strict NUs).
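The lexical structure can be stated precisely (illustrative notation): with S(w) the total suffering and H(w) the total happiness in world-state w, lexical NU prefers world-state A over B iff

S(A) < S(B),        or        S(A) = S(B) and H(A) > H(B)

Happiness thus functions only as a tie-breaker, which is why lexical NU coincides with strict NU in practice whenever any available option affects suffering at all.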


3. PRACTICAL IMPLICATIONS

3.1 Are the priorities of NIPUs different from the ones of NHUs?

Most likely not. There are always possible scenarios where it would make a difference, but the two views have in common that suffering/thwarted preferences cannot be outweighed, so their focus will be on preventing bad (far-)future scenarios. The same goes for a number of related views that place a strong priority on suffering.

3.2 Should NUs try to increase extinction risk?

No, that would be very bad even by NU standards. There are important and dominant reasons against doing this. The short answer: The difference between “no future” (i.e. no Earth-originating intelligence expanding into space) and a decent future, where concern for suffering and thwarted preferences plays some role (even though it would likely not be the only value), is much smaller than the difference between a decent future and one that goes awfully wrong. NUs will benefit more by cooperating and compromising with other value systems in trying to make the future safer with regard to (agreed-upon) worst-case scenarios, rather than by trying to prevent space colonization from happening at all. It would be a tragedy if altruistically-concerned people split up into opposing factions because they have different definitions of “doing what is good”, while greed and bad incentives let the non-altruistically-inclined people in the world win the race. Instead, those who share at least some significant concern for the reduction of suffering should join together.

The longer answer includes several points:

1) Attempting to destroy the world comes close to being the worst thing NUs could do from the perspective of other value systems (e.g. classical utilitarianism or common-sense morality). If NUs go down this path, they provoke strong hostility and make potential cooperation impossible, which would be a tragedy because successful cooperation and compromise is robustly better in expectation than taking one’s chances through extreme defection.

2) Many forms of existential-risk reduction seem to also increase global stability and, thereby, the likelihood of a future where concern for suffering plays an important role. War, arms races or civilizational collapse have some chance of preventing space colonization, but in all the cases where they merely postpone it (if they even do – arms races might well accelerate it), the resulting values could be much worse, or safety could become less of a concern due to increased competition.

3) Especially in regard to AI-related existential risks, NUs have strong incentives to work on making AI safe. Uncontrolled AI (“UFAI”) has huge risks. Naively, one would think that paperclippers are a good outcome for NUs, because paperclips don’t suffer. However, because UFAI has no concern for suffering, there is a risk that it will e.g. make (potentially sentient) ancestor simulations to gain knowledge about evolution, use suffering subroutines for various tasks and production processes, or build a fleet of sentient mini-paperclippers that will help achieve its ultimate aim. While these risks may appear small, the expected disvalue remains considerable: If something goes badly, it will affect the entire lightcone. And it’s unlikely that a UFAI that ends human history would not go on to colonize space, too. Controlled AI (“FAI”), on the other hand, will very likely be driven at least in part by humane values and a strong concern for unnecessary suffering, which makes worst-case scenarios much less likely. A further important advantage of FAI is that it would more likely do the right thing and invest resources into preventing suffering, if it – through some type of black swan – turns out that there are sources of suffering out there to prevent. Not having an AI that incorporates the value of reducing suffering would thus be a risk.

4) Finally, it makes sense to have a strong heuristic against making irreversible decisions, especially when you’re in a decision-situation with multiple actors, where “unilateralist’s curse” considerations apply. This makes sense because of empirical uncertainty and potential black swans, but also because of evaluative uncertainty about one’s own precise goals. If future mind/brain technologies grant us new and revolutionary insights into the nature of pleasure and suffering, this could lead people to readjust their views on what they care about. (Maybe there would be a precisely isolated and agreed-upon “moral importance/urgency” component in (most) pains, such that what to do about pleasures might then be more of an answerable empirical question.) For people who value happiness or eudaimonia intrinsically, a failure to colonize space and turn rocks into happiness/eudaimonia would be a catastrophe of unprecedented scope, so bad according to their values that even a very bad future for NUs would arguably not come close in comparison. Therefore, even if one only places a low credence in such a view, this could become relevant in expectation, because for non-NUs comparatively more is at stake (depending at least on the most parsimonious and intuitive ways of normalizing values across moral uncertainty).

3.2.1 What about forms of extinction risk reduction that have no additional effects?

Building shelters in order to decrease extinction risks would be one intervention that plausibly has no beneficial side-effects on global stability. A rebuilt human civilization might plausibly end up with less humane values at a certain level of technology, given that developments such as the Renaissance, the Enlightenment and the expanding circle may have been rather contingent. Therefore, from an NU perspective (and perhaps even from the perspective of classical utilitarianism), building shelters might be negative based on current empirical expectations. In case other people are set on building shelters, NUs should take care to inform these people about the risks, but for reasons of cooperation, they should refrain from hostile opposition. And perhaps there will be possibilities to engage in moral trade, where e.g. both parties, NUs and extinction-risk reducers, could decide to work on something else instead that is mutually beneficial.

3.3 Which interventions should NUs be pursuing now?

A very good candidate for priority is movement building with a focus on shared practical goals and concern for suffering (which needn’t take the form of “strict” NHU or NIPU). Another very important focus area is research into specific pathways by which AI outcomes can be affected, and into possibilities of cooperating with other value systems.

3.4 If NU becomes more popular, should we be worried that people naively attempt to destroy the world, even though this would be antithetical to the goal of reducing expected suffering?

Unfortunately, this is indeed a reason for concern. It is important that all NUs read about the arguments against increasing extinction risks, and that NU itself becomes associated with cooperation rather than defection. This is a strong argument to only promote NU in a way that makes it clear that it is cooperative, as exemplified here.

3.5 How should other value systems react to NU becoming a big topic?

The prior for population ethics being a crucial consideration is very high, and the best arguments for NU have so far been mostly neglected. So to the extent that the question is epistemic, contributions from NUs should be welcomed.

In terms of practical priorities, NU poses a concern for some other value systems because it makes humanity less enthusiastic about (non-AI-related) existential-risk reduction. (Conversely, these value systems pose a concern for other value systems and for NU.) However, if NU always comes coupled with a focus on increased stability and cooperation with other value systems, then this effect will be negligible compared to all the benefits from increased altruistic resources going into far-future priorities. And if the NUs with the highest influence draw other NUs away from defection-based ideas rather than towards them, then this provides a further reason why other value systems should welcome an NU presence.


* * *


The NU FAQ has been uploaded with permission of the anonymous authors. Many thanks. DP.

