
Reclaiming Slurs through Conceptual Engineering

Images generated by MidJourney AI, based on a prompt about a conceptual hammer destroying an ideological structure.

Introduction

Ideology can leave us “stuck in a cage, imprisoned among all sorts of terrible concepts.”[1] Slurs are linked to an especially harmful kind of concept. Successfully reclaiming slur terms requires understanding and rejecting these concepts. Linguistic reclamation of slur terms, when combined with critique of the underlying concept, can put an oppressive weapon out of action and help liberate us from pernicious conceptual cages.

My analysis will not focus on the semantic theory of slurs or slur reclamation. Constructing a natural language semantics of slurs is primarily a matter for empirical linguistic research, not philosophy. Indeed, Cappelen (2017) argues that semantics should be left to specialists with the expertise to conduct empirical study and formal analysis of linguistic phenomena.[2] Of course, findings in linguistics will be very relevant for philosophers, and it is certainly within the purview of philosophy to interpret these findings and investigate the theoretical foundations of linguistics. The substantial philosophical literature on the semantics of slurs also demonstrates that philosophers can use interdisciplinary approaches to make meaningful progress in semantics. Developing theoretical semantic accounts of slurs has proven valuable. However, validating these theories will require empirical study of linguistic patterns in natural language use. Then, we can evaluate how operationalized forms of these semantic theories can explain the observed patterns. Ultimately, settling the differences between semantic theories of slurs requires linguistic research.

However, conceptual engineering and conceptual ethics are matters for philosophy. The task of philosophers is not just to describe linguistic tools, but to assess the representational features of these tools and find ways to fix their defective or harmful aspects. Therefore, instead of conducting descriptive semantics, this paper focuses on the concepts underpinning slur terms. Section 1 describes the concepts connected to slurs and explicates their normative flaws. Section 2 argues that fully successful reclamations of slurs must involve conceptual engineering, not just lexical change. Finally, section 3 addresses some important objections to this conceptual view of slurs.

1. Slurring Concepts

Slur lexical items are connected to underlying concepts (representational devices), which we can call slurring concepts. These concepts are defective and harmful in virtue of their key characteristics: they are thick, essentializing, reactive, and subordinating.

First, slurring concepts are thick concepts, with both descriptive and normative features. Slurs express a negative evaluation of some social group.

Second, slurring concepts are essentializing. As Neufeld describes, slurs designate an essence that is causally connected to negative stereotypical features of some social group.[3] This essence is a failed natural kind. For example, the N-word posits a "blackness essence" that is supposed to be causally responsible for negative features of Black people. The evidence for this semantic view is substantial, as it can explain features of slurs in natural language that other theories do not account for. For instance, it explains a systematic linguistic pattern: slurs are always nouns. This is because nouns are unique lexical devices that sort things into enduring, essential categories like natural kinds. Neufeld's theory has many other successful predictions and explanatory benefits. However, our primary concern is not in identifying the correct semantic theory, but in understanding slurring concepts and their defects. It is sufficient to say that slurs must make use of essentializing concepts to refer to a targeted group in a stable way and to warrant negative inferences about this group.

Essentializing concepts are epistemically flawed ways to describe social groups. Using essentializing concepts for real natural kinds like rock and atom is appropriate. However, social groups like races, religions, and sexual orientations are not immutable essences with strict natural boundaries, and they cannot justify attributing inherent properties to their members. Essentializing social categories produces cognitive mistakes and bad inferences.[4] Furthermore, essentializing concepts have normative harms, as they encourage dehumanization and harmful stereotypes. Treating members of a targeted group as determined by their group membership, without the autonomy of a person, is clearly dehumanizing. Empirical research shows that essentializing concepts, like a biological conception of race, result in increased stereotyping and discrimination.[5] For example, people who endorse an essentializing biomedical concept of mental illness distance themselves more from those seen as mentally ill, perceive them as more dangerous, have lower expectations of their recovery, and show more punitive behavior.[6] Simply making an essentializing concept salient can cause members of the essentialized group to perform worse on various activities, even if the stereotypes associated with the group are neutral or positive.[7] These defects alone are strong reasons to reject the use of essentializing concepts for social groups.

Third, slurring concepts are reactive, as described by Braddon-Mitchell: a reactive concept automatically tokens a reactive representation, which is a representation that shortcuts the belief-desire system and includes a motivation for action.[8] For instance, the reactive concept kike can trigger a representation of Jews that encourages prejudicial actions against them, and includes a negative view of Jews that justifies these actions. Indeed, one study demonstrated that “category representations immediately and automatically activate representations of the related stereotype features.”[9] This makes slurs uniquely dangerous forms of linguistic propaganda, as they can bypass conscious processing to produce discriminatory representations and behaviors.

Finally, slurring concepts are subordinating. They are thick concepts with a specific kind of normative component: a negative evaluation that ranks the targeted group as inferior and legitimates discriminatory behavior toward the group.[10] This represents members of the target group in ways that justify derogating, intimidating, abusing, or oppressing them. Due to their specific features, slurring concepts do not just cause subordination, they constitute subordination. This constitutive claim is surprising: if a representation is just held mentally and does not manifest in any harmful actions, how can it be subordinating?

The act of conceptualizing a social group in an essentializing, negative way creates reactive representations that result in subordinating stereotypes and inferences. Because our social reality is shaped by the way others see us, being surrounded by people who represent you as inferior or subhuman is a kind of subordination itself, even if their representations do not lead to tangible actions. Furthermore, slurring concepts are so closely tied to subordinating effects that it is not sensible to separate this kind of representation from its consequences. Holding a slurring concept leads to unconscious, automatic discriminatory behaviors, and even members of the targeted group experience inhibitions and impaired performance when a slurring concept is salient.[11] Ultimately, whether slurring concepts are constitutive of subordination or only cause subordination, the vital point is that they are subordinating.

2. Slur Reclamation as Conceptual Engineering

Reclaiming slurs is often an intentional project carried out by oppressed groups to resist their oppression and to co-opt a tool of subordination for purposes of liberation. Taking ownership of a slur and imbuing it with positive associations is an act of “weapons control” that diminishes the word’s subordinating power, effectively putting the slur out of action.[12] For example, in the 1980s, LGBT activists applied the slur “queer” to themselves in positive and pride-evoking ways, and they were largely successful in changing the word’s connotation.[13] However, I argue that changing a lexical item’s meaning is insufficient for slur reclamation. Lexical change is not an effective form of weapons control because it fails to challenge the most dangerous weapon: the slurring concept remains intact. Fully successful slur reclamation requires conceptual change, and not just linguistic change. The slurring concept connected to the lexical item must be critiqued and dismantled.

2.1 Partial vs. Full Slur Reclamations

How can we explain slur reclamation? Under semantic theories of slurs like Croom's,[14] one might describe reclamation as the process of adding positive properties to a term that become more salient than any negative properties. This explanation cannot account for slur reclamations that do not change the valence of a term but instead detach it from an essentializing social kind. For example, the term "gypsy" as it is used in the U.S. is disconnected from the Roma social group, but the term is still attached to negative properties and used as a pejorative. At least in the American cultural context, this slur has been neutralized – it is no longer linked to an essentializing concept. However, because it still has derogating force, "gypsy" has not been reclaimed.

In contrast, Camp's perspectival theory holds that regardless of what perspective an individual holds when using a slur, the slur is still connected to a slurring perspective.[15] However, it is empirically clear that slurs can be detached from derogating perspectives through individual and collective linguistic actions. Camp's theory cannot explain this reclamation without substantial revisions. Regardless, her perspectival approach is insightful in emphasizing that slurs are linked to a near-automatic, integrated way of thinking about a targeted group. Rather than interpreting slurs as signals of allegiance to a somewhat vague 'perspective,' we can interpret them as uses of slurring concepts. As a result of the specific features of slurring concepts, their properties are similar to Camp's perspectives.

Finally, under Neufeld’s account, just as a slur is created when a failed natural kind is causally connected to negative properties, a slur can be unmade when the kind is disconnected from these negative properties. For instance, the reclaimed slur “queer” is still used to refer to roughly the same social kind (people with non-conforming sexual and gender identities), but it is disconnected from negative properties, and instead is even attached to positive properties. In this case, the social kind connected to the term remained the same, but the valence associated with it was neutralized or reversed. In the “gypsy” case discussed above, the opposite occurred in the US – the negative properties of the word remained, while it was disconnected from the essentializing concept (of the Roma as a social kind). Neufeld’s explanation of derogatory variation can explain both kinds of slur reclamation: holding the level of essentialization fixed, more negative slurs are more derogating, while holding the negativity fixed, more essentializing slurs are more derogating. Disconnecting slurs from essentializing concepts and reducing their pejorative force are therefore two ways to carry out reclamation projects.

All of these theories fail to directly account for the importance of confronting the underlying concept in slur reclamation. If a mental representation like a slurring perspective or concept is critical to the meaning and force of a slur, then it follows that complete slur reclamation must fix these mental representations and not merely the lexical item. Indeed, Neufeld holds a meta-semantic view where terms inherit their linguistic meaning from the mental concepts we associate with them.[16] Partial reclamations can occur when a positive or neutral version of the slur term achieves linguistic uptake, or when the lexical item is no longer associated with an essentialized social group. However, this kind of reclamation is limited and insufficient. It only decouples a lexical item from a slurring concept and does not subvert the slurring concept itself. The most dangerous weapon, the slurring concept, remains at large, and will continue to manifest in other lexical items.

Partial reclamations can thereby constitute illusions of change. They play 'whack-a-mole' with lexical items while failing to address the root cause. Full reclamation involves not just lexical change, but a successful dismantling of the slurring concept. The importance of the underlying concept means that "ameliorative attempts that focus exclusively on the language used are unlikely to have much success in the long run."[17] For example, the descriptive term for intellectual disability has been changed many times, from "moron" to "idiot" to "mentally retarded." When they were initially introduced, these were non-pejorative descriptive terms, but all were rapidly adopted as slurs for people with intellectual disabilities. This shows the insufficiency of merely changing language without critique and rejection of the slurring concept.

2.2 Conceptually Engineering Slurs

Reclaiming slurs therefore requires addressing the slurring concept. One fruitful method for carrying out full reclamation is conceptual engineering: the process of assessing our representational devices, reflecting on how to improve them, and implementing these improvements. As we have already diagnosed the flaws of slurring concepts, how can we go about fixing these representations? One obvious approach is to eliminate the slurring concept entirely. However, this is just elimination, not reclamation. It is also not clear how to eliminate a slurring concept. The characteristic features of slurring concepts give us a few lines of attack. For instance, we can reject the negative normative component of the thick concept and encourage adoption of either a purely descriptive concept (e.g. person of color) or a thick concept with a positive normative component (e.g. queer). However, this approach risks "reinforcing an essentialist construction of the group identity,"[18] as it maintains an essentializing concept of the targeted group. The slur can easily be reactivated and weaponized against its targets by reversing its valence, making this type of reclamation very precarious.

Another possible approach is to reduce the reactivity of slurring concepts. For example, perhaps training people to consciously recognize how slurs prompt automatic reactive representations of the targeted group can curb the impact of reactive concepts. Indeed, there is some evidence that implicit bias training can work to a limited degree.[19] However, this only mitigates the slurring concept's effects. Additionally, slurring concepts are reactive because they are essentializing. Essentialism about social kinds is what leads to automatic, reactive processing about the groups targeted by slurs.[20] Likewise, attempting to undermine the subordinating force of slurring concepts starts at the end of the process, as it fails to address the features that make these concepts subordinating. Ultimately, all approaches to engineering slurring concepts lead us back to the same source: essentialism.

Disarming and rehabilitating a slurring concept therefore must start by rejecting essentialism. Failing to critique the essentializing concept leaves the conceptual foundations of the slur intact. In this sense, concepts like woman, race, mental illness, and homosexual are proto-slurring concepts. By essentializing a social category, these concepts function to lay the groundwork for slurs, making the essentialized group a target for oppression and subordination. Successful critiques of essentializing concepts can remove the ground that slurs stand upon. For example, Haslanger argues that woman is a failed natural kind used to mark an individual as someone who should occupy a subordinate social position based on purported biological features.[21] Shifting the meaning of “woman” to be more in line with its real social function can unmask this underlying ideology. Instead of conceptualizing womanhood as an essential biological category, we should treat woman as a folk social concept used to subordinate. In the same vein, Appiah critiques the essentializing concept of race, arguing that there is no biological or naturalistic basis for treating races as real categories.[22] Finally, many thinkers including Szasz and Foucault argue that mental illness is a failed natural kind used to justify social exclusion practices.[23] Conceptual engineering projects like these can undermine the essentialist foundations of slurs.

3. The Importance of Social Practice in Slur Reclamation

One objection to anti-essentialist conceptual engineering projects is that partial slur reclamations are successful precisely because they enable positive identification and solidarity within an essentialized group. For example, the N-word is a way for Black people to express solidarity and camaraderie as members of an essentialized and oppressed social category.[24] Rejecting the essentializing race concept could have at least two harmful consequences: (1) it precludes organizing and expressing solidarity along racial lines, and (2) it can lead to false consciousness, pretending that the essentialized categories do not continue to have real social effects simply because we have rejected the essentializing concept. However, solidarity does not require essentialism. Instead of treating race as an essential category, one can treat race as a social construction used to target groups for subordination. People within the targeted groups can then express solidarity not as common members of a real natural kind, but as fellow targets of arbitrary social oppression. Indeed, the liberatory, reclaimed form of the N-word does not require treating Blackness as an essential category. The reclamation can reject the essentializing concept while emphasizing the way this concept is still used to oppress and conveying solidarity and resistance amongst members of the targeted group.

However, why try to reclaim slurs at all? Why not introduce a new lexical item to communicate a new, liberating, non-essentializing concept, instead of using a term tainted by being a former slur? It seems paradoxical to intentionally choose a lexical item that one considers deeply flawed. Slur terms might also have direct lexical effects, where the word itself produces negative cognitive reactions even if its meaning is changed.[25] (For example, the word "Hitler" has negative lexical effects regardless of its conceptual content or usage). This gives a prima facie reason to avoid the lexical item. However, there are important reasons why conceptual engineering projects should reclaim the slur word by associating it with a new concept, rather than abandoning it entirely. First, maintaining the original lexical item allows us to put an oppressive weapon out of action, and to actually turn it against the oppressors. Once reclaimed, the word no longer has its subordinating power. Instead, it can be used as a vehicle for liberatory, non-essentializing concepts that replace the slurring concept. Second, language has an important role in shaping social reality. Reclaiming terms with preexisting impacts can allow us to ameliorate or even reverse these impacts on social reality, while introducing a new term will require building its social impact from the ground up.[26] The benefits of co-opting slur terms are sufficient to outweigh the costs of lexical effects.

Finally, one especially potent objection to concept-focused slur reclamation projects is that they prioritize changing representations over changing practices. As Táíwò emphasizes, our analysis of propaganda should focus not just on mental representations, but how these representations influence practice and action.[27] Even if a person does not hold a slurring concept, they can still act upon a public practical premise, treating members of the targeted group in essentializing and subordinating ways. The important feature of slurs is not the concept, but the way these slurs feature in oppressive social structures and license harmful actions. Therefore, it is misguided to emphasize mental representations, and our primary concern in reclamation projects should not be changing concepts. Rather, we should focus on the social structures and practices that give slurring concepts their power. Conceptual engineering is far too abstract and ideal, placing our priorities in the wrong places and failing to recognize the importance of practice. We need reality engineering, not conceptual engineering.

This objection is well-taken, and I agree with Táíwò's practice-first approach. Any attempt to fully reclaim a slur must coincide with material changes to prevent oppressive practices. However, harmful representations can be oppressive in themselves. Slurring concepts represent their targets as essentially subordinate kinds, and result in oppressive and limiting mindsets. Lifting the blinders of a slurring concept can itself be liberatory. Additionally, conceptual engineering does not exclude practical reform, and it can help enable and guide material changes. Furthermore, a key feature of slurring concepts is that they are reactive. This makes slurring concepts action-engendering, as they automatically motivate and encourage discriminatory action. Focusing on the harmful actions associated with a slurring concept is a treatment of a symptom, not the underlying conceptual disease. Finally, slurring concepts are integrated within larger oppressive conceptual systems that can be aptly characterized as ideologies. Therefore, reclaiming slurs and critiquing slurring concepts functions as a form of ideology critique. Conceptual engineering can make the essentializing, subordinating ideology more visible, discouraging complacency and false consciousness while promoting actions to resist this ideology.

Conclusion

Dismantling slurring concepts is an essential step in fully successful slur reclamation. This paper emphasizes the critical role of slurring concepts. I began by describing the key features of slurring concepts that enable slurs to serve their harmful function. Then, I argued that full reclamation requires not just lexical change but conceptual engineering, and that rejecting essentializing thinking is the key to disarming slurs. Finally, I addressed some objections and complications in the engineering of slurring concepts. Reclaiming slur terms and critiquing slurring concepts can serve a vital role in critiquing and resisting oppressive ideologies.

Bibliography

Appiah, Kwame Anthony. The ethics of identity. Princeton University Press, 2010.

Bolinger, Renee. "The Language of Mental Illness." In Routledge Handbook of Social and Political Philosophy of Language, edited by Justin Khoo and Rachel Katharine Sterken. Routledge, forthcoming. PhilArchive copy v1: https://philarchive.org/archive/BOLTLO-7v1

Braddon-Mitchell, David. "Reactive Concepts: Engineering the Concept CONCEPT." In Conceptual Engineering and Conceptual Ethics. Oxford University Press, 2020.

Camp, Elisabeth. “Slurring perspectives.” Analytic Philosophy 54, no. 3 (2013): 330-349.

Cappelen, Herman, “Why philosophers shouldn’t do semantics,” Review of Philosophy and Psychology 8, no. 4 (2017): 743-762.

Cappelen, Herman. Fixing language: An essay on conceptual engineering. Oxford University Press, 2018.

Carnaghi, Andrea, and Anne Maass. "In-group and out-group perspectives in the use of derogatory group labels: Gay versus fag." Journal of Language and Social Psychology 26, no. 2 (2007): 142-156.

Croom, Adam M. “Slurs.” Language Sciences 33, no. 3 (2011): 343-358.

Fawaz, Ramzi, and Shanté Paradigm Smalls. “Queers Read This! LGBTQ Literature Now.” GLQ: A Journal of Lesbian and Gay Studies 24, no. 2-3 (2018): 169-187.

Habgood-Coote, Joshua. “Fake news, conceptual engineering, and linguistic resistance: reply to Pepp, Michaelson and Sterken, and Brown.” Inquiry (2020): 1-29.

Herbert, Cassie. “Precarious projects: the performative structure of reclamation.” Language Sciences 52 (2015): 131-138.

Jeshion, Robin. “Pride and Prejudiced: on the Reclamation of Slurs.” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.

Khoo, Justin. “Code words in political discourse.” Philosophical topics 45, no. 2 (2017): 33-64.

Langton, Rae. “Speech acts and unspeakable acts.” Philosophy & Public Affairs (1993): 293-330.

Maitra, Ishani. “Subordinating speech.” Speech and harm: Controversies over free speech (2012): 94-120.

Neufeld, Eleonore. An essentialist theory of the meaning of slurs. Ann Arbor, MI: Michigan Publishing, University of Michigan Library, 2019.

Nguyen, Hannah-Hanh D., and Ann Marie Ryan. "Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence." Journal of Applied Psychology 93, no. 6 (2008): 1314.

Podosky, Paul-Mikhail Catapang. “Ideology and normativity: constraints on conceptual engineering.” Inquiry (2018): 1-15.

Pritlove, Cheryl, Clara Juando-Prats, Kari Ala-Leppilampi, and Janet A. Parson. “The good, the bad, and the ugly of implicit bias.” The Lancet 393, no. 10171 (2019): 502-504.

Richard, Mark. "The A-project and the B-project." In Conceptual Engineering and Conceptual Ethics, edited by Alexis Burgess, Herman Cappelen, and David Plunkett. Oxford University Press, 2020.

Rieger, Sarah. “Facebook to investigate whether anti-Indigenous slur should be added to hate speech guidelines.” CBC News. Oct 24, 2018.

Stanley, Jason. How propaganda works. Princeton University Press, 2015.

Táíwò, Olúfémi O. “The Empire Has No Clothes.” Disputatio 1, no. ahead-of-print (2018).

Táíwò, Olúfẹmi. “Beware of Schools Bearing Gifts.” Public Affairs Quarterly 31, no. 1 (2017): 1-18.

  1. Nietzsche, Friedrich. The twilight of the idols. Jovian Press, 2018. Pg. 502.
  2. Cappelen, Herman, “Why philosophers shouldn’t do semantics,” Review of Philosophy and Psychology 8, no. 4 (2017): 743-762.
  3. Neufeld, Eleonore, An essentialist theory of the meaning of slurs, Ann Arbor, MI: Michigan Publishing, University of Michigan Library, 2019.
  4. Wodak, Leslie, and Rhodes, “What a loaded generalization: Generics and social cognition,” (2015).
  5. Prentice and Miller, “Psychological essentialism of human categories,” (2007).
  6. See Haslam (2011), Mehta and Farina (1997), Lam, Salkovskis, and Warwick (2005), Phelan (2005).
  7. Nguyen, Hannah-Hanh D., and Ann Marie Ryan, “Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence,” Journal of applied psychology 93, no. 6 (2008): 1314.
  8. Braddon-Mitchell, “Reactive Concepts,” Conceptual Engineering and Conceptual Ethics (2020): 79.
  9. Neufeld, pg. 21. Quote is from a summary of a study by Carnaghi & Maass (2007).
  10. See Maitra “Subordinating speech,” (2012).
  11. See empirical evidence in Carnaghi and Maass (2007); Nguyen and Ryan (2008).
  12. Jeshion, Robin, “Pride and Prejudiced: on the Reclamation of Slurs,” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.
  13. Fawaz, Ramzi, and Shanté Paradigm Smalls, “Queers Read This! LGBTQ Literature Now,” GLQ: A Journal of Lesbian and Gay Studies 24, no. 2-3 (2018): 169-187.
  14. Croom, Adam M, “Slurs,” Language Sciences 33, no. 3 (2011): 343-358.
  15. Camp, Elisabeth, “Slurring perspectives,” Analytic Philosophy 54, no. 3 (2013): 330-349.
  16. Neufeld, An essentialist theory of the meaning of slurs, pg. 3 (in footnote 8).
  17. Renee Bolinger, “The Language of Mental Illness,” in Justin Khoo & Rachel Katharine Sterken (eds.), Routledge Handbook of Social and Political Philosophy of Language (forthcoming).
  18. Herbert, Cassie, “Precarious projects: the performative structure of reclamation,” Language Sciences 52 (2015): 131-138. Pg. 133.
  19. Pritlove, Cheryl, Clara Juando-Prats, Kari Ala-Leppilampi, and Janet A. Parson, “The good, the bad, and the ugly of implicit bias,” The Lancet 393, no. 10171 (2019): 502-504.
  20. Prentice and Miller (2007).
  21. Sally Haslanger, “Going on, not in the same way,” Conceptual engineering and conceptual ethics (2020): 230.
  22. Kwame Anthony Appiah, The ethics of identity, Princeton University Press, 2010.
  23. See Jeremy Hadfield, “The Conceptual Engineering of Mental Illness,” jeremyhadfield.com (2020) for a review.
  24. Robin Jeshion, “Pride and Prejudiced: on the Reclamation of Slurs,” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.
  25. See Cappelen, “Fixing Language,” (2018).
  26. Herman Cappelen, “Conceptual Engineering: The Master Argument,” Conceptual engineering and conceptual ethics, Oxford University Press (2019).
  27. Olúfémi Táíwò, “The Empire Has No Clothes,” Disputatio 1, no. ahead-of-print (2018).

Why We Need Emotion to Interpret the World

Heidegger's Being and Time will be cited as BT with marginal pagination. 

Disclosing the world is a precondition for any engagement or concern with the world, as it makes the ready-to-hand "accessible for circumspective concern" (BT 76). Something must light up the world, making its totality of references, assignments, and tools available to us. But how is the world lit up or disclosed? Through the inseparably connected components of the care-structure, including attunement, understanding, fallenness, and discourse. This essay focuses on attunement, perhaps the most fundamental part of the care-structure, as it is what makes things matter to Dasein in the first place (BT 137). Section 1 reconstructs Heidegger's account of attunement and moods in the context of his broader existential analytic. Section 2 addresses some major methodological concerns for his account. Ultimately, Heidegger's analysis of attunement illuminates key ontological structures of our experience and remains relevant even in a modern scientific context.

1. Attunement and Mood

Heidegger distinguishes between two concepts: an attunement or state-of-mind (Befindlichkeit), and a mood (Stimmung).[1] Unfortunately, Heidegger does not explicitly delineate these terms, and often uses them interchangeably. One interpretation is that attunement is the ontological existentiale, while mood is the ontic manifestation of attunement. In less technical terms, attunement is the fundamental condition that allows us to experience the world as meaningful and ‘mooded.’ Mood is the term for more specific modes of attunement, like fear, anxiety, joy, anger, or focus. Moods are therefore derivative from attunement. Perhaps Heidegger does not need to distinguish between the two. After all, we never experience some abstract, free-floating, or content-free attunement. Instead, we are always experiencing a specific, concrete mood. Attunement is a concept for describing the character of moods in general, as they all share a common structure. What are the characteristics of this structure?

An intuitive view is that moods are occasional, transient emotional experiences that affect us temporarily. One can be more or less moody, or feel a particularly strong mood, but moods are not constant features of our experience. For Heidegger, moods are far more fundamental. We are always already in a mood, and “we are never free from moods” (BT 136). Dasein is Being-in-the-world: it is always absorbed in and engaged with a web of references and assignments that make a totality of equipment ready-to-hand (BT 76). Moods make things accessible to us as equipment, making them meaningful. For instance, a mood like “focus” reveals this laptop as a tool for-the-sake-of the project of writing this essay. I am able to encounter only what a mood has already disclosed to me. Moods thereby disclose the worldhood of the world.

Moods allow us to "encounter something that matters to us" (BT 138). In this sense, moods color the world. However, this metaphor is misleading, as it suggests attunement simply tinges or tints objects that are already revealed. As Schopenhauer writes, "subjective mood—the affection of the will—communicates its color to the purely viewed surroundings."[2] For Heidegger, moods are not just tinted lenses that give already-revealed objects some emotional color. Attunement, the structure of mood, is more like an atmosphere than a tinted lens: moods are always present, even if not visible, and are necessary for any experience of the world whatsoever.[3] Attunement is how the world opens up to me – whether it is opened up as a burden, a fearful place, or a wonderland. For instance, fearfulness is the mood which allows me to discover threatening objects (BT 138). Furthermore, a mood is not from inside or outside the mind, "but arises out of Being-in-the-world" (BT 176). Heidegger again rejects the distinction between subject and object, as it "splits the phenomenon asunder" (BT 132). Moods are neither inner nor outer, within nor without, objective nor subjective. Rather, moods condition the way we encounter things within the unitary phenomenon of Being-in-the-world.

Lee, "Stillwinds #8", Acrylic on Canvas, 30 x 36 in.
Lee, “Stillwinds #8”, acrylic on canvas. For Heidegger, art has a unique ability to communicate a mood.

Heidegger’s reasoning about attunement could fit into the pattern of a transcendental argument: (1) Being-in-the-world is the basic structure of experience as Dasein; (2) in Being-in-the-world, things are disclosed as meaningful and ready-to-hand; (3) there must be some way these things are disclosed and made meaningful; (4) attunement is a name for the way things are disclosed and made meaningful to Dasein.[4] Therefore, attunement is an ontological precondition for our experience of the world. As Heidegger puts it, “only because the ‘there’ has already been disclosed in a state of mind [attunement] can immanent reflection come across ‘experiences’ at all” (BT 136). Moods are not just a kind of experience or a way of being intentionally directed. Instead, moods are a condition that makes experience possible, making it “possible first of all to direct oneself toward something” (BT 137). This is why attunement is necessary for experience in general, and not just affective or emotional experience.

2. Methodological Problems for Heidegger’s Analysis

The first problem for Heidegger’s concept of attunement is a methodological one. If we are always already in a mood, it follows that even Heidegger’s existential analytic must be carried out in some mood. Therefore, we can ask what makes his mood, or any mood, existentially authoritative. Since moods condition experience in different ways, perhaps Dasein will reveal itself differently depending on the mood of the phenomenologist. Is there a ‘right’ mood for uncovering the real ontological structures of Dasein?

Initially, it is clear that Heidegger rejects the idea of a ‘pure’ phenomenology devoid of mood. For example, through the neutrality modification, Husserl aimed to “suspend everything connected to the will” to achieve a purer phenomenological method.[5] Heidegger argues that this is misguided. There is no pure, mood-free experience of objects, as mood is a precondition for being receptive to objects at all. Not “even the purest theory has left all moods behind it” (BT 138). We cannot get outside of moods and observe them from some external vantage point. Every investigation must have some mood that makes the objects of investigation accessible and meaningful.

Heidegger emphasizes that this does not mean we “surrender science ontically to ‘feeling’” (BT 138), but it does seem methodologically problematic for an existential analytic if ‘universal’ ontological structures are only visible in certain moods. One can understand why phenomenologists seek neutrality, to avoid this methodological subjectivity. A defender of Heidegger’s approach can make several responses. First, even if we only “see the ‘world’ unsteadily and fitfully in accordance with our moods” (BT 138), this may be the only way to analyze being as it truly manifests itself. If the investigation of being turns out to be mood-dependent and tumultuous, then so be it. We should not falsify our experience and create artificial uniformity, treating Dasein as always present-at-hand, just because this would make phenomenology seem more objective. Second, the existentiales Heidegger identifies are present regardless of mood: in “every state-of-mind…Being-in-the-world should be fully disclosed” (BT 191). Even if we are not explicitly aware of structures like understanding, Self, or the World, they still condition our experience. Indeed, Being will often be disguised and “covered up” to us (BT 35). Perhaps an in-depth analysis can reveal structures that are not visible in our average everydayness, but that are always present as ontological structures. Presumably, these structures will be recognizable in every mood, although in different ways and to different degrees.

Furthermore, not all moods are equal in their disclosure of Dasein. Information about Dasein is accessible to us through attunements, and more primordial attunements offer a greater possibility of accurately interpreting Dasein's Being (BT 185). Heidegger argues that anxiety (angst) is the most primordial and disclosive attunement. Unlike fear in the face of some extant entity, we have anxiety in the face of Being-in-the-world as such, which is indefinite, unknown, and nowhere. Just as when our tools break, we become aware of them as present-at-hand objects, when our world breaks down, we become aware of it as a world. Through anxiety, we see the networks of meaning we are normally absorbed in, realize our individuality and being-thrown, and recognize our freedom to live inauthentic or authentic possibilities. Anxiety also provokes feelings of uncanniness and homelessness in our once-familiar world. Thus, we usually flee from it, absorbing ourselves in projects and entities to "dim down" or tranquilize the anxiety (BT 189). Our ceaseless avoidance reveals the constant presence and primordiality of anxiety, showing that Dasein is anxious in the "very depths of its Being" (BT 190). Anxiety is therefore a primordial mood that can encourage authenticity and enable the analysis of Dasein.

Digital art by Kyle Kerr. Angst is a mood that can disclose our authentic being and open up our possibilities.

However, Heidegger leaves serious methodological questions unanswered. Despite using the term "primordial" 371 times in BT, he never offers a method for determining whether a phenomenon is more primordial than another. His evidence that anxiety is a primordial attunement rests on the claim that we are always fleeing from it. However, even if this is accepted as a phenomenologically apt description, it is not clear why this implies that anxiety is more primordial. Even more critically, Heidegger suggests that anxiety as a primordial mood is more disclosive – it offers us privileged epistemic access to Dasein and the worldhood of the world. Why does the fact that we flee from an attunement imply that it is primordial, and why does its primordiality imply that the attunement is more disclosive? In claiming that anxiety discloses primordial Being, Heidegger seems to be begging the question: he presupposes some significant knowledge of primordial Being. Without this preexisting knowledge, it is hard to see how Heidegger could claim that anxiety discloses more of the reality or primordiality of Being.[6] While perhaps we have an implicit awareness of Being that enables us to begin an investigation of Dasein (BT 7), Heidegger is assuming a much richer understanding of Being here.

Furthermore, it is not clear why a phenomenon like fallenness is not more primordial than anxiety. After all, it is almost universally present, and being-fallen is the mode of being that we occupy proximally and for the most part. In contrast, "'real' anxiety is rare" (BT 190). We flee toward fallenness, and away from anxiety (BT 189). Why should the phenomena we flee away from be more primordial than the phenomena we flee toward? Often, it seems that Heidegger labels a phenomenon "primordial" to communicate normative preferences rather than descriptive claims about the reality of Being. This leaves serious concerns: how can we resolve epistemic disputes about the primordiality of phenomena? More generally, why should we accept Heidegger's characterizations of Being? The primary method he employs is a description of phenomena in our experience, and logical analysis to draw conclusions about Being from these phenomena. At least to some degree, Heidegger relies on the aptness and explanatory power of his descriptions of our experience. Thus, the validity of his "fundamental ontology" is dependent on the resonance of his words in describing the human condition, and seems to be an aesthetic activity analogous to that of a novelist or fiction writer.

Shoes, Van Gogh (painting). Heidegger describes this painting as disclosing an entire life-world. Perhaps his own theory can be taken as an artistic depiction of the nature of Being, and not a rigorous ontological investigation.

Finally, in Heidegger's time, the "psychology of moods" was a new, undeveloped field which "still lies fallow" (BT 134). Today, it has grown into the far more mature field of affective science. However, Heidegger would likely criticize even a more advanced, scientific, and explanatorily successful psychology as resting on problematic assumptions and a deeply flawed starting point. The sciences treat Dasein as a present-at-hand object which can be understood in a detached theoretical attitude, and this approach inevitably falsifies the phenomena. Empirical science is a restricted mode of disclosing being, and it is not epistemologically prior. Indeed, the existentiales that Heidegger elucidates are "a priori conditions for the objects which biology takes for its theme," and the structures examined by any science can only be understood if they are first seen as structures of Dasein (BT 58). For instance, attunements are the fundamental conditions that render the world intelligible to us, making possible logical or theoretical investigation. Ontological structures like attunement must be presupposed by the sciences and can never be fully explained by present-at-hand analysis.

As it happens, many of Heidegger’s explanations of Being have proved fruitful in the sciences, and his work influences entire research areas like embodied cognition. The existential analytic of Dasein has been ‘naturalized,’ tested, and applied as a model of the extant human brain. For example, Ratcliffe (2002) argues that Heidegger’s account is “actually required as an interpretive backdrop for neuropsychological cases,” and provides a powerful framework for modern affective science.[7] Recent findings show that moods determine how the world is opened up to us, enabling cognitive processing, decision-making, and successful reasoning. These findings show that Heidegger’s analysis has explanatory power in science as well as phenomenology. Additionally, as they reveal the inextricability of emotion from cognitive processes like logic, these findings challenge the ‘purity’ of many theoretical methods and undermine the epistemological assumptions of the sciences.

However, attempting to use science to add credibility to Heidegger’s views implicitly accepts that his claims are legitimately interpretable and even testable in a scientific context. This implies that empirical sciences can offer meaningful knowledge about Dasein, a claim Heidegger would likely reject. If the existential analytic truly has ontological priority, then it does not require empirical validation through the study of present-at-hand beings, and it cannot be treated as a merely ontic science. In the process of applying Heidegger’s ideas, the sciences therefore may violate some of his most essential philosophical principles. However, the problems discussed above raise questions for Heidegger’s own methods. These methods may not be able to fulfill his own desiderata, as they do not reveal the phenomena in a sufficiently originary way and are not clearly epistemologically prior. Instead, Heidegger’s approach, insofar as it aims for explanatory power in its description of consciousness and being, could be interpreted as continuous with the natural sciences. After all, a strict division between the study of Dasein and the present-at-hand would commit a cardinal Heideggerian sin by splitting up unitary phenomena. Just as the sciences are not a privileged conduit to reality, perhaps the existential analytic of Dasein is just one limited but insightful way of disclosing Being.

Bibliography

Elpidorou, Andreas, and Lauren Freeman. “Affectivity in Heidegger I: Moods and emotions in Being and Time.” Philosophy Compass 10, no. 10 (2015): 661-671.

Heidegger, Martin. The fundamental concepts of metaphysics: World, finitude, solitude. Indiana University Press, 1995.

Heidegger, Martin. Basic Problems of Phenomenology. Albert Hofstadter, trans. Indiana University Press, 1988.

Heidegger, Martin. Being and Time. Trans. John Macquarrie & Edward Robinson. Harper Reprint, 2008.

Husserl, Edmund. Ideas for a pure phenomenology and phenomenological philosophy: First book: General introduction to pure phenomenology. Hackett Publishing, 2014.

Polt, Richard. Heidegger: an introduction. Routledge, 2013.

Ratcliffe, Matthew. "Heidegger's attunement and the neuropsychology of emotion." Phenomenology and the Cognitive Sciences 1, no. 3 (2002): 287-312.

Schopenhauer, Arthur. The World as Will and Idea – Vol. 2. Project Gutenberg, 2015.

  1. I will use “attunement” for Heidegger’s term Befindlichkeit, and “mood” for Stimmung. Many translators agree these English terms most accurately communicate Heidegger’s concepts. See Andreas Elpidorou and Lauren Freeman, “Affectivity in Heidegger I: Moods and emotions in Being and Time,” Philosophy Compass 10, no. 10 (2015): 661-671.
  2. Arthur Schopenhauer, The World as Will and Idea-Vol. 2, Project Gutenberg, 2015. Pg. 400.
  3. Heidegger, The fundamental concepts of metaphysics, pg. 45.
  4. Of course, attunement is not the only way things are disclosed – it is part of the whole care-structure.
  5. Husserl, Ideas I, §109, pg. 213.
  6. Ratcliffe, Matthew. “Heidegger’s attunement and the neuropsychology of emotion.” Phenomenology and the Cognitive Sciences 1, no. 3 (2002): 287-312.

The Psychological Representation of Imagination

Imagining plays a key role in thinking about possibilities. Modal terms like “could,” “should,” and “might” prompt us to imagine possible scenarios. I argue that imagination is the first step in modal cognition, as it generates the possibilities for consideration. The possibilities in the consideration set can then be partitioned into a more limited set of relevant possibilities, and ordered on some criteria, like value or probability.[1] Yet even imagination is not free, boundless, and unlimited. There are systematic constraints on imaginings. The three considerations that determine which possibilities are considered — physical possibility, probability or regularity, and morality — also influence which scenarios are imaginable or easier to imagine.

Ultimately, the evidence indicates that imagination uses a representation similar to the psychological representation of modality,[2] and operates under the constraints that apply to modal cognition in general. This paper has two key goals: (1) to strengthen the theory of a common underlying psychological representation of modality by applying it to imagination, and (2) to understand the imagination and its constraints better by illuminating the psychological representation it has in common with modal cognition.

1. Imagination as the Initial Generative Step of Modal Cognition

Modal cognition will be used as an umbrella term for any kind of thinking about possibility, including counterfactual thinking, causal selection, free will judgements, and more. Imagination is a sub-concept under modal cognition, as it is a form of “attention to possibilities.”[3] There are many types of imagination, but we can afford to gloss over most of the distinctions and instead use a broad definition. Imagination is to “represent without aiming at things as they actually, presently, and subjectively are.”[4] In other words, imagination is mental simulation. Since imagination is about non-occurrent possibilities – like fictional scenarios, images of the future, or counterfactuals – it is necessarily modal. But is modal cognition necessarily imaginative? In short: yes.

After all, we cannot represent possibilities based on a single proposition. Merely varying some proposition’s meaning or truth-value is a simple logical process that cannot characterize modal cognition in general, especially the rich kind of modal cognition involved in decision-making, causal judgements, and counterfactual reasoning. In modal cognition, we must conceive of a full scenario and then consider alternatives (possible worlds) for that scenario. This sounds a lot like imagination, which involves representing a situation: “a configuration of objects, properties, and relations.”[5] Considering the ways a captain could have prevented a ship from sinking, for instance, requires mentally simulating this scenario and varying its features to produce alternative possibilities.[6] Modal cognition relies on imagination to represent situations and generate their alternatives.

More precisely, imagination fits into modal cognition as the initial generative step: it produces the possibilities that are later considered and evaluated. This is inspired by the distinction between discriminative models and generative models in machine learning.[7] A discriminative model uses observed variables to identify unobserved target variables – for example, to find the probable causes of sensory inputs. These models often use a hierarchy of mappings between variables to represent an overall input-output mapping. In contrast, a generative model simulates the interactions among unobserved variables that might generate the observed variables. For example, graphics rendering programs can follow a set of processes to simulate a physical environment. Williams (2020) provides detailed evidence showing that both perception and imagination are best described as generative models.
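To make the contrast concrete, here is a minimal illustrative sketch in Python. It is not drawn from Williams (2020) or any cited model; the coin-flipping "world," the candidate biases, and the function names are invented purely for illustration. A discriminative model maps observations straight to a verdict, while a generative model simulates candidate hidden processes and checks which one best reproduces the observations.

```python
import random

# Discriminative model: map observed evidence directly to a hidden label.
# Here, a hard-coded rule guesses whether a coin is biased from its flips.
def discriminate(flips):
    heads_rate = sum(flips) / len(flips)
    return "biased" if heads_rate > 0.7 else "fair"

# Generative model: simulate the hidden processes that could have produced
# the observations, then ask which simulated process fits them best.
def generate_flips(bias, n=20):
    return [1 if random.random() < bias else 0 for _ in range(n)]

def infer_by_generation(flips, candidate_biases=(0.5, 0.9), trials=200):
    observed_rate = sum(flips) / len(flips)

    def simulated_rate(bias):
        total = sum(sum(generate_flips(bias, len(flips))) for _ in range(trials))
        return total / (trials * len(flips))

    best = min(candidate_biases, key=lambda b: abs(simulated_rate(b) - observed_rate))
    return "biased" if best > 0.5 else "fair"

observed = generate_flips(0.9)          # some data produced by the 'world'
print(discriminate(observed))           # direct input-to-output mapping
print(infer_by_generation(observed))    # simulate processes, compare their outputs
```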

While I will not repeat Williams's arguments here, treating imagination as a generative model is valuable for a few additional reasons. First, imagination is governed by principles of generation: a set of (implicit or explicit) rules that guide our imaginings.[8] For example, in Harry Potter, "Latin words and wands create magic" is a principle of generation that readers can consistently use to simulate the imagined world. Unlike a graphics rendering program that deterministically yields a given outcome by following certain processes, the imagination generates a set of possibilities guided by the relevant principles of generation. Still, imagination, like rendering, is a generative model that uses certain processes to produce (and explain) a set of phenomena.

Second, treating imagination as a generative model explains imaginative mirroring: unless prompted otherwise by principles of generation, our imagination defaults to follow the rules of the real world. If a cup ‘spills’ in an imaginary tea party, the participants will treat the spilled cup as empty, following the physics of reality.[9] In perception, we are always running a generative model of reality, using processes we derive from experience to simulate the physical world and predict its behavior.[10] Imagination involves running a generative model on top of this simulation of reality. Some processes are modified in the imagining, but the ones that are not modified are ‘filled in’ by our default generative model of reality. Further, we quarantine imaginative models from perceptual models, so that events in the imagining are not taken to have effects in the real world – imagined spills do not make the real table wet. Treating imagination as a generative model running separately but based upon a reality-based perceptual model is useful in explaining these effects.
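The mirroring and quarantine points can be pictured with a small sketch, assuming a deliberately toy representation of world-rules as a Python dictionary; nothing here is taken from the cited studies.

```python
# Toy model of imaginative mirroring: an imagined world copies the rules of
# the default 'reality' model except where a principle of generation
# explicitly overrides them.
reality_rules = {
    "cups_can_hold_liquid": True,
    "spilled_liquid_wets_surfaces": True,
    "wands_cast_spells": False,
}

def imagine(overrides):
    # Copy rather than mutate: the imagined model is quarantined from reality.
    imagined = dict(reality_rules)
    imagined.update(overrides)
    return imagined

tea_party = imagine({"cups_start_empty": True})
harry_potter = imagine({"wands_cast_spells": True})

# Mirroring: rules the imagining does not modify are inherited from reality.
assert tea_party["spilled_liquid_wets_surfaces"] is True
# Quarantine: changes made in the imagining never propagate back to reality.
assert reality_rules["wands_cast_spells"] is False
```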

Finally, the generative model view explains the systematic constraints on imagination and their function. Imaginings are not utterly free and boundless. Rather, imagination changes some aspects of the world, and then unfolds the impacts of these changes in a constrained way based on specific rules of generation. Later, I will show that imagination by default follows our world’s laws of physics and probability. We also resist imaginings that break normative limitations set by morality. If imagination is a generative model, then the constraints are the rules that determine how the generation process is carried out, analogous to rendering algorithms in animations or games. Imagination’s constraints allow it to serve a valuable and adaptive function in generating possibilities relevant to our real world.

In Kratzer semantics, a modal anchor is the element from which a set of possible worlds is projected.[11] In simpler terms, the anchor is the thing held constant in modal projection. For example, in the statement “people could jump off this roof,” the modal anchor is the situation of the roof. We project a domain of possible worlds that all include this roof and determine if people jump off the roof in at least one possible world. Imagination is the cognitive function that carries out modal projection, as it generates the possibilities prescribed by the modal anchor and its context. The modal anchor defines the processes of the generative model. Alternatively, modal anchors correspond to “props” in the philosophy of imagination, where a prop is the thing that prescribes what is to be imagined and the principles of generation to be used in imagining.[12] The modal anchor functions as a linguistic prop, prescribing an imagining that generates a set of possibilities relevant to the anchor.

This sets the stage for a comprehensive picture of modal cognition. First, some prop or modal anchor elicits thought about possibilities and triggers the start of the process. Second, imagination acts as a generative model, creating a set of possibilities based on the rules of generation prescribed by the modal anchor. This produces the consideration set, the group of possibilities under consideration. Third, the generated possibilities can then be narrowed down further and partitioned into a relevance set.[13] Finally, the possibilities are ordered according to some criteria, so the possibilities most relevant to the task at hand are ranked the most highly.
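Read as a processing pipeline, this picture can be sketched as follows. The sketch is only an illustration of the four steps just described; the generator, relevance filter, and scoring function are placeholder stand-ins, not an implementation of any cited account.

```python
# Illustrative pipeline: anchor -> generate -> partition by relevance -> order.
def modal_cognition(anchor, generate, is_relevant, score):
    consideration_set = generate(anchor)                    # step 2: imaginative generation
    relevance_set = [p for p in consideration_set
                     if is_relevant(p)]                     # step 3: relevance partition
    return sorted(relevance_set, key=score, reverse=True)   # step 4: ordering

# Toy inputs loosely based on the sinking-ship example from the text.
options = ["throw cargo overboard", "throw wife overboard", "teleport ashore"]
ranked = modal_cognition(
    anchor="captain on a sinking ship",
    generate=lambda a: options,                    # stand-in for imagination
    is_relevant=lambda p: p != "teleport ashore",  # e.g. a physical-possibility constraint
    score=lambda p: 1.0 if p == "throw cargo overboard" else 0.1,  # e.g. moral/probabilistic ranking
)
print(ranked)  # ['throw cargo overboard', 'throw wife overboard']
```

Of course, this linear layout is only a convenience of the sketch; as noted below, the steps need not occur in a discrete, sequential order.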

While it is conceptually helpful to separate these steps, I do not claim the steps occur in a sequential, discrete order. These components can happen synchronously and are often blurred together. Steps two and three are especially entangled, as I will show that generation through imagination also involves constraints that winnow down the considered possibilities. The rest of this paper will examine step two in detail. I will focus on how the imagination is constrained, and how its constraints indicate that it involves the psychological representation of modality.

2. The Psychological Representation of Modality and Imagination

2.1 Constraints in Modality

A growing body of research shows that a common psychological representation underlies many kinds of thinking about possibilities. Using certain constraints, this representation supports quick, effortless, computationally cheap, and often unconscious modal cognition. The constraints of physics, morality, and probability influence which possibilities are considered relevant.[14] For instance, in counterfactual reasoning, we mostly consider probable events, evaluatively good events, and physically normal events. Evidence also indicates that a common psychological capacity underlies our judgements of moral permissibility and physical possibility.[15] Evaluative concerns and prescriptive norms play an especially critical role in constraining possibilities.

Phillips, Luguri, and Knobe (2015) show that morality plays a key role in limiting the set of relevant possibilities for many types of judgement. For instance, people are less likely to agree that a captain on a sinking ship was forced to throw his wife overboard than that he was forced to throw cargo overboard. With the support of several other studies, the researchers demonstrated that this effect occurs because immoral possibilities are considered less relevant. Critically for my thesis, the researchers also showed that prompting participants to generate more possibilities had significant effects on their judgements.[16] When participants imagined decisions the captain could have made, they were more likely to judge that he was free and not forced. This demonstrates the importance of the initial generative step.

Further, Phillips and Cushman (2017) found that both children and adults under time constraints tend to judge immoral events as impossible.[17] Non-reflective modal judgements are “ought-like” and exclude immoral possibilities from consideration. Given time to deliberate, adults can differentiate types of modal judgement and make more reasoned judgements about possibility. In this study, participants were presented with events and asked to judge which were possible. For example, given a scenario about a person stuck at an airport, participants were asked whether he could hail a taxi, teleport, sell his car, or sneak onto public transit. Importantly, the generative step is performed by the researchers: participants do not have to imaginatively generate the options. Instead, they are given the options and asked to evaluate their possibility. This skips step two of modal cognition and focuses on step three. However, in most natural situations, we have to generate the available options ourselves.

In general, research on modal cognition overlooks the mechanism that generates possibilities. Existing studies often ask participants to evaluate already-generated possibilities. This experimental design systematically misses the effects of the process that generates possibilities in the first place. One exception is Flanagan and Kushnir (2019), which tested whether a person’s ability to generate possibilities predicted their judgement that they have free will.[18] We tend to judge agents as free when we can represent alternative possibilities for their action. Thus, simply generating more possibilities may lead us to judge that agents are freer. Indeed, this experiment found that children’s fluency in generating ideas predicted their evaluation of their own free will. Performance on a task that involved generating ideas within an imagined world was the best predictor of a child’s judgements: the more fluent the children were in this imagination task, the more likely they were to judge themselves as free.

The researchers speculated that there may be a “direct pathway from idea generation to judgments of choice and possibility.”[19] In my view, the pathway is indirect, as existing research indicates that after possibility-generation we also evaluate the relevance of possibilities and rank them. However, the studies discussed above underscore the importance of the imagination as the initial generative step. The nature and quantity of the generated possibilities have demonstrable impacts on modal judgements. Furthermore, there may be important constraints on this generation process that lead to downstream effects on later processes in modal cognition.

2.2 Constraints in Imagination

The same constraints apply to both modality and imagination. This is surprising, as intuitively imagination seems far freer and less limited than ordinary reasoning. We can easily imagine worlds where magic violates physical laws or where improbable events occur often. However, I argue that the default representation of imagination results in resistance to imagining possibilities that violate physical laws, irregular or unlikely possibilities, and immoral or evaluatively bad possibilities. Experimental results reveal that the imaginations of young children are limited by precisely these constraints. Adults can deliberately generate possibilities that are both more numerous and less constrained. However, just as adults can treat immoral possibilities as irrelevant, imaginative resistance shows that the adult imagination is inhibited against immoral possibilities. In sum, the imagination shows a startling resemblance to the psychological representation of modality.

Investigations of modal cognition often use developmental research to show constraints on children’s reasoning about possibilities, indicating a default representation of modality that is especially visible during early childhood.[20] Similarly, the imaginations of young children (ages 2-8) are surprisingly reality-constrained. Children tend to resist, or fail to generate, impossible and improbable imaginings. When prompted to imagine hypothetical machines, children judge that familiar machines could be real, but are reluctant to imagine possible machines that operate very differently from any object they have regular experience with.[21] Children also protest against pretense that contradicts their knowledge of regularity, expecting imaginary entities to have ordinary properties.[22] Even when pretending, kids expect lions to roar and pigs to oink, and they resist imagining otherwise. 

Furthermore, 82% of the time, children extend fantasy stories with realistic events rather than fantastic events, while adults extend fantasy stories with fantastic events.[23] Young children imagine along ordinary lines even when primed with fantastical contexts, filling in typical and probable causes for fantastical imaginary events.[24] Children show a strong typicality bias in completing fictional stories, favoring additions to the story that match their regular experiences in reality.[25] For example, even if an imagined character can teleport or ride dragons, a young child will say the character gets to the store by walking and arrives at school on a bus. Children’s bias toward adding regular events persisted even after experimental manipulations designed to encourage children to notice a pattern of atypicality in the story.[26] This is surprising: popular wisdom dictates that children are exceptionally and fantastically imaginative. However, this research shows that children have simple, limited, and relatively mundane imaginations that are constrained by regularity, probability, and typical reality. 

The imaginations of young children are not as free & creative as you might expect. (Image source: Kelly Sikkema)

Evaluative concerns are an additional constraint on the imagination. My theory predicts that children’s imaginations will show a bias toward generating evaluatively good possibilities and a resistance to imagining possibilities that they see as evaluatively wrong. Some studies indicate that this is the case. For example, American children are more likely than Nepalese and Singaporean children to judge that they are free to act against cultural and moral norms.[27] This is likely because children in cultures with stronger or more restrictive evaluative norms find it harder to generate evaluatively wrong possibilities or to see these possibilities as relevant. As free will judgements depend on representing alternative possibilities, these children see themselves as less free to pursue possibilities that violate evaluative norms. This suggests that morality constrains the imagination, especially in early childhood, although more research is needed to validate this hypothesis.

As children develop, the constraints on their imagination relax, leading to less restricted generation of possibilities. Older children are more likely to imagine improbable and physically impossible phenomena.[28] Explicitly prompting children to generate more possibilities leads them to imagine more like older children, producing possibilities less constrained by probability and regularity.[29] This suggests that the initial generative step may underlie observed developmental changes in modal cognition. The imaginations of older children generate more total possibilities, including more irregular possibilities, and they are therefore more likely to judge irregular events as possible.

Viewing imagination as a generative model allows productive interpretations of this research. When imagining, young children apply a generative model with the same rules of generation used in perception to produce expectations about reality. This early imagination may use simple constraints and empirical heuristics to allow effortless and rapid generation of possibilities. For instance, if the child regularly encounters an event, they are more likely to imagine this event.[30] In later development and adulthood, the imagination generates possibilities in a more deliberative and analytical way. This suggests a dual process model of imagination.[31] Children may use a more uncontrolled, effortless, and unconscious imagination based on simple heuristics and experience-derived rules of generation. In contrast, adults use a more controlled, effortful and conscious imagination that generates possibilities based on relatively sophisticated and principled rules. 

Although adults can more easily imagine irregular events or events that violate physical laws, the developed imagination is still constrained by moral norms. Imaginative resistance refers to a phenomenon where people find it difficult to engage in prompted imaginative activities. For example, if a fiction prompts us to imagine that axe-murdering is morally good, we resist this imagining. Unfortunately, there are few empirical tests of imaginative resistance. In one study conducted by Liao, Strohminger, and Sripada (2014), participants exhibited resistance to imagining morally deviant scenarios.[32] For example, participants reported difficulty in imagining that it was morally right for Hippolytos to trick Larisa in the Greek myth “The Rape of Persephone,” even though Zeus declared the trickery was morally right.[33] Their imaginative difficulty was significantly correlated with their evaluation that this trickery was morally wrong. This effect was replicated in a second experiment with a different story. The experiments also showed that imaginative resistance was modulated by context and genre. Participants more familiar with Greek myth were less likely to resist imagining that Hippolytos’ trickery was right, and participants were more willing to imagine that child sacrifice is permissible in an Aztec myth than in a police procedural. Context-specific variation in imaginative resistance may explain some of the variation in modal judgements.

Further research has demonstrated the empirical reality of imaginative resistance. In one study, adults were asked to imagine morally deviant worlds, where immoral actions are morally right within the imagined world.[34] Most participants found morally deviant worlds more difficult to imagine than worlds where unlikely events occurred often, but easier to imagine than worlds with conceptual contradictions. Participants classified these morally deviant worlds as improbable, not impossible, although a subset reported an absolute inability to imagine a morally deviant world. Another study employed a unique design to avoid the effects of authorial authority and variation in prompts, asking participants to create morally deviant worlds themselves and describe these imagined worlds in their own words.[35] Participants still exhibited resistance to imagining morally deviant worlds, even when they were the authors of those worlds. Disgust sensitivity was correlated with imaginative resistance, while need for cognition and creativity were correlated with ease of imagining. Finally, Black and Barnes (2017) constructed an imaginative resistance scale to support future research on this phenomenon and its correlations with individual differences.

Taken as a whole, the research discussed above provides strong support for the view that imagination and thinking about possibilities involve the same psychological representation. This default representation is most visible in early childhood, but it still operates in adulthood, especially under time constraints or in scenarios involving immoral possibilities. Imaginative resistance shows that morality’s primacy in limiting the imagination mirrors its primacy in limiting which possibilities are considered relevant. Overall, this indicates that the generation of possibilities through imagination and the evaluation of their relevance involve a common psychological representation present at all stages of modal cognition.

2.3 Neuroscience of Imagination & Modal Cognition

This paper primarily aims to describe imagination and modal cognition at Marr’s computational and algorithmic levels of analysis, without delving into the neural implementation. However, any complete model of modal cognition will describe the implementational details. Furthermore, an implication of my view is that interactions between imagination and modal cognition will be visible at the neural level. The view could be falsified by showing that these two processes do not interact or that they involve very distinct neural pathways. As such, the limited review of the neuroscientific evidence below is meant only to establish the plausibility of two key claims: (1) modal cognition involves imagination, and (2) imagination and modal cognition use similar neural mechanisms.

Neuroscientific evidence shows that modal cognition and imagination involve the same neural correlates. There is a growing consensus that remembering the past, imagining the future, and counterfactual thinking all involve similar neural mechanisms in the default mode network (DMN).[36] Several studies show that the DMN is involved in simulating possible experiences, imagining, and counterfactual thinking.[37] At the outset, this indicates that modal cognition and imagination use the same parts of the brain. More specifically, future-oriented and counterfactual thinking engage the posterior DMN (pDMN), centered around the posterior cingulate cortex.[38] Researchers showed this by asking participants in an fMRI scanner to make choices about their present situation and then prospective choices about their future. Their findings demonstrated that people often recruit vivid mental imagery in future-oriented thinking, and that this process activates the pDMN while reducing its connectivity with the anterior DMN. This provides a candidate neural process underlying the imaginative generation of possibilities.

One prominent neuroscientific theory of the imagination is presented in “The Neurobiology of Imagination: Possible Role of Interaction-Dominant Dynamics and Default Mode Network.”

Furthermore, a key cognitive ability that underlies imagination is prefrontal synthesis (PFS), the ability to create novel mental images. This process is performed in the lateral prefrontal cortex (LPFC), which likely acts as an executive controller that synchronizes a network of neuronal ensembles that represent familiar objects, synthesizing these objects into a new imaginary experience.[39] Children acquire PFS around 3 to 4 years of age, along with other imaginative abilities like mental rotation, storytelling, and advanced pretend play.[40] Similarly, young children tend to lack a distinction between immoral, impossible, and irregular counterfactuals – they often conflate “could” and “should.”[41] While further study is needed, it is plausible that development of PFS is associated with mature modal cognition, making modal distinctions, and generating more sophisticated imaginings. 

3. Conclusion

This essay constructs a broad theory of modal cognition in which imagination plays a critical role. Namely, imagination serves as an initial step that generates the possibilities considered in later steps. Imagination is best described algorithmically as a generative model that operates according to rules of generation prescribed by a modal anchor. Furthermore, the evidence discussed in section 2 indicates that imagination and thinking about possibilities both use a default psychological representation with the same fundamental constraints. While this psychological representation is not always visible in adulthood, it is clear in early childhood, and it still has observable effects in adult cognition. The psychological representation of modality and imagination enables us to think about possibilities in rapid, effortless, and useful ways.

This theory also yields testable predictions that could be explored by future empirical research. For example, it predicts that young children will exhibit more imaginative resistance to violations of morality than adults. They will be more likely to classify morally deviant worlds as impossible or show a total inability to imagine these worlds.[42] Under time pressure, adults will exhibit more imaginative resistance, and they will be more likely to imagine valuable scenarios than dis-valuable scenarios – just as people are more likely to generate valuable possibilities under time pressure.[43] Correspondingly, people given more time and opportunity to engage the imagination might exhibit more willingness to imagine morally deviant worlds. With very limited time or significant cognitive pressure, adult imaginations may resemble the imaginations of young children. Finally, individual differences in openness to experience, creativity, and imaginative ability may predict some of the variation in possibility judgements, through differences in the generation of possibilities. For instance, people who naturally generate more possibilities will be more likely to judge agents as free rather than forced.

Existing research has not explicitly drawn this connection between the imagination and the psychological representation of modality. Even if this proposed model is not correct as a whole, I hope this paper can help integrate disconnected research projects on modal cognition and imagination in cognitive science, neuroscience, and philosophy.

Bibliography

Addis, Donna Rose, Alana T. Wong, and Daniel L. Schacter. “Remembering the past and imagining the future: common and distinct neural substrates during event construction and elaboration.” Neuropsychologia 45, no. 7 (2007): 1363-1377.

Barnes, Jennifer, and Jessica Black. “Impossible or improbable: The difficulty of imagining morally deviant worlds.” Imagination, Cognition and Personality 36, no. 1 (2016): 27-40.

Black, Jessica E., and Jennifer L. Barnes. “Measuring the unimaginable: Imaginative resistance to fiction and related constructs.” Personality and Individual Differences 111 (2017): 71-79.

Black, Jessica E., and Jennifer L. Barnes. “Morality and the imagination: Real-world moral beliefs interfere with imagining fictional content.” Philosophical Psychology 33, no. 7 (2020): 1018-1044.

Berto, Francesco. “Taming the runabout imagination ticket.” Synthese (2018): 1-15.

Bowman-Smith, Celina K., Andrew Shtulman, and Ori Friedman. “Distant lands make for distant possibilities: Children view improbable events as more possible in far-away locations.” Developmental psychology 55, no. 4 (2019): 722.

Cook, Claire, and David M. Sobel. “Children’s beliefs about the fantasy/reality status of hypothesized machines.” Developmental Science 14, no. 1 (2011): 1-8.

Cushman, Fiery. “Action, outcome, and value: A dual-system framework for morality.” Personality and social psychology review 17, no. 3 (2013): 273-292.

Gaesser, Brendan. “Constructing memory, imagination, and empathy: a cognitive neuroscience perspective.” Frontiers in psychology 3 (2013): 576.

Goulding, Brandon W., and Ori Friedman. “Children’s beliefs about possibility differ across dreams, stories, and reality.” Child development (2020).

Kind, Amy. “Imagining under constraints.” Knowledge through imagination (2016): 145-59.

Kratzer, Angelika. “Modality for the 21st century.” In 19th International Congress of Linguists, 181-201. 2013.

Lane, Jonathan D., Samuel Ronfard, Stéphane P. Francioli, and Paul L. Harris. “Children’s imagination and belief: Prone to flights of fancy or grounded in reality?” Cognition 152 (2016): 127-140.

Leslie, Alan M. “Pretending and believing: Issues in the theory of ToMM.” Cognition 50, no. 1-3 (1994): 211-238.

Liao, Shen-yi and Tamar Gendler. “Imagination.” The Stanford Encyclopedia of Philosophy (Summer 2020 Edition). Edward N. Zalta (ed.). <https://plato.stanford.edu/archives/sum2020/entries/imagination/>.

Liao, Shen-yi, Nina Strohminger, and Chandra Sekhar Sripada. “Empirically investigating imaginative resistance.” British Journal of Aesthetics 54, no. 3 (2014): 339-355.

Moulton, Samuel T., and Stephen M. Kosslyn. “Imagining predictions: mental imagery as mental emulation.” Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1521 (2009): 1273-1280.

Parikh, Natasha, Luka Ruzic, Gregory W. Stewart, R. Nathan Spreng, and Felipe De Brigard. “What if? Neural activity underlying semantic and episodic counterfactual thinking.” NeuroImage 178 (2018): 332-345.

Pearson, Joel. “The human imagination: the cognitive neuroscience of visual mental imagery.” Nature Reviews Neuroscience 20, no. 10 (2019): 624-634.

Phillips, Jonathan, Adam Morris, and Fiery Cushman. “How we know what not to think.” Trends in cognitive sciences 23, no. 12 (2019): 1026-1040.

Phillips, Jonathan, and Fiery Cushman. “Morality constrains the default representation of what is possible.” Proceedings of the National Academy of Sciences 114, no. 18 (2017): 4649-4654.

Phillips, Jonathan, and Joshua Knobe. “The psychological representation of modality.” Mind & Language 33, no. 1 (2018): 65-94.

Phillips, Jonathan, Jamie B. Luguri, and Joshua Knobe. “Unifying morality’s influence on non-moral judgments: The relevance of alternative possibilities.” Cognition 145 (2015): 30-42.

Schubert, Torben, Renée Eloo, Jana Scharfen, and Nexhmedin Morina. “How imagining personal future scenarios influences affect: Systematic review and meta-analysis.” Clinical Psychology Review 75 (2020): 101811.

Shtulman, Andrew, and Jonathan Phillips. “Differentiating “could” from “should”: Developmental changes in modal cognition.” Journal of Experimental Child Psychology 165 (2018): 161-182.

Shtulman, Andrew, and Lester Tong. “Cognitive parallels between moral judgment and modal judgment.” Psychonomic bulletin & review 20, no. 6 (2013): 1327-1335.

Spreng, R. Nathan, Raymond A. Mar, and Alice SN Kim. “The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: a quantitative meta-analysis.” Journal of cognitive neuroscience 21, no. 3 (2009): 489-510.

Stuart, Michael T. “Towards a dual process epistemology of imagination.” Synthese (2019): 1-22.

Thorburn, Rachel, Celina K. Bowman-Smith, and Ori Friedman. “Likely stories: Young children favor typical over atypical story events.” Cognitive Development 56 (2020): 100950.

Van de Vondervoort, Julia W., and Ori Friedman. “Preschoolers can infer general rules governing fantastical events in fiction.” Developmental psychology 50, no. 5 (2014): 1594.

Van de Vondervoort, Julia W., and Ori Friedman. “Young children protest and correct pretense that contradicts their general knowledge.” Cognitive Development 43 (2017): 182-189.

Vyshedskiy, Andrey. “Neuroscience of imagination and implications for human evolution.” (2019). Preprint DOI: 10.31234/osf.io/skxwc.

Weisberg, Deena Skolnick, and David M. Sobel. “Young children discriminate improbable from impossible events in fiction.” Cognitive Development 27, no. 1 (2012): 90-98.

Weisberg, Deena Skolnick, David M. Sobel, Joshua Goodstein, and Paul Bloom. “Young children are reality-prone when thinking about stories.” Journal of Cognition and Culture 13, no. 3-4 (2013): 383-407.

Williams, Daniel. “Imaginative Constraints and Generative Models.” Australasian Journal of Philosophy (2020): 1-15.

Williamson, Timothy. “Knowing by imagining.” Knowledge through imagination (2016): 113-23.

Winlove, Crawford IP, Fraser Milton, Jake Ranson, Jon Fulford, Matthew MacKisack, Fiona Macpherson, and Adam Zeman. “The neural correlates of visual imagery: A co-ordinate-based meta-analysis.” Cortex 105 (2018): 4-25.

Xu, Xiaoxiao, Hong Yuan, and Xu Lei. “Activation and connectivity within the default mode network contribute independently to future-oriented thought.” Scientific reports 6 (2016): 21001.

  1. Phillips, Jonathan, Adam Morris, and Fiery Cushman, “How we know what not to think,” Trends in cognitive sciences 23, no. 12 (2019): 1026-1040.

  2. Phillips, Jonathan, and Joshua Knobe, “The psychological representation of modality,” Mind & Language 33, no. 1 (2018): 65-94.

  3. Williamson, Timothy, “Knowing by imagining,” Knowledge through imagination (2016): 113-23. Pg. 4.

  4. Liao, Shen-yi and Tamar Gendler, “Imagination,” The Stanford Encyclopedia of Philosophy.

  5. Berto, Francesco. “Taming the runabout imagination ticket.” Synthese (2018): 1-15.

  6. Phillips, Luguri, and Knobe. “Unifying morality’s influence on non-moral judgments: The relevance of alternative possibilities,” Cognition 145 (2015): 30-42.

  7. The difference between discriminative and generative models is (roughly) similar to the distinction between model-free and model-based reinforcement learning – see Cushman (2017).

  8. Walton, Kendall L, Mimesis as make-believe: On the foundations of the representational arts, Harvard University Press, 1990. Pg. 53.

  9. Leslie, Alan M, “Pretending and believing: Issues in the theory of ToMM,” Cognition 50, no. 1-3 (1994): 211-238.

  10. Williams, “Imaginative Constraints and Generative Models,” 2020.

  11. Kratzer, Angelika, “Modality for the 21st century,” In 19th International Congress of Linguists, pp. 181-201. 2013.

  12. Walton, Mimesis as Make-believe, pg. 47.

  13. Phillips, Morris, and Cushman, “How we know what not to think,” (2019).

  14. Phillips and Knobe (2018).

  15. Shtulman, Andrew, and Lester Tong, “Cognitive parallels between moral judgment and modal judgment,” Psychonomic bulletin & review 20, no. 6 (2013): 1327-1335.

  16. This was shown in the second “manipulation” studies for each type of judgement (1b, 2b, 3b, and 4b).

  17. Phillips and Cushman (2017).

  18. Flanagan, Teresa, and Tamar Kushnir, “Individual differences in fluency with idea generation predict children’s beliefs in their own free will,” Cognitive Science, pp. 1738-1744. 2019.

  19. Flanagan and Kushnir, pg. 5.

  20. For instance, see Shtulman, Andrew, and Jonathan Phillips, “Differentiating “could” from “should”: Developmental changes in modal cognition,” Journal of Experimental Child Psychology 165 (2018): 161-182.

  21. Cook and Sobel, “Children’s beliefs about the fantasy/reality status of hypothesized machines,” Developmental Science 14, no. 1 (2011): 1-8.

  22. Van de Vondervoort, Julia W., and Ori Friedman, “Young children protest and correct pretense that contradicts their general knowledge,” Cognitive Development 43 (2017): 182-189.

  23. Weisberg et al, “Young children are reality-prone when thinking about stories,” Journal of Cognition and Culture 13, no. 3-4 (2013): 383-407. Pg. 386.

  24. Lane et al, “Children’s imagination and belief: Prone to flights of fancy or grounded in reality?,” Cognition 152 (2016): 127-140. Pg. 131.

  25. Thorburn, Bowman-Smith, and Friedman, “Likely stories: Young children favor typical over atypical story events,” Cognitive Development 56 (2020): 100950.

  26. Thorburn, Bowman-Smith, and Friedman (2020).

  27. See Chernyak, Kang, and Kushnir (2019) and Chernyak et al (2013).

  28. Lane et al, pg. 6.

  29. See Lane et al, pg. 8; Goulding and Friedman, “Children’s beliefs about possibility differ across dreams, stories, and reality,” Child development (2020); and Bowman-Smith et al, “Distant lands make for distant possibilities: Children view improbable events as more possible in far-away locations,” Developmental psychology 55, no. 4 (2019): 722.

  30. Goulding and Friedman (2020).

  31. Stuart, Michael T, “Towards a dual process epistemology of imagination,” Synthese (2019): 1-22.

  32. Liao, Shen-yi, Nina Strohminger, and Chandra Sekhar Sripada, “Empirically investigating imaginative resistance,” British Journal of Aesthetics 54, no. 3 (2014): 339-355.

  33. Liao, Strohminger, and Sripada (2014), pg. 10.

  34. Barnes and Black (2016), “Impossible or improbable: The difficulty of imagining morally deviant worlds,” pg. 8.

  35. Black, Jessica E., and Jennifer L. Barnes, “Morality and the imagination: Real-world moral beliefs interfere with imagining fictional content,” Philosophical Psychology 33, no. 7 (2020): 1018-1044.

  36. Mullally, Sinéad L., and Eleanor A. Maguire, “Memory, imagination, and predicting the future: a common brain mechanism?” The Neuroscientist 20, no. 3 (2014): 220-234.

  37. Pearson (2019); Gaesser (2013); Addis et al (2007); Spreng et al (2009); and Winlove et al (2018).

  38. Xu, Xiaoxiao, Hong Yuan, and Xu Lei, “Activation and connectivity within the default mode network contribute independently to future-oriented thought,” Scientific reports 6 (2016): 21001.

  39. Vyshedskiy, Andrey. “Neuroscience of imagination and implications for human evolution.” (2019). Preprint DOI: 10.31234/osf.io/skxwc.

  40. Vyshedskiy, “Neuroscience of Imagination.”

  41. Shtulman, Andrew, and Jonathan Phillips. “Differentiating “could” from “should”: Developmental changes in modal cognition.” Journal of Experimental Child Psychology 165 (2018): 161-182.

  42. See Barnes and Black (2016).

  43. Phillips, Jonathan, and Fiery Cushman, “Morality constrains the default representation of what is possible,” Proceedings of the National Academy of Sciences 114, no. 18 (2017): 4649-4654.

Categories
Cognitive Science Essays Philosophy

The Conceptual Engineering of Mental Illness

How could the concept of mental illness be engineered? Should it be abolished, ameliorated, or reformed in some way? Can the existing concept be vindicated? This is a preliminary exploration to scout the territory and identify questions for further research in the conceptual engineering of mental illness. This project is not simply an attempt to characterize the current semantic content of mental illness. The issue is not what we happen to mean, but rather what we should mean, given the concept’s immense roles in our social, political, and scientific practices. Inquiry into mental illness must involve conceptual ethics, not just conceptual analysis.

Therefore, this essay proceeds in three steps: conceptual analysis, conceptual ethics, and conceptual engineering. These steps roughly map onto Thomasson’s pragmatic method for normative conceptual work: (1) reverse engineering the concept to identify its current content and function, (2) identifying the function the concept should fulfill, and (3) actually engineering the concept to better serve this function.[2] Part 1 contains conceptual analysis of mental illness, addressing descriptive issues about the concept’s definition, content, current function, and conceptual history. Part 2 handles normative questions in conceptual ethics, assessing what function mental illness should have and critiquing the existing concept from both epistemic and practical perspectives. Finally, part 3 engages in conceptual engineering, constructing and evaluating a series of ameliorative options.

Mental illness will be underlined when specifically referring to the concept, will be in scare quotes when referring to the lexical item "mental illness," and will be left alone when referring to the colloquial meaning or phenomena of mental illness. 

1. Conceptual Analysis

1.1 What is mental illness?

For the purposes of this essay, “mental illness,” “psychological/psychiatric disorder,” and “mental disorder,” will all be considered labels for the same concept mental illness. These terms vary in connotation but have similar intensions and extensions. Additionally, mental illness is a type concept: it specifies a category that includes many other token concepts, like bipolar disorder and autism. I will abstract away from the token concepts here and concentrate on the broader type concept.[3]

This paper focuses on mental illness as defined by the 5th Edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5): a behavioral or psychological pattern in an individual that results in clinically significant distress or disability and reflects an underlying dysfunction.[4] This is a theoretical concept rather than a folk concept,[5] although the theoretical mental illness concept defined in the DSM-5 heavily influences the commonly used folk concept. The DSM-5 also lists several caveats, including that the behavior must not be simply social deviance or an expectable response to events. Mental illness should also have clinical utility, helping clinicians diagnose and treat patients. Essential to this definition is that mental illness is a medical concept, one intended to facilitate treatment in a clinical setting.

This definition prompts several questions. What is dysfunction? Is it deviation from norms, a divergence from evolutionary role, or maybe a harmful difference in neurobiology? The DSM-5 specifies that dysfunction can be psychological, biological, or developmental. But each of these options taken individually suggests different contents. If the dysfunction must be demonstrably biological or developmental, then most existing disorders would be excluded because researchers have not identified their neurobiological basis.[6] The definition states that mental illnesses must reflect an underlying dysfunction, but the DSM-5 does not state the etiology of any listed disorder. How can it then establish that underlying dysfunctions cause the harmful psychological or behavioral patterns? This reflects an inconsistency between the approach of the DSM-5, which does not identify underlying dysfunctions, and the definition of mental illness, which requires an underlying dysfunction.

Further, how much distress or dysfunction is enough to qualify a pattern as a mental illness? It is plausible that a personality trait like openness to experience could lead to significant distress and impairment, as it is strongly associated with harmful risk-taking behaviors.[7] The DSM-5 does not clarify these issues. Perhaps a mental illness must be harmful on balance, or must cause ‘net distress,’ without significant benefits that offset the harms. The effects of personality traits depend on the context, and arguably no trait is on-balance harmful. For example, openness has substantial benefits including higher creativity.[8] Personality traits may also lack clinical utility because they cannot be treated effectively and are difficult to diagnose precisely. Clearly, normative and practical concerns are at play here, not just descriptive and theoretical concerns.

The DSM-5 may be intentionally broad to include many types of dysfunction and distress. Vagueness is not necessarily a problem. After all, the DSM-5 clarifies that mental illness is more of a dimensional concept than a categorical concept: there is a continuous spectrum between pathologies and non-pathologies rather than a rigid distinction.[9]

Further, mental illness is a thick concept: it has both descriptive and normative features. It describes a set of behaviors, psychological conditions, and neurobiological states. But it also contains a normative judgement: these conditions are harmful, non-valuable, or negative, causing distress and dysfunction. A value-neutral account of dysfunction is unachievable, as it requires some normative reasoning to explain why a certain kind of function is more positive or better than others. Some token mental illness concepts may be thicker than others, but all involve evaluative components.[10] Mental illness combines both fact and value, although it may be difficult or impossible to disentangle fact from value.[11]

In sum, this conceptual analysis has shown that mental illness is a type concept, a thick concept, and a dimensional concept. The next section will address the function of the concept in our existing conceptual scheme.

1.2 System function

Thomasson defines system function as the capacity a concept serves in the system it is embedded within.[12] What role does mental illness play in our current system? The DSM-5 specifies that its definition was developed for clinical, public health, and research purposes.[13] Thus, one aspiration of the concept is to improve health and scientific understanding. The concept may serve this role to some extent. However, it also has other current functions.

For instance, mental illness has substantial economic, political, legal, and scientific functions. Over forty thousand psychiatrists in the US rely on the concept to some extent.[14] The global psychiatric market is valued at over $197 billion,[15] while the global market for psychiatric drugs is worth over $88 billion.[16] The DSM itself originated to provide a way for insurance and law to evaluate psychological damages.[17] Furthermore, mental illness is essential to legal concepts like the insanity and diminished capacity defenses, disability evaluations under the ADA,[18] civil competencies, and personal injury lawsuits.[19] The mental illness concept is also indispensable to certain structures of power. Psychiatric power is remarkable in that it seems to even transcend political sovereignty – “madness is, in essence, the ultimate exclusion.”[20] For example, when King George III was diagnosed as insane, he was removed from his authority and placed in isolation.[21] The 25th Amendment also enshrines a provision that could theoretically remove a president diagnosed with a mental illness.[22] Finally, mental illness guides research efforts in psychiatry, sociology, and other scientific fields.

1.3 Does conceptual history matter?

Some of the most compelling critiques of psychiatric concepts have been historical.[23] These genealogies often trace mental illness to defective, objectionable, or harmful origins. However, does the history of a concept matter in evaluating its present form? Some might worry that conceptual history is misguided, commits the genetic fallacy, or merely addresses descriptive issues in history without a normative critique of the concept. After all, concepts in chemistry can be traced to alchemy, but this conceptual history alone is not a meaningful critique of these concepts.

Plunkett (2016) argues that conceptual history can provide descriptive information to evaluate which concepts improve the success of inquiry.[24] If a concept emerged due to irrational, problematic, or contingent historical processes that are not responsive to our aims, this gives us a prima facie reason to worry about the concept—especially if our justification for using the concept relies on its history. Additionally, the past performance of a concept can indicate its value as a representational tool. If concepts have been unjust or unsuccessful in the past, this informs their likelihood to succeed in the present.

Conceptual history is especially important for thick concepts like mental illness, because it allows us to gain distance from the concept and see how ideology or normativity have merged into descriptive concepts. History can also reveal hidden features of a concept, show that the ostensible role of a concept isn’t in line with its actual function, and identify alternate concepts which can serve similar functions. Therefore, conceptual history does matter, especially for thick concepts with complex social histories like mental illness. Delving into this conceptual history can be essential to conceptual analysis, providing key descriptive information.

2. Conceptual Ethics

What ought to be the function of mental illness? It is difficult to give a complete definition of its ideal function. However, the concept should do at least two things. First, it should fulfill an epistemic role: describing and providing knowledge about the phenomena that correspond to mental illness. The concept should be coherent, fruitful, accurate, and predictive, and it should be essential to good explanations of phenomena that need explaining.[25] Second, it should fulfill a normative role: upholding our practical and ethical aims, including promoting well-being, improving public health, and fostering a just society. These epistemic and normative conceptual goals are analogous to the DSM-5’s aims of increasing scientific understanding and improving public health. At minimum, mental illness should live up to these aims. Furthermore, the concept should be modified or replaced if alternative concepts can better fulfill these functions. This section addresses epistemic and normative criticisms of mental illness.

Simion (2010) argues that the epistemic role of concepts should be prioritized, and concept amelioration should be limited to “revisions that do not result in epistemic loss.”[101] Engineering projects should not leave us with concepts that fail us epistemically. Otherwise we might be left with concepts that are essentially “noble lies,” optimized for positive normative effects but failing to represent reality. Of course, often epistemic deficiencies will lead to a concept’s negative effects. But ultimately some conceptual changes will involve tradeoffs between epistemic and normative benefits, and in these cases there is a strong argument for avoiding epistemic losses.

2.1 Epistemic Critiques

2.1.1 Natural Kind?

Many epistemic critiques revolve around the question of whether mental illness is a natural kind. Broadly put, a natural kind is a grouping that reflects the structure of the natural world rather than just human interests or actions, like the chemical elements.[26] There are several competing notions of what constitutes a natural kind. For simplicity, I will use Dupré’s account, in which a natural kind is not a set that shares a specific essential property, but a dense cluster of properties in the natural world.[27] Whether mental illness is a natural kind is a critical issue in determining the epistemic validity of the concept.

Cooper argues that at least some mental illnesses are natural kinds in the same sense as weeds.[28] Classifying a plant as a weed depends on judging the weed as normatively dis-valuable for one’s purposes (e.g. gardening). But the plants themselves are natural kinds, as they are empirically classified into species based on objective natural properties. Like the weed concept, mental illness depends on a normative judgement of a natural kind. However, the behavioral and neurobiological conditions that correspond to a mental illness can be grouped based on natural properties.

In contrast to Cooper, Hacking argues that mental illness is an interactive human kind.[112] Describing a mental illness sets off social processes that alter the very properties under study. The classified kind changes in response to being described, and thus it is interactive rather than indifferent to its description. Cooper responds that mental illnesses can still be natural kinds even if they are affected by social processes. After all, classifying a species of bacteria often leads to treatment efforts that change the bacteria, but this does not imply the species is not a natural kind. Social processes like changing diagnoses of autism may lead to changes in the symptoms of autism, but this does not change the fact that autism reflects an underlying natural condition with biological causes.

While Cooper’s arguments are valid, she only shows that mental illness as a descriptive phenomenon may be a natural kind. But mental illness is a thick concept: a normative judgement on descriptive phenomena. Perhaps the properties associated with mental illness are grouped closely enough to call this collection of phenomena a natural kind; this is an empirical question that has not yet been settled. But the key point is that these collections of phenomena alone do not constitute a mental illness concept. The normative aspects of mental illness are inevitably social creations, not features of the natural world. Even if certain neurobiological, behavioral, or psychological differences are natural kinds, the mental illness concept remains a social kind.

However, even if mental illness is not a natural kind, it may be a practical kind – a grouping that is useful enough to support effective induction and ground explanations and predictions.[29] For instance, results from taxometric studies, neurobiology, and experimental psychology seem to show that individuals with major depression form a distinct group.[30] If this holds for mental illness in general, it may vindicate the concept. However, critics of this approach argue that statistical clusters of symptoms may simply reflect folk descriptions of distress or common responses and should not be called “illnesses.”[31] Clearly, any resolution to this debate must involve deep empirical and philosophical work.

2.1.2 Scientific Problems

Psychiatrists routinely argue that there are neurobiological dysfunctions like ‘chemical imbalances’ underlying mental illnesses. However, psychiatry has failed to demonstrate that these biological differences exist and are tied to behavioral differences. The largest and most recent umbrella review[32] of biomarkers for mental disorders found that “no convincing evidence supported the existence of a trans-diagnostic biomarker.”[33] Although the DSM-5’s biomedical mental illness concept implies an underlying neurobiological dysfunction, 175 years of research has failed to show a neurobiological basis for any mental illness.[34] Neurobiology is not used in psychiatric diagnosis, and there are no validated clinical tests for mental disorders.[35] Davidson notes that psychiatric research is characterized by an “obsession with brain anatomy coupled with the constant admission of its theoretical and clinical uselessness.”[36] Despite ongoing promises, psychiatry has not identified clear etiologies, biological aberrations, or clinical tests for mental illnesses. Thus, the biomedical concept fails to satisfy its own desiderata.

Adding to these deficiencies, mental illness diagnoses are notoriously unreliable. Most DSM categories lack construct validity and have little predictive power.[37] A cascade of studies in the 1970s demonstrated that psychiatrists only agreed upon diagnoses about 50% of the time.[38] A more recent quantitative review of 311 taxometric findings concluded that there was almost no replicated evidence for discrete psychiatric categories.[39] This unreliability can be traced to serious conceptual problems. Mental illnesses are often defined tautologically or incoherently. For instance, psychiatrists may claim that the cause of a person’s mood swings is bipolar disorder, and that the evidence she has bipolar disorder is her mood swings. This response can only escape tautology if some clear external cause can be identified, like a specific neural aberration—but no mental illness has a firmly identified etiology. Many diagnoses are also extremely vague or ambiguous. For instance, what constitutes “excessive anxiety”? The general concept of mental illness is also vague, as addressed in section 1.1. While some categories may be useful, the current approach to mental illness has not resulted in accurate categorization.

Psychiatry also has a bad track record of modifying mental illness when it fails to describe the world accurately. Despite explanatory failures, a lack of pathological neuroanatomy, and ethical harms, psychiatry retained since-debunked mental illnesses like ‘sexual perversions,’ homosexuality, and female hysteria.[40] These constructs were only scrapped after persistent social pressures from outside psychiatry.[41] This conceptual history, riddled with epistemic failures, casts some doubt on the validity of the existing understanding of mental illness. While the problematic disorders have been removed, the overarching mental illness concept that produced these failures has hardly changed.

Finally, mental illness as defined in the DSM-5 assumes that mental illnesses are relatively universal if not culturally invariant. This results in prioritizing Western understandings of mental health and illness and “homogenizing the way the world goes mad.”[42] However, mental illnesses vary dramatically across cultures, and some clusters of symptoms exist only in specific times or places.[43] In Hong Kong, symptoms of anorexia did not appear until Western psychiatry exported the concept; in Zanzibar, schizophrenia in the American form replaced existing symptoms; in Japan, the Western concept of depression was marketed by multinational pharmaceutical corporations and quickly replaced the indigenous disorder called yuutsu.[44] Ethan Watters’ detailed studies of these phenomena show that “culturally designated pathological states are often the flipside of states a culture values.”[45] Treating certain behavioral patterns as diseases inevitably reflects the norms of specific societies, and mental illness primarily reflects Anglo-American values. Exporting this culturally specific concept may be a form of psychiatric colonialism that results in both epistemic inaccuracies and negative impacts.

2.2 Normative Critiques

2.2.1 Treatment Failures

The epistemic defects of mental illness may impair the success of treatments based on this concept. In line with this prediction, psychiatry has serious practical failures in helping those it intends to treat. The life expectancy of patients with mental illnesses has declined since the 1950s.[46] Suicide rates for patients with schizophrenia have increased more than tenfold.[47] Psychiatric treatment failed to improve outcomes for schizophrenic patients in 37 countries, and 66% of subjects found that antipsychotic medications completely lacked effectiveness.[48] Analysis of data from 1990 to 2015 in high-income countries found that “despite substantial increases in the provision of treatment” the prevalence of mood and anxiety disorders and their symptoms has not decreased.[49] Another large cross-national study found that, on five out of six dimensions, mentally ill patients in developed countries had significantly worse outcomes than those in developing countries.[50] Developing countries have less adoption of the mental illness concept addressed here, fewer psychiatrists, and less access to pharmaceutical treatments. The fact that patients have better outcomes in these countries is not a good sign for the mental illness concept, and it casts doubt on psychiatry in general.

The development of psychiatric categories also faces serious methodological problems. Almost 50% of research on drugs is ghostwritten by non-experts or otherwise abnormally written.[104] In many psychiatry journals, more than 90% of authors receive research funding from drug companies.[105] Furthermore, 70% of the DSM-5 task force members had direct ties to the pharmaceutical industry.[106] It is also hard to argue that a 480% increase in the number of mental disorders over fifty years is merely the result of rigorous and unbiased scientific discovery.[107] Given the rapid growth of mental illness diagnosis and treatment, “we may soon reach a point when it is statistically deviant not to be taking one of these medications,” and strange not to be diagnosed with a mental illness.[108] Can we trust mental illness categories to adequately describe the world when their development is so influenced by these factors?

However, some treatments for mental illnesses may be effective. For instance, one review of 94 meta-analyses compared psychiatric drugs to drugs for general medical conditions and found that psychiatric medications were not generally less effective.[51] For example, lithium was associated with a reduction in bipolar relapse rates from 61% to 40%. Ultimately, whether psychiatric treatments are effective is a difficult empirical question that cannot be resolved here. However, psychiatry’s effectiveness is certainly not spectacular, and its remarkable failures cast doubt on the value of psychiatric concepts.

2.2.2 Social Costs: Oppression, Stigmatization, Marginalization

Mental illness may also have serious ethical harms that justify revising or rejecting the concept. For example, people judged as mentally ill can be involuntarily committed and are often deprived of freedom in psychiatric wards.[52] The concept is also often used to deny employment, legal rights, equal treatment, and epistemic status to those who are seen as mentally ill. In this way, classifying people as mentally ill may function as a mechanism of social control, “a cunning way of excluding certain people or certain patterns of behavior.”[53] Perhaps the concept gives pseudo-medical authority to practices of ostracism and moral condemnation.[54] Some argue that the concept should be changed or abolished to prevent these normative harms.

Some argue madness is fundamentally a failure to coordinate one’s behavior correctly with society, or a failure to conform to social and economic norms. Under this view, the DSM is a device to evaluate and improve the administration of human capital, and to predict “risks connected to the future exploitation of such capital.”[109] Psychiatry often “provides concepts and languages for marketers to use,” and the mental illness concept is essential to the pharmaceutical industry, which often “markets diseases in the expectation that sales of the pills will follow.”[110] Major depression linked a wide range of common symptoms to a purported natural kind, which was an “enormously profitable gift to the pharmaceutical industry,” making SSRIs the bestselling drug category in the US, with almost 10% of the population using them.[111] If mental illness functions to justify arbitrary discrimination against those who infringe socio-economic norms, then it may not be a concept worth keeping.

Mental illness does often serve to legitimate the rejection, dismissal, or marginalization of ‘mentally ill’ people. This often employs weaponized uses of the mental illness concept, like “crazy,” “insane,” “loony,” and at least 250 other stigmatizing labels.[55] Bolinger argues that these terms are slurs, as they insult targets based on their group membership, reinforce “the assumption that people with mental illnesses ought to be generally dismissed as epistemic agents,” and represent mentally ill people as deserving bad treatment.[56]

Stigmatization is a major cost of the concept of mental illness. Internalized stigma explained 74% of the variance in suicide risk for individuals with schizophrenia,[57] and correlates with higher symptom severity.[58] Even after multivariate analysis, internalized stigma is associated with more suicidal ideation, suicidal risk, number of suicide attempts, and depression.[59] A longitudinal research design also found that self-stigma was significantly associated with suicidal ideation.[60] Education on mental illness does not improve outcomes – it tends to worsen them. Psychoeducational programs are associated with increased suicidality, and awareness of illness is related to suicide risk.[62] Adolescents who self-label as mentally ill had higher ratings of self-stigma and depression.[61] Another study found that developing insight into having a mental illness increased depression.[63] Mental illness serves to promote stigmatizing views, and therefore this concept may be more harmful than helpful.

Leslie argues that certain linguistic constructions like generic concepts can encourage essentializing social kinds, leading to both cognitive mistakes and harmful stereotyping.[64] In line with this argument, extensive surveys and experiments have shown that essentialist thinking about mental illness is linked to stigma. Both laypeople and clinicians tend to believe that mental disorders are discrete, biologically based, and have inherent causes and properties, showing that essentialism dominates psychiatry and folk thinking.[65] People who endorse the biomedical mental illness concept distance themselves more from those seen as mentally ill, perceive them as more dangerous, have lower expectations of their recovery, and show more punitive behavior.[66]

Finally, a key idea of Hacking’s work is that “people spontaneously come to fit their categories,” and categorization creates new kinds of people.[67] It is not just that ‘what is measured can be managed,’ but that what is measured can be created. For example, Hacking shows that the classification of multiple personality disorder in 1875 created a rush of people who exhibited the syndrome.[68] Diagnostic categories also create corresponding identities. People tend to ‘have’ physical illnesses but ‘be’ mental illnesses. For example, diagnosed individuals have extreme difficulty de-labeling from psychiatric disorders like “bipolar,” “anorexic,” or “OCD.”[69] As one patient said, “we start to define ourselves in a way that’s hard to break because we really believe that’s who we are.”[70] This may lead persons to adopt a ‘sick role’ that hinders their recovery and flourishing.

Diagnosed individuals tend to understand their own behavior in terms of dysfunction, and often identify as disordered their entire lives. People adapt to the concepts used to represent them. For instance, oppositional defiant disorder stigmatizes defiance as an illness, resulting in discipline practices that disproportionately harm young Black men – and if ODD is an interactive kind, those diagnosed with the disorder may “respond to their classification by exhibiting closer approximations to it.”[71] Clearly, mental illness can create group identities or new kinds of people. If this identity-creation has negative results, this is a reason to reject or modify mental illness.

3. Conceptual Engineering

3.1 Why engineer mental illness?

Mental illness is uniquely amenable to conceptual engineering. First, it is easier to engineer than many other concepts. Unlike concepts like woman, the meaning of mental illness is heavily influenced by a central body (the American Psychiatric Association, through the DSM-5), and thus its intension can be more easily changed by convincing that body to revise its definition. Mental illness is no stranger to conceptual engineering. Previous efforts have successfully changed the concept, e.g. modifying the intension to exclude social deviance and removing homosexuality from the extension. In the 1950s the Renard School of psychiatry helped restructure mental illness from a psychoanalytic to a biomedical concept.[72] Of course, we should try to improve even the most difficult-to-change concepts if we have good normative reasons to do so. But mental illness is a low-hanging fruit that can serve as a proving ground for conceptual engineering efforts.

Additionally, as argued in section 2.1.1, mental illness is more of a human kind than a natural kind. Natural kinds can retain meaning despite changes in use. For instance, the extension and use of “number” have changed to include imaginary numbers and more, but the meaning of “number” itself remains the same.[73] If it is true that natural kinds have non-plastic meanings, they may be difficult to re-engineer. Human kinds are more tractable for engineering projects because their meaning is largely defined by their use in social contexts. As Simion argues, “when it comes to concepts representing social rather than natural kinds, by conceptually engineering, we would be, in effect, changing the world.”[74] Insofar as mental illness is a social/human kind, and language is constitutive of social reality, changing the concept may change the world itself. But this is a double-edged sword: changing the concept may also require changing the structures of the social world (reality engineering).[75]

This project also has importance beyond mental illness. As Cappelen points out, a general theory of conceptual engineering can guide specific projects, and these practical projects can inform the theory.[76] Exploring or implementing changes can improve our understanding of how conceptual engineering works in practice. It can also uncover issues and approaches that apply to other conceptual engineering initiatives. These features of mental illness make it a vital area for conceptual engineering.

Proposals to modify mental illness will generally fall into three categories that correspond to Cappelen’s varieties of ameliorative strategies.[77] First, abandonment proposals argue the concept should be eliminated entirely. Second, meaning change proposals argue for keeping the lexical item “mental illness” while its meaning is revised. Third, some proposals argue that both the lexical item and the meaning of mental illness should be revised. I do not exhaustively survey possible proposals but construct examples of each type.

All conceptual engineering proposals, especially the third type, will tangle with difficult issues in topic continuity. If mental illness is altered, how do we know if we are still addressing the same idea and have not simply changed the subject? Conceptual engineers can use several replies to the topic continuity objection that are addressed elsewhere.[78] The proposals below are united in that they address (a) the same existing mental illness concept, and (b) attempt to fulfill the function of this concept in better ways. More radical proposals not discussed here may argue that inquiry into mental illness should be abandoned entirely, its function completely discarded.

3.2 Abandonment

3.2.1 Complete abolition

Advocates of abolition might argue that the concept’s epistemic deficits and normative harms are so substantial that we would be better off without it. These approaches may or may not provide an alternative to fill the resulting conceptual vacuum. However, these proposals face serious challenges. Without mental illness, how can the functions of this concept be pursued? How can psychiatric inquiry proceed? Will those with neurological or mental disorders be left without hope of treatment? These challenges are daunting enough that very few propose the abolition of mental illness.

However, in Abolishing the Concept of Mental Illness, Richard Hallam takes these challenges on. He argues that “psychiatry does not have to base itself on a presumption of pathology,” and that “if the concept of mental illness were to be abolished, our response to woes would have to be thought through anew.”[79] Mental and behavioral differences should be referred to in “a more neutral way” that allows individuals to construct “non-illness identities.”[80] These differences can still be studied and treated (if the individual chooses). However, they should not be pathologized. Abolition may therefore avoid many of the harms of stigmatization and negative identity-creation, while allowing scientists to study and develop treatments for neural differences in a less biased way.

3.2.2 Abolish overarching concept, keep (some) sub-concepts

Others argue that individual diagnostic categories are worth keeping, but that the overarching type concept mental illness is unnecessary and should be abolished. After all, medicine does not need to define a unitary and generalized disease concept to effectively study and treat specific physical ailments.[81] Can a single representation of mental illness really be useful in the immense variety of contexts it is applied to? As Jaspers writes, “we do not need the concept of ‘illness in general’ at all and we now know that no such general and uniform concept exists.”[82] As such, psychiatry should jettison the abstract, all-encompassing definition of mental illness and the finite lists of illnesses grouped under this concept. Individual token concepts, like autism, will be kept only if they prove valuable.

Some token concepts under the overarching mental illness concept might be particularly problematic. For example, Charland argues that cluster-B personality disorders are filled with moral judgements masked by clinical descriptive language.[102] Antisocial and narcissistic personality disorders, for instance, are defined by clearly normative concepts like dishonesty and recklessness. They require essentially moral treatment that changes the individual’s moral character. Perhaps concepts like narcissistic personality disorder should be abolished entirely. Instead of treating these conditions as biomedical concepts, we could simply describe them with moral concepts. However, clusters A and C are less normative, as they are defined by descriptive empirical conditions. For example, schizoid personality disorder is specified by anhedonia, lack of close friends, and solitary activities — qualities that are in principle empirically observable. These concepts may be kept. This example shows how it could be possible to eliminate or alter token concepts under the overarching mental illness concept, without altering the type concept itself.

Proponents list several benefits of this kind of conceptual move. First, abolishing the overarching concept may have epistemic benefits, allowing researchers to accurately represent the natural world and make progress in understanding specific mental conditions. While individual mental disorders like bipolar disorder and autism may be natural kinds, the mental illness concept itself is not a natural kind, as it is a collection of distinct conditions with no defining natural properties in common.[83] Grouping phenomena into mental illness might be useful if this category allowed us to see high-level patterns, but this is not the case; there are no general patterns or features that unite all these mental conditions. Scientific advances in psychiatry, neurobiology, and genetics indicate that there are “inherently fuzzy boundaries between disorder and non-disorder.”[84] Instead, this overarching concept may encourage overgeneralizations and bad inferences about all of its sub-concepts.

Second, as section 2.2 shows, grouping people under mental illness also allows for oppression and stigmatization. Removing the overarching concept could help prevent the generalization that sustains these harmful social effects.

3.3 Keep lexical item, change meaning

3.3.1 Haslangerian Amelioration

Haslanger argues that we should change the meaning of certain concepts to achieve ethical aims like social justice. The key question is “whether tracking, communicating, and coordinating around” the concept is a good idea.[85] Given the concept’s role in oppression, perhaps we could construct an ameliorative new definition of mental illness in a Haslangerian fashion:

A group G is “mentally disordered” or “mentally ill” (in context C) iffdf G’s members exhibit similar behaviors, thoughts, or psychologies (in C); are subject to negative treatments including but not limited to subordinate status, reduced agency, and ignored speech and thought; and the members are “marked” by the dominant ideology (in C) as a target for these negative treatments by neurobiological or behavioral features presumed to be evidence of diminished or flawed mental capacities.

Would this definition be emancipatory? At the very least, this definition reveals “features of our meanings that we were mostly unaware of,”[86] as it exposes an ideology of marginalizing the mentally ill. Perhaps this new concept “cuts at the social joints” more effectively by explaining how a group is oppressed based on certain marks.[87] This amelioration might also reduce oppression, as “mentally ill” would no longer imply that someone is less deserving of equal treatment, but rather that they happen to be marginalized based on a mental feature. It would also allow the “mentally ill” to organize around the shared condition of being oppressed by sanism[88] or ableism. What needs changing is not the individual, but the social structures that oppress and fail to accommodate the individual.

This proposal is vulnerable to many of the same criticisms that have been aimed at Haslanger’s projects. First, this amelioration may be a topic change—we are no longer talking about mental illness. Second, this amelioration is extremely difficult to achieve. Why fight two battles: (a) showing how a group is oppressed, and (b) attempting to change the use of words for this group in counter-intuitive ways?[89] If we pursue (a) alone, we need not revise the concept at all; we can instead improve our understanding of the existing concept, recognizing that mental illness functions to oppress and marginalize certain groups. Attempting only (a) seems simpler, and perhaps more effective.

Finally, a critic might respond that mental illness is not analogous to race and gender, because it is actually the case that people deserve different treatment (e.g. less epistemic trust) based on certain mental features. For instance, if a person has severe brain damage, or is currently in schizophrenic psychosis, perhaps we shouldn’t give their statements exactly the same weight as those of a normal epistemic agent. However, it seems better to evaluate statements on their merits rather than by the issuing agent. And often mental illness is simply applied to agents whose speech one would like to reject.

3.3.2 Descriptive Reformulation

The descriptive reformulation project argues that we should revise all mental illnesses so that they depend entirely on nonmoral concepts and conditions that can be identified empirically.[90] Under this project, mental illness would essentially refer to physical illnesses of the brain and nervous system that lead to dysfunction describable in an evaluatively neutral way—e.g. without appealing to social norms or moral standards. Advocates claim this project can both avoid normative judgement and set psychiatry on stronger scientific and epistemic grounds.

For instance, some researchers argue that psychiatry should adopt a ‘stratified medicine’ approach toward mental illness, aimed at identifying biomarkers or cognitive tests that stratify each mental disorder phenotype “into a finite number of treatment-relevant subgroups.”[91] Some major recent projects have attempted to create strictly biological classifications of mental disorders which do not map onto existing DSM-5 diagnoses.[92] Proponents of this project may argue that if a candidate “mental illness” does not correspond to an identified neurobiological dysfunction, then it is not a mental illness.

The primary objection to this project is that it is not possible. First, scientific evidence casts doubt on the existence of descriptive properties, like biomarkers, that could qualify something as a mental illness. Second, there is no way to call something a “mental illness” without using normative concepts of some kind. Even in medicine, health rests on a standard of well-being or functioning that involves normative judgement. Some proponents may argue that dysfunction can be evaluated descriptively. For example, perhaps we can identify evolutionary dysfunctions that correspond to mental illnesses. But this still involves normatively disvaluing evolutionary dysfunction. Furthermore, many mental illnesses have adaptive benefits,[93] and evolution alone cannot entail that any particular use of a trait is ‘more functional.’ It seems that any evaluation of dysfunction requires normativity.

However, this project could be salvaged by altering it slightly. Perhaps we should abandon the normative notion of dysfunction as well. Psychiatry should instead develop value-neutral classifications of behavioral, psychological, and neurobiological conditions, each associated with treatments that individuals can select if they choose.[94] None of these conditions would be considered dysfunctions or classified into illnesses.

3.4 Change both lexical item and meaning

3.4.1 Replace with Reclaimed Term

Some may argue that abnormal psychologies are not negative, and thus should not be called ‘mental illnesses.’ As one schizophrenic individual wrote:

“I consider myself the luckiest of individuals and I am most pleased with this mind…My life is an adventure, not necessarily safe or comfortable, but at least an adventure.”[95]

Many ‘mentally ill’ people agree. Advocates of this revision argue that, just as being disabled is not having a “broken or defective body” but simply a minority body, having a ‘mental illness’ may just be having a minority brain.[96] This is not a bad-difference, but a mere-difference. For disabled people, it is the “experience of being disabled that is itself constitutive of some of the goods in their lives.”[97] In the same way, mental illness can be essential to certain goods—for instance, “Madness might represent another possible way of seeing.”[98]

Thus, some advocate a neutral or positive concept for these states. First, the revisionist could keep “mental illness,” changing its meaning so that it has no negative evaluation or connotation. For example, terms like “queer” and “crip” were reclaimed not by changing who the term applied to, but by changing the “affective, expressive component in the concept.”[99] However, this kind of revision is difficult when the term “illness” carries nearly inbuilt negative evaluations and connotations. Second, the revisionist could abandon “mental illness,” and replace it with a new lexical item with a new meaning. This concept could be (1) a reclaimed term with existing negative connotations, like “mad,” “crazy,” or “insane,” (2) a currently positive or neutral term like “shaman” or “neurodivergent,” or even (3) a neologism. Replacing mental illness with a more positive conception may improve our social practices towards psychological difference.

4. Conclusion

Conflicts over the meaning of mental illness are proxy battles, the linguistic site of an underlying struggle over the purposes of psychiatry. The concept has immense impact. Falling within the extension of mental illness can enable access to treatment and insurance, legal protection, and entry to support and advocacy groups. It can also lead to involuntary commitment, social stigma, and exclusion. Given its significance, ensuring that mental illness fulfills our epistemic and ethical aims is critical. If the concept is defective, it could lead scientific efforts astray; if it has negative ethical effects, revising, replacing, or even abandoning it could help prevent harm.

Most researchers recognize that the concepts and terms of psychiatry can be revised: “as a linguistic sign, madness becomes available for our critical manipulation.”[100] What is not clear is how these concepts currently function, what they should mean, and how we can change them. Through conceptual analysis, conceptual ethics, and conceptual engineering, this essay explores these issues. By introducing the fruitful methodology of conceptual engineering to psychiatry, philosophers can develop and clarify their descriptions, critiques, and proposals for conceptual improvement.

Bibliography

“Psychiatrists Market By Segmentation (Mental Disorder Type, Patient Type, Geography), By Trends, By Restraints, By Drivers, By Major Competitors – Global Forecasts To 2023.” The Business Research Company. January 2020.

Ahn, Woo-kyoung, Elizabeth H. Flanagan, Jessecae K. Marsh, and Charles A. Sanislow. “Beliefs about essences and the reality of mental disorders.” Psychological Science 17, no. 9 (2006): 759-766.

American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub, 2013. Pg. 671.

Banicki, Konrad. “Personality disorders and thick concepts.” Philosophy, Psychiatry, & Psychology 25, no. 3 (2018): 209-221.

Beck, Angela J., Cory Page, J. Buche, Danielle Rittman, and Maria Gaiser. “Estimating the Distribution of the US Psychiatric Subspecialist Workforce.” Ann Arbor: University of Michigan School of Public Health Workforce Research Center (2018).

Bird, Alexander, and Emma Tobin. “Natural kinds.” Stanford Encyclopedia of Philosophy (2008).

Bolinger, Renee (forthcoming). The Language of Mental Illness. In Justin Khoo & Rachel Katharine Sterken (eds.), Routledge Handbook of Social and Political Philosophy of Language. Routledge.

Boyd, Jennifer E., Emerald P. Adler, Poorni G. Otilingam, and Townley Peters. “Internalized Stigma of Mental Illness (ISMI) scale: a multinational review.” Comprehensive Psychiatry 55, no. 1 (2014): 221-231.

Brückl, Tanja M., Victor I. Spoormaker, Philipp G. Sämann, Anna-Katharine Brem, Lara Henco, Darina Czamara, Immanuel Elbau et al. “The biological classification of mental disorders (BeCOME) study: a protocol for an observational deep-phenotyping study for the identification of biological subtypes.” BMC psychiatry 20 (2020): 1-25.

Cappelen, Herman. Fixing language: An essay on conceptual engineering. Oxford University Press, 2018.

Carballo, Alejandro Pérez. “Conceptual evaluation: epistemic.” In Alexis Burgess, Herman Cappelen & David Plunkett (eds.), Conceptual Ethics and Conceptual Engineering. Oxford, UK: Oxford University Press (2020). Pg. 304-332.

Carvalho, André F., Marco Solmi, Marcos Sanches, Myrela O. Machado, Brendon Stubbs, Olesya Ajnakina, Chelsea Sherman et al. “Evidence-based umbrella review of 162 peripheral biomarkers for major mental disorders.” Translational Psychiatry 10, no. 1 (2020): 1-13.

Colton CW, Manderscheid RW (2006). Congruencies in increased mortality rates, years of potential life lost, and causes of death among public mental health clients in eight states. Prevention of Chronic Disease. www.cdc.gov/pcd/issues/2006/apr/05_0180.htm.

Cooper, Rachel. Classifying Madness: A Philosophical Examination of the Diagnostic and Statistical Manual of Mental Disorders. Vol. 86. Springer Science & Business Media, 2006.

Cosgrove, Lisa, and Harold J. Bursztajn. “Toward credible conflict of interest policies in clinical psychiatry.” (2009).

Davidson, Arnold I. “Diseases of sexuality and the emergence of the psychiatric style of reasoning.” Meaning and Method: Essays in Honor of Hilary Putnam (1990): 295.

Demazeux, Steeves, and Patrick Singy. The DSM-5 in Perspective. New York, NY: Springer, 2015. http://dx.doi.org/10.1007/978-94-017-9765-8.

Drescher, Jack. “Out of DSM: Depathologizing homosexuality.” Behavioral Sciences 5, no. 4 (2015): 565-575.

Dupré, John. “Natural kinds and biological taxa.” The Philosophical Review 90, no. 1 (1981): 66-90.

Foucault, Michel, Peter Stastny, and Deniz Şengel. “Madness, the absence of work.” Critical inquiry 21, no. 2 (1995): 290-298.

Foucault, Michel. Madness and civilization: A history of insanity in the age of reason. Vintage, 1988.

Goldberg, Ann. Sex, Religion, and the Making of Modern Madness: The Eberbach Asylum and German Society, 1815-1849. Oxford University Press on Demand, 1999.

Greenough, Patrick. “Conceptual Engineering via Reality Engineering.” Unpublished, under review. 2020.

Hacking, Ian. “Historical ontology.” In the Scope of Logic, Methodology and Philosophy of Science, pp. 583-600. Springer, Dordrecht, 2002.

Hacking, Ian. Mad travelers: Reflections on the reality of transient mental illnesses. University of Virginia Press, 1998.

Hacking, Ian. Rewriting the soul: Multiple personality and the sciences of memory. Princeton University Press, 1998.

Harper, Marjory, ed. Migration and Mental Health: Past and Present. Springer, 2016.

Read, John, and Niki Harré. “The role of biological and genetic causal beliefs in the stigmatisation of ‘mental patients’.” Journal of Mental Health 10, no. 2 (2001): 223-235.

Haslam, N. “Genetic essentialism, neuroessentialism, and stigma: Comment on Dar-Nimrod and Heine.” Psychological Bulletin 17: 819 – 824 (2011).

Haslam, Nick, Elise Holland, and Peter Kuppens. “Categories versus dimensions in personality and psychopathology: a quantitative review of taxometric research.” Psychological medicine 42, no. 5 (2012): 903-920.

Healy D, Harris M, Tranter R, Gutting P, Austin R, Jones-Edwards G, Roberts AP (2006). Lifetime suicide rates in treated schizophrenia: 1875–1924 and 1994–1998 cohorts compared. British Journal of Psychiatry 188, 223–228.

Healy D, Savage M, Michael P, Harris M, Hirst D, Carter M, Cattell D, McMonagle T, Sohler N, Susser E (2001). Psychiatric bed utilisation: 1896 and 1996 compared. Psychological Medicine 31, 779–790.

Healy, David, and Michael E. Thase. “Is academic psychiatry for sale?” The British Journal of Psychiatry 182, no. 5 (2003): 388-390.

Healy, David. Mania: A short history of bipolar disorder. JHU Press, 2008.

Horwitz, Allan V., and Jerome C. Wakefield. The loss of sadness: How psychiatry transformed normal sorrow into depressive disorder. Oxford University Press, 2007.

Howard, Jenna. “Negotiating an exit: Existential, interactional, and cultural obstacles to disorder disidentification.” Social Psychology Quarterly 71, no. 2 (2008): 177-192.

Jablensky, Assen, Norman Sartorius, Gunilla Ernberg, Martha Anker, Ailsa Korten, John E. Cooper, Robert Day, and Aksel Bertelsen. “Schizophrenia: manifestations, incidence and course in different cultures A World Health Organization Ten-Country Study.” Psychological Medicine Monograph Supplement 20 (1992): 1-97.

Jablensky, Assen. “Does psychiatry need an overarching concept of ‘mental disorder’?” World Psychiatry 6, no. 3 (2007): 157.

Jaspers, Karl. General psychopathology. Vol. 2. JHU Press, 1997.

Kapur, Shitij, Anthony G. Phillips, and Thomas R. Insel. “Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it?.” Molecular psychiatry 17, no. 12 (2012): 1174-1179.

Karagianis, Jamie, D. Novick, Jan Pecenak, Josep Maria Haro, M. Dossenbach, T. Treuer, W. Montgomery, R. Walton, and A. J. Lowry. “Worldwide‐Schizophrenia Outpatient Health Outcomes (W‐SOHO): baseline characteristics of pan‐regional observational data from more than 17,000 patients.” International Journal of Clinical Practice 63, no. 11 (2009): 1578-1588.

Kincaid, Harold, and Jacqueline A. Sullivan, eds. Classifying psychopathology: Mental kinds and natural kinds. MIT Press, 2014.

Kingdon, David, and Allan H. Young. “Research into putative biological mechanisms of mental disorders has been of no value to clinical psychiatry.” The British Journal of Psychiatry 191, no. 4 (2007): 285-290.

Kirk, Stuart A., David Cohen, and Tomi Gomory. “DSM-5: The delayed demise of descriptive diagnosis.” In The DSM-5 in perspective, pp. 63-81. Springer, Dordrecht, 2015.

Lam, Danny CK, Paul M. Salkovskis, and Hilary MC Warwick. “An experimental investigation of the impact of biological versus psychological explanations of the cause of “mental illness”.” Journal of Mental Health 14, no. 5 (2005): 453-464.

Leucht, Stefan, Sandra Hierl, Werner Kissling, Markus Dold, and John M. Davis. “Putting the efficacy of psychiatric and general medicine medication into perspective: review of meta-analyses.” The British Journal of Psychiatry 200, no. 2 (2012): 97-106.

McPherson, Tristam and David Plunkett. “Conceptual ethics and the methodology of normative inquiry.” Conceptual Engineering and Conceptual Ethics. 2020.

Mehta, S., and A. Farina. “Is being sick really better? Effect of the disease view of mental disorder on stigma.” Journal of Social and Clinical Psychology 16: 405 – 419 (1997).

Moses, Tally. “Self-labeling and its effects among adolescents diagnosed with mental disorders.” Social Science & Medicine 68, no. 3 (2009): 570-578.

Oexle, Nathalie, Nicolas Rüsch, Sandra Viering, Christine Wyss, Erich Seifritz, Ziyan Xu, and Wolfram Kawohl. “Self-stigma and suicidality: a longitudinal study.” European archives of psychiatry and clinical neuroscience 267, no. 4 (2017): 359-361.

Patel, Vikram, Shekhar Saxena, Crick Lund, Graham Thornicroft, Florence Baingana, Paul Bolton, Dan Chisholm et al. “The Lancet Commission on global mental health and sustainable development.” The Lancet 392, no. 10157 (2018): 1553-1598.

Phelan, Jo C. “Geneticization of deviant behavior and consequences for stigma: The case of mental illness.” Journal of health and social behavior 46, no. 4 (2005): 307-322.

Plunkett, David. “Conceptual history, conceptual ethics, and the aims of inquiry: a framework for thinking about the relevance of the history/genealogy of concepts to normative inquiry.” Ergo, an Open Access Journal of Philosophy 3 (2016).

Pols, Jan. “The Politics of Mental Illness: Myth and Power in the Works of Thomas S. Szasz.” Trans. Mira de Vries (1984/2005). Nijmegen, 1976. Pg. 178.

Preston, Beth. 1998. Why is a Wing like a Spoon? A Pluralist Theory of Function. The Journal of Philosophy 95 (5):215–54.

Prinzing, Michael. “The revisionist’s rubric: conceptual engineering and the discontinuity objection.” Inquiry 61, no. 8 (2018): 854-880

Putnam, H. (2002). The collapse of the fact/value dichotomy and other essays. Cambridge, MA: Harvard University Press.

Rose, Diana, Graham Thornicroft, Vanessa Pinfold, and Aliya Kassam. “250 labels used to stigmatise people with mental illness.” BMC health services research 7, no. 1 (2007): 97.

Scott, Charles, ed. DSM-5® and the Law: Changes and Challenges. Oxford University Press, 2015.

Scull, Andrew. Madness in Civilization: A Cultural History of Insanity, from the Bible to Freud, from the Madhouse to Modern Medicine. Princeton University Press, 2015.

Sharaf, Amira Y., Laila H. Ossman, and Ola A. Lachine. “A cross-sectional study of the relationships between illness insight, internalized stigma, and suicide risk in individuals with schizophrenia.” International journal of nursing studies 49, no. 12 (2012): 1512-1520.

Simion, Mona. “The ‘should’ in conceptual engineering.” Inquiry 61, no. 8 (2018): 914-928.

Stein, Dan J., Katharine A. Phillips, Derek Bolton, K. W. M. Fulford, John Z. Sadler, and Kenneth S. Kendler. “What is a mental/psychiatric disorder? From DSM-IV to DSM-V.” Psychological medicine 40, no. 11 (2010): 1759-1765.

Stretton, Serina. “Systematic review on the primary and secondary reporting of the prevalence of ghostwriting in the medical literature.” BMJ open 4, no. 7 (2014): e004777.

Surís, Alina, Ryan Holliday, and Carol S. North. “The evolution of the classification of psychiatric disorders.” Behavioral Sciences 6, no. 1 (2016): 5.

Szasz, Thomas. Manufacture of madness: A comparative study of the inquisition and the mental health movement. Syracuse University Press, 1997.

Testa, Megan, and Sara G. West. “Civil commitment in the United States.” Psychiatry (Edgmont) 7, no. 10 (2010): 30.

Tan, Chee‐Seng, Xiao‐Shan Lau, Yian‐Thin Kung, and Renu A/L. Kailsan. “Openness to experience enhances creativity: The mediating role of intrinsic motivation and the creative process engagement.” The Journal of Creative Behavior 53, no. 1 (2019): 109-119.

Tcherpakov, Marianna. “Drugs for Treating Mental Disorders: Technologies and Global Markets.” BCC Publishing. January 2011.

Thomasson, A. “A pragmatic method for normative conceptual work.” Conceptual Engineering and Conceptual Ethics. OUP (2020).

Touriño, R., Acosta, F. J., Giráldez, A., Álvarez, J., González, J. M., Abelleira, C., Benítez, N., Baena, E., Fernández, J. A., & Rodriguez, C. J. (2018). Suicidal risk, hopelessness and depression in patients with schizophrenia and internalized stigma. Actas Españolas de Psiquiatría, 46(2), 33–41.

Watters, Ethan. Crazy like us: The globalization of the American psyche. Simon and Schuster, 2010.

Zachar, Peter. “Psychiatric disorders are not natural kinds.” Philosophy, Psychiatry, & Psychology 7, no. 3 (2000): 167-182.

Appendix

Foucault, Szasz, problems with madness

Foucault cites the influential French psychiatrist Pinel, who argued that the mad should be treated as morally ill and not imprisoned. Insane people should be freed from their shackles, and treated with (a) silence, (b) encouragements to see their own reflection and recognize their madness, and (c) perpetual judgement by their caretakers to encourage more sane behavior. Foucault argues that this “liberation” is really a form of subjugation, meant to afflict the mad person with constant shame. Their punishment is made invisible and used to mold the ‘mad’ people into “disciplined bodies.” Mental illness is diagnosed by conduct but treated biologically.

Footnotes

  1. Thomasson, A. “A pragmatic method for normative conceptual work.” Conceptual Engineering and Conceptual Ethics. OUP (2020).
  2. However, the token concepts will be relevant examples to clarify the type concept. This also presents some issues in conceptual engineering, as concepts within the type are not uniform, and it’s possible some token mental illness concepts are better than others. I will address these issues in section 3.
  3. Stein, Dan J., Katharine A. Phillips, Derek Bolton, K. W. M. Fulford, John Z. Sadler, and Kenneth S. Kendler. “What is a mental/psychiatric disorder? From DSM-IV to DSM-V.” Psychological medicine 40, no. 11 (2010): 1759-1765.
  4. As defined in: McPherson, Tristam and David Plunkett. “Conceptual ethics and the methodology of normative inquiry.” Conceptual Engineering and Conceptual Ethics. 2020.
  5. See the evidence discussed in section 2.1.
  6. Marco Lauriola, Irwin P Levin, Personality traits and risky decision-making in a controlled experimental task: an exploratory study. Personality and Individual Differences, Volume 31, Issue 2, 2001, Pages 215-226, ISSN 0191-8869, https://doi.org/10.1016/S0191-8869(00)00130-6.
  7. Tan, Chee‐Seng, Xiao‐Shan Lau, Yian‐Thin Kung, and Renu A/L. Kailsan. “Openness to experience enhances creativity: The mediating role of intrinsic motivation and the creative process engagement.” The Journal of Creative Behavior 53, no. 1 (2019): 109-119.
  8. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub, 2013. Pg. 671.
  9. Banicki, Konrad. “Personality disorders and thick concepts.” Philosophy, Psychiatry, & Psychology 25, no. 3 (2018): 209-221.
  10. Putnam, H. (2002). The collapse of the fact/value dichotomy and other essays. Cambridge, MA: Harvard University Press.
  11. Thomasson (2020), pg. 444. Thomasson here is following Preston (1998) and Millikan (1984).
  12. American Psychiatric Association, pg. 62.
  13. Beck, Angela J., Cory Page, J. Buche, Danielle Rittman, and Maria Gaiser. “Estimating the Distribution of the US Psychiatric Subspecialist Workforce.” Ann Arbor: University of Michigan School of Public Health Workforce Research Center (2018).
  14. “Psychiatrists Market By Segmentation (Mental Disorder Type, Patient Type, Geography), By Trends, By Restraints, By Drivers, By Major Competitors – Global Forecasts To 2023.” The Business Research Company. January 2020.
  15. Tcherpakov, Marianna. “Drugs for Treating Mental Disorders: Technologies and Global Markets.” BCC Publishing. January 2011.
  16. Fulford, 96.
  17. Americans with Disabilities Act.
  18. Scott, Charles, ed. DSM-5® and the Law: Changes and Challenges. Oxford University Press, 2015.
  19. Serres, Michel. “The geometry of the incommunicable: madness.” In Davidson, Arnold Ira. Foucault and his interlocutors. University of Chicago Press (1997). Pg. 30.
  20. Foucault 2003, lecture of November 14, 1973.
  21. Link, Arthur S., and James F. Toole. “Presidential disability and the twenty-fifth amendment.” JAMA 272, no. 21 (1994): 1694-1697.
  22. See Foucault (1988), Goldberg (1999), Hacking (1998), Szasz (1997), and more.
  23. Plunkett, David. “Conceptual history, conceptual ethics, and the aims of inquiry: a framework for thinking about the relevance of the history/genealogy of concepts to normative inquiry.” Ergo, an Open Access Journal of Philosophy 3 (2016).
  24. Carballo, Alejandro Pérez. “Conceptual evaluation: epistemic.” (2020) In Alexis Burgess, Herman Cappelen & David Plunkett (eds.), Conceptual Ethics and Conceptual Engineering. Oxford, UK: Oxford University Press. Pg. 304-332.
  25. Bird, Alexander, and Emma Tobin. “Natural kinds.” Stanford Encyclopedia of Philosophy (2008).
  26. Dupré, John. “Natural kinds and biological taxa.” The Philosophical Review 90, no. 1 (1981): 66-90. He is following Quine (1969)’s account.
  27. Cooper, Rachel. Classifying Madness: A Philosophical Examination of the Diagnostic and Statistical Manual of Mental Disorders. Vol. 86. Springer Science & Business Media, 2006. Pg. 11.
  28. Zachar, Peter. “Psychiatric disorders are not natural kinds.” Philosophy, Psychiatry, & Psychology 7, no. 3 (2000): 167-182.
  29. Kincaid, “Defensible Natural Kinds,” in Kincaid and Sullivan (2014). Pg. 161.
  30. Hallam, Richard. Abolishing the concept of mental illness: Rethinking the nature of our woes. Routledge, 2018. Pg. 60.
  31. An umbrella review is a meta-analysis of meta-analyses.
  32. Carvalho, André F., Marco Solmi, Marcos Sanches, Myrela O. Machado, Brendon Stubbs, Olesya Ajnakina, Chelsea Sherman et al. “Evidence-based umbrella review of 162 peripheral biomarkers for major mental disorders.” Translational Psychiatry 10, no. 1 (2020): 1-13.
  33. Kingdon, David, and Allan H. Young. “Research into putative biological mechanisms of mental disorders has been of no value to clinical psychiatry.” The British Journal of Psychiatry 191, no. 4 (2007): 285-290.
  34. Kapur, Shitij, Anthony G. Phillips, and Thomas R. Insel. “Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it?.” Molecular psychiatry 17, no. 12 (2012): 1174-1179.
  35. Davidson, Arnold I. “Diseases of sexuality and the emergence of the psychiatric style of reasoning.” Meaning and Method: Essays in Honor of Hilary Putnam (1990): 295.
  36. Kincaid, Harold, and Jacqueline A. Sullivan, eds. Classifying psychopathology: Mental kinds and natural kinds. MIT Press, 2014. Pg. 51.
  37. Scull (2015), pg. 385.
  38. Haslam, Nick, Elise Holland, and Peter Kuppens. “Categories versus dimensions in personality and psychopathology: a quantitative review of taxometric research.” Psychological medicine 42, no. 5 (2012): 903-920.
  39. Davidson (1990), pg. 312.
  40. Drescher, Jack. “Out of DSM: Depathologizing homosexuality.” Behavioral Sciences 5, no. 4 (2015): 565-575.
  41. Harper, Marjory, ed. Migration and Mental Health: Past and Present. Springer, 2016.
  42. Hacking, Ian. Mad travelers: Reflections on the reality of transient mental illnesses. University of Virginia Press, 1998.
  43. Watters, Ethan. Crazy like us: The globalization of the American psyche. Simon and Schuster, 2010.
  44. Watters (2010), pg. 176.
  45. Colton and Manderscheid (2006).
  46. Healy et al (2006).
  47. Karagianis et al (2009).
  48. Patel et al (2018).
  49. Jablensky et al (1992).
  50. Leucht et al (2012).
  51. Testa and West (2010).
  52. Foucault, “Madness, the absence of work,” Critical inquiry (1995).
  53. Foucault 1988, pg. 498-501.
  54. Rose et al (2007).
  55. Bolinger, Renee (forthcoming). The Language of Mental Illness. In Justin Khoo & Rachel Katharine Sterken (eds.), Routledge Handbook of Social and Political Philosophy of Language. Routledge.
  56. Sharaf, Ossman, and Lachine (2012).
  57. Boyd et al (2014).
  58. Touriño et al (2018).
  59. Oexle et al (2017).
  60. Moses (2009).
  61. Cunningham Owens, D. G., A. Carroll, S. Fattah, Z. Clyde, I. Coffey, and E. C. Johnstone. “A randomized, controlled trial of a brief interventional package for schizophrenic out‐patients.” Acta Psychiatrica Scandinavica 103, no. 5 (2001): 362-369.
  62. Rathod, Shanaya, David Kingdon, Peter Smith, and Douglas Turkington. “Insight into schizophrenia: the effects of cognitive behavioural therapy on the components of insight and association with sociodemographics—data on a previously published randomised controlled trial.” Schizophrenia research 74, no. 2-3 (2005): 211-219.
  63. Wodak, Daniel, Sarah‐Jane Leslie, and Marjorie Rhodes. “What a loaded generalization: Generics and social cognition.” Philosophy Compass 10, no. 9 (2015): 625-635.
  64. Ahn, Woo-kyoung, Elizabeth H. Flanagan, Jessecae K. Marsh, and Charles A. Sanislow. “Beliefs about essences and the reality of mental disorders.” Psychological Science 17, no. 9 (2006): 759-766.
  65. See Haslam (2011), Mehta and Farina (1997), Lam, Salkovskis, and Warwick (2005), Phelan (2005), and Read and Harré (2001).
  66. Hacking, Ian. Historical Ontology. Harvard University Press, 2004. Pg. 108.
  67. Hacking, Ian. Rewriting the soul: Multiple personality and the sciences of memory. Princeton University Press, 1998. Pg. 16.
  68. Howard (2008).
  69. Howard (2008), pg. 7.
  70. Potter, Nancy Nyquist. “Oppositional defiant disorder: Cultural factors that influence interpretations of defiant behavior and their social and scientific consequences.” Classifying Psychopathology: Mental Kinds and Natural Kinds 175 (2014).
  71. Surís, Alina, Ryan Holliday, and Carol S. North. “The evolution of the classification of psychiatric disorders.” Behavioral Sciences 6, no. 1 (2016): 5.
  72. Greenough, Patrick. “Conceptual Engineering via Reality Engineering.” Forthcoming.
  73. Simion, Mona. “The ‘should’ in conceptual engineering.” Inquiry 61, no. 8 (2018): 914-928.
  74. Greenough (forthcoming).
  75. Cappelen (2018), chapter 4, pg. 51-53.
  76. Cappelen (2018), chapter 2, pg. 23.
  77. See Cappelen (2018), part 3; Thomasson (2020); and Prinzing (2018).
  78. Hallam, Abolishing the Concept of Mental Illness, pg. 16.
  79. Hallam, pg. 105.
  80. Jablensky, Assen. “Does psychiatry need an overarching concept of ‘mental disorder’?.” World Psychiatry 6, no. 3 (2007): 157.
  81. Jaspers, Karl. General psychopathology. Vol. 2. JHU Press, 1997.
  82. Zachar, in Kincaid and Sullivan (2014). Pg. 87.
  83. Jablensky, “Does psychiatry need an overarching concept of ‘mental disorder’?” (2007).
  84. Haslanger, Sally. “Going On, But Not in the Same Way.” In Alexis Burgess, Herman Cappelen & David Plunkett (eds.), Conceptual Ethics and Conceptual Engineering. Pg. 236.
  85. Haslanger (2020), pg. 253.
  86. Richard (2020), pg. 356.
  87. Sanism: discrimination and oppression against a mental trait or condition a person has or is judged to have.
  88. Richard (2020), pg. 370.
  89. Konrad (2018).
  90. Kapur et al (2012). Pg. 1176.
  91. Brückl, Tanja M., Victor I. Spoormaker, Philipp G. Sämann, Anna-Katharine Brem, Lara Henco, Darina Czamara, Immanuel Elbau et al. “The biological classification of mental disorders (BeCOME) study: a protocol for an observational deep-phenotyping study for the identification of biological subtypes.” BMC psychiatry 20 (2020): 1-25.
  92. Nesse, Randolph M. Good reasons for bad feelings: insights from the frontier of evolutionary psychiatry. Penguin, 2019.
  93. Kingdon and Young (2007), pg. 2.
  94. Cooper, Rachel. Psychiatry and philosophy of science. Routledge, 2014. Pg. 26.
  95. Barnes, Elizabeth. The minority body: A theory of disability. Oxford University Press, 2016. Pg. 7.
  96. Barnes (2016), pg. 111.
  97. Scull (2015), pg. 30.
  98. Richard (2020), pg. 373.
  99. Kirk, Stuart A., David Cohen, and Tomi Gomory. “DSM-5: The delayed demise of descriptive diagnosis.” In The DSM-5 in perspective, pp. 63-81. Springer, Dordrecht, 2015. Pg. 67.
  100. Simion, Mona. “The ‘should’ in conceptual engineering.” Inquiry 61, no. 8 (2018): 914-928.
  101. Charland, L. C. (2006). Moral nature of the DSM-IV cluster B personality disorders. Journal of Personality Disorders, 20, 116–25.
  102. Pols, Jan. “The Politics of Mental Illness: Myth and Power in the Works of Thomas S. Szasz.” Trans. Mira de Vries (1984/2005). Nijmegen, 1976. Pg. 178.
  103. Stretton, Serina. “Systematic review on the primary and secondary reporting of the prevalence of ghostwriting in the medical literature.” BMJ open 4, no. 7 (2014): e004777.
  104. Healy, David, and Michael E. Thase. “Is academic psychiatry for sale?” The British Journal of Psychiatry 182, no. 5 (2003): 388-390.
  105. Cosgrove, Lisa, and Harold J. Bursztajn. “Toward credible conflict of interest policies in clinical psychiatry.” (2009).
  106. Howard (2008).
  107. Hallam, Abolishing the Concept of Mental Illness, pg. 13.
  108. Fulford, Kenneth WM, Martin Davies, Richard Gipps, George Graham, John Sadler, Giovanni Stanghellini, and Tim Thornton, eds. The Oxford handbook of philosophy and psychiatry. OUP Oxford, 2013. Pg. 95.
  109. Healy, David. Mania: A short history of bipolar disorder. JHU Press, 2008. Pg. 227.
  110. Horwitz, Allan V. “11 The Social Functions of Natural Kinds: The Case of Major Depression.” Classifying Psychopathology: Mental Kinds and Natural Kinds (2014): 209.
  111. Hacking, Ian. “The looping effects of human kinds.” In D. Sperber, D. Premack, & A. J. Premack (Eds.), Symposia of the Fyssen Foundation. Causal cognition: A multidisciplinary debate (p. 351–394). Clarendon Press/Oxford University Press.

The Paradoxes of Joy and Suffering Abolition in Nietzsche

Note: All of Nietzsche’s works will be cited in paragraph citations with their standard abbreviations (e.g. BT for Birth of Tragedy) and their section, page, or aphorism numbers, while the translation/versions will be listed in the bibliography. All other sources will appear in footnotes. 

Two key paradoxes are built into Nietzsche’s views of suffering and joy. First, Nietzsche propounds the art and discipline of suffering while simultaneously praising happiness. This is the joy paradox. Second, Nietzsche denounces the wholesale abolition of suffering, but he also seeks to eliminate meaningless suffering. This is the suffering abolition paradox. I argue that Nietzsche has a complex, multifaceted account of suffering and joy that explains these apparent paradoxes. The first part of this paper reconstructs Nietzsche’s view of suffering, from its origins to his defense of its value. I also address several objections to this view, including the argument that some kinds of suffering are purely destructive and irredeemable. The second part traces Nietzsche’s less well-known view of the nature of joy and how it can be sought. Finally, the third part attempts to resolve the contradiction between these two aspects and outlines the prospect of a Nietzschean transhumanism.

1. Suffering

a. The Nature of Anguish

As per usual, Nietzsche begins in conversation with Schopenhauer and the Greeks. For Schopenhauer, life consists of endlessly chasing desires that can never be satisfied, making “life an unprofitable episode, disturbing the blessed calm of non-existence.”[1] He thus affirms the wisdom of Silenus: that it is best to have never existed, and second-best to die soon (BT §3). The constant source of suffering is not external, but within the individual’s will. The only liberation from this cycle of suffering-filled desire is the aesthetic contemplation that “lifts us out of real existence and transforms us into disinterested spectators of it.”[2] In these moments, the “fierce pressure of the will” is briefly extinguished, and we can experience sublime joy without desire.[3] The logical consequence of this view is that the complete cessation of the will would be ideal. Schopenhauer, with the Buddha, sought to eliminate the desires at the root of suffering.

Nietzsche accepts the noble truth[4] that life is suffering, but his response to this fact is different: like the tragic Greeks, he affirms both the will and the suffering it causes. The Greeks realized that suffering is inevitable in the fragile, imperiled, and chaotic human condition; they “knew and felt the terror and horror of existence” (BT §3). To even endure this terrible understanding, the Greeks had to invent art, myth, and the Olympian gods. The beautiful Apollonian dream-vision is related to the painful Dionysian reality in the same way “as the rapturous vision of the tortured martyr is to his suffering” (BT §3). The martyr envisions a salvation to redeem his pain, just as tragedy creates a beautiful narrative to instill meaning into suffering. Tragedy is not just a numbing drug or palliative, but an invigorating experience that brings exuberant health in the face of the worst suffering. Even if the tragedy’s plot is a series of disasters, it brings these events together to transfigure them into a joyful experience. The Hellenic pantheon also reflected human life rather than some other world, so the Greeks saw themselves glorified and made gods: beneath the “bright sunshine of such gods, existence is felt to be worth attaining” (BT §3). Greek myth-makers and tragic writers made life worth living despite its inherent suffering.

Raphael, The Transfiguration. Nietzsche uses this painting as an example in The Birth of Tragedy.

In this way the pain-prone, sensitive Greeks were able to courageously affirm their existence. Just as Raphael’s Transfiguration depicts “luminous hovering in purest bliss” above a world of woe and strife, the Greeks transfigured their pain into life-affirming tragic art (BT §4). The “hidden substratum of suffering” is not just a sideshow, but essential to creating beauty (BT §4). As Nietzsche exclaims, “how much must these people have suffered to be able to become so beautiful!” (BT §21). Ultimately, the cheerfulness of the Greeks did not rest on a contented freedom from suffering, but a powerful affirmation of it. Nietzsche continues to uphold the value of tragedy in his last works — “I promise a tragic age: the highest art in saying Yes to life, tragedy, will be reborn.”[5]

b. In Defense of Suffering

In Nietzsche’s view, “the problem is that of the meaning of suffering,” not suffering itself (WP #1052). Man is accustomed to pain and “does not repudiate suffering as such; he desires it, he even seeks it out, provided he is shown a meaning for it” (GM §3 #28). With this fundamental understanding, Nietzsche develops concepts that will imbue suffering with meaning — and not just any meaning, but a life-affirming meaning that will bring genuine health.

Condemning value-systems centered on pleasure and pain as shallow and naive, Nietzsche urges hedonists, utilitarians, pessimists, and Epicureans to look for higher values (BGE #255). He has a “higher compassion which sees further,” recognizing that these value-systems make man smaller in the long term (BGE #255). Nietzsche saw the British utilitarians of his time as seeking only a soporific, comfortable, mediocre, ‘herd animal’ kind of happiness (BGE #228). Those who “experience suffering and displeasure as evil, worthy of annihilation and as a defect of existence” merely subscribe to a “religion of comfortableness” (GS #338). Eliminating our species-preserving suffering would leave humanity anemic and unable to change, adapt, or resist, undermining the long-term future of mankind.

Most of all, Nietzsche condemns utilitarianism because of its “harmful consequences for the exemplary human being” (WP #399). He rejects the idea that the “ultimate goal” is the “greatest happiness of all” (SE §6). Nietzsche argues that the individual can “receive the highest value, the deepest significance” only by “living for the good of the rarest and most valuable exemplars, and not for the good of the majority” (SE §6). This reflects a critical idea: Nietzsche may not be speaking to all people, and his defenses of suffering may not apply to everyone. His intended audience may only be these extraordinary individuals. For these brave and creative individuals, pleasure and pain are always epiphenomena and not ultimate values. To achieve anything, we must seek out both.

While the hedonists may want to “do away with suffering” with some fantastic means, Nietzsche’s higher souls want it “higher and worse than it ever was!” (BGE #255). Well-being as the hedonists understand it would be a contemptible endpoint for humanity. After all, he asks:

The discipline of suffering, of great suffering – don’t you realize that up to this point it is only this suffering which has created every enhancement in man up to now? That tension of a soul in misery which develops its strength, its trembling when confronted with great destruction, its inventiveness and courage in bearing, holding out against, interpreting, and using unhappiness…

(BGE #255)

Man contains both chaotic, formless clay and the hammer to shape this rough clay into something more. We cannot have pity for the clay, for the parts of ourselves that must and “should suffer” to achieve positive transformation (BGE #255). The creature in us must suffer so that the creator in us can persevere and grow. For instance, by imposing the suffering of asceticism on himself, the philosopher “affirms his existence” (GM §3 #8). He strengthens his dominant instinct — to spirituality, knowledge, or insight — by rejecting small pleasures and sensualities. Furthermore, if we value the overcoming of resistance (the will to power), then we must also value the resistance itself – and the suffering it entails.

Human clay can be shaped by suffering.

While we moderns mostly know agony through fantasy, the ancients trained themselves in real suffering. For them, Nietzsche argues, pain was less painful. Meanwhile, our lack of habituation to pain explains why “inevitable mosquito bites” seem to us like an objection against life as a whole (GS #48). The solution to this kind of oversensitive suffering may therefore be more suffering, so that we can become whole and strong enough to withstand the unavoidable ills of existence. Then we may welcome any kind of suffering because it will strengthen us, just as distress makes a bow tauter (GM §1 #12). Like our ancestors, we might even begin to see suffering as a virtue and as a genuine enchantment to life rather than an argument against life.

In line with Nietzsche’s arguments, Haidt argues that some kinds of suffering can create posttraumatic growth.[6] This is also known as anti-fragility,[7] and it is more than mere stable resilience, as it can result in positive transformation and improvement over the previous state. Posttraumatic growth has been empirically documented in many circumstances, including in refugees, Holocaust survivors, cancer patients, and prisoners.[8] This research finds that an individual’s posttraumatic growth is often predicted by their ability to make the experience meaningful. As Nietzsche provides an abundance of tools for meaning-making, he encourages growth and enables more anti-fragility.

Furthermore, certain kinds of truth and knowledge are inextricably connected to suffering. Characters like Prometheus and Faust, who steal knowledge from beyond the human world and are thus tortured for eternity, represent this fundamental fact: truth has a price. This type of exemplary individual “voluntarily takes upon himself the suffering inherent in truthfulness” to create a complete revolution in himself (SE §4). This heroic individual who tries “to transcend the curse of individuation” and “attain universality” inevitably suffers from experiencing the hidden primordial contradictions of existence (BT §9). The value of an individual can even be assessed by how much truth they can endure (GM §3 #19; BGE #39).

Enduring suffering is especially critical for the free spirits that Nietzsche considers his audience: “we first had to experience the most varied and contradictory states of distress and happiness in our souls and bodies, as the adventurers and circumnavigators of that inner world called ‘man’” (HH #7). Nietzsche implores these knowledge-seeking free spirits to “collect the honey of knowledge from diverse afflictions, disturbances, illnesses,” exploring all types of experience while “despising nothing, losing nothing, savoring everything.”[9] As voyagers in the state-space of consciousness, the free spirits must test the entire complex palette of human experiences, learning their nature and their interrelations. Therefore, extraordinary truth-seeking individuals cannot value knowledge without also valuing suffering.

Ultimately, Nietzsche defends suffering as a kind of transformative experience.[10] Suffering can be personally transformative in helping us develop ourselves, recognize our authentic aims, and become stronger, more life-affirming, and more anti-fragile beings. Suffering can also be epistemically transformative. At the very least, suffering provides knowledge about certain qualia: it tells us what some kinds of experiences are like. But pain can also provide previously inaccessible knowledge, restructuring our entire worldview. Some knowledge worth pursuing may be inseparable from suffering. As suffering is an inherent feature of existence, Nietzsche argues that we should affirm it and make it meaningful rather than avoid it.

c. Critiques of Suffering

I will address three primary critiques of Nietzsche’s defense: (1) some responses to suffering are negative, (2) simply affirming suffering because it exists commits the genetic fallacy, and (3) some forms of suffering are inherently negative and irredeemable.

First, Nietzsche agrees that there are many negative reactions to suffering. He makes it clear that there are both positive (creative) and negative (destructive) reactions to suffering. Positive reactions include sublimation, virtue development, meaning-making, and creativity. Negative reactions include ressentiment, pity, and collapse. Ressentiment consists of swallowing anger, fear, hatred, or other negative emotions and letting them fester.[11] The resentful individual cannot forget some past wrong or suffering, and becomes nasty, filled with rancor, consumed by a desire to rectify or avenge a past event. This is just one example of a negative reaction. However, our harmful responses to suffering alone are not an argument against suffering itself.

Like mold, suffering can fester, grow, and turn into hateful ressentiment.

Furthermore, some interpretations of suffering are negative. For instance, in Nietzsche’s view, Christianity tells those searching for something to blame for their suffering that “you alone are to blame for it!” (GM §2 #15). This provides some meaning – we suffer because we are sinful and infinitely guilty. As we are desperate for meaning, we cling to this interpretation: “any meaning is better than none at all” (GM §3 #28). But ultimately this meaning only brings “deeper, more inward, more poisonous, more life-destructive suffering” (GM §3 #28). For the Christian then redirects his ressentiment back onto himself, lashing himself for his guilt. Christianity thus encourages moralistic thinking that increases guilt and suffering.

Nietzsche also rejects slave morality in part because of its association with misery. Slave values reflect the ressentiment of the weak and suffering (GM §1 #16). Slave morality itself was created to relieve suffering, an upwelling of a long-impotent bitterness that finally finds expression in the revolt against the master. Clearly Nietzsche does not believe all suffering is positive, for he argues that “the preponderance of feelings of displeasure over feelings of pleasure is the cause of this fictitious morality and religion” (AC #15). The slaves compensate themselves for the suffering inflicted upon them by the masters with a psychological revenge: negating the values of the masters and painting them as evil. Under their revaluation of morals, “the suffering, deprived, sick, and ugly alone are pious,” while the “powerful and noble” are painted as evil (GM §1 #7). The slave is induced to follow these inverted values because of the promise of heaven as the reward for a true believer who suffers at the hands of evil. The priest thus manipulates the slaves’ sufferings to wreak revenge on the masters.

While Christianity, slave morality, and afterworlds all affirm or provide meaning for suffering in their own ways, Nietzsche opposes them all. They end up only making suffering worse, harming exemplary individuals, negating life, and damaging even their adherents. In contrast to the guilt-manufacturing Christianity, the Greeks vindicated humanity by making the gods guilty, as these gods “took upon themselves, not the punishment, but what is nobler—the guilt” (GM §2 #23). As the gods were the source of wickedness, man was liberated from self-loathing and guilt. Nietzsche sees guilt and shame as pathologies that can be overcome by cultivating a critical awareness, a sense of generosity and self-respect, and an unashamed affirmation of life. Clearly, Nietzsche recognizes that reactions to and interpretations of suffering are not all equal, and the value of suffering will often be dictated by our response to it.

Second, some may argue that Nietzsche’s affirmation of suffering commits the genetic fallacy. But even if it is true that humans were shaped by evolutionary or historical forces both to suffer and to see suffering as a virtue, this does not imply we must keep suffering. That would make the fallacious assumption that the origins of a concept should dictate its current use.[12] Furthermore, the claim that ‘life is suffering’ cannot entail the conclusion that ‘suffering ought to be affirmed.’ As Hume showed, descriptive claims cannot imply normative claims; an ‘ought’ cannot be derived from an ‘is.’[13] Finally, a critic of suffering might argue that all the ‘goods’ of suffering are circular and non-transferable. These skills are only beneficial insofar as suffering exists. Yes, suffering may help us develop certain skills, including the capacity to respond to unpredictable suffering, to revise goals in calamity, and to move past loss. But these are essentially ‘virtues of dealing with suffering,’ or methods of getting used to it. It seems circular to claim that suffering should exist because of the virtues it produces while these virtues are themselves justified by the existence of suffering.

Third, some suffering seems unaffirmable. Purely destructive agony can only cause harm, undermining health, strength, and joy and preventing the affirmation of life; it is therefore antithetical to Nietzsche’s own values. While Nietzsche’s defense emphasizes the growth and transformation enabled by suffering, he seems to ignore the kind of suffering that falls outside this description.[14] Some suffering does not even involve resistance or overcoming – sometimes, it is just powerlessness, subjection, and destruction. These painful states are a form of “hermeneutical death,” as they destroy the victim’s abilities to interpret suffering or make meaning from it.[15] As Levinas writes, this kind of suffering “rends the humanity of the suffering person,” and “intrinsically, it is useless, ‘for nothing.’”[16] Critics may argue that Nietzsche’s praise of suffering ignores the existence of this purely destructive and life-negating suffering.

Sísifo by Tiziano (Titian). Sisyphus is the existentialist symbol of pointless, repetitive suffering.

However, Nietzsche is not committed to the position that all pain develops us. His passages do not claim that all suffering should be unequivocally affirmed, and he even objects to senseless suffering. As he writes, “what really arouses indignation against suffering is not suffering as such but the senselessness of suffering” (GM §1 #7). What Nietzsche rejects is the “mortal hatred for suffering in general” (BGE #202), a position that universally rejects all kinds of negative experience. Nietzsche’s view is more multidimensional, affirming some kinds of upbuilding suffering while rejecting other kinds of destructive suffering (e.g. the festering, passive suffering that leads to ressentiment). He clearly supports the suffering that forges individuals from chaotic fragments into stronger, more creative beings, but nowhere defends purely destructive agony. He also implies that disciplined and voluntary suffering is more likely to be positively transformative, rather than the forced and externally imposed suffering that tends to be destructive (GS #48, BGE #62). Critiques of Nietzsche’s views that rely on the existence of extreme and pointless suffering are therefore strawman arguments, attacking a position that Nietzsche does not even defend. Of course, one could still argue that Nietzsche’s views of suffering have a key blind spot, as they fail to explicitly address useless, extreme suffering.

2. Joy


Despite his advocacy for transformative suffering, Nietzsche also extols emotional states that seem to be the opposite of pain: well-being, joy, happiness, and jubilation. He proclaims that the future needs “a new health, stronger, more seasoned, tougher, more audacious, and gayer than any previous health,” and praises the ideal of “a superhuman well-being and benevolence” (GS #382). He dreams of “human beings distinguished as much by cheerfulness…more fruitful human beings, happier beings!” (GS #283) He urges poets, artists, and philosophers to “let your happiness too shine out,” instead of “painting all things a couple of degrees darker than they are” (D #561). He testifies that joy is “deeper yet than agony,” for “woe implores: Go! / but all joy wants eternity” (TSZ pg. 340). He calls for us to “share not suffering but joy” (GS #338) and to “harken to all cheerful music” (GS #302), for “life is a well of joy” (TSZ pg. 208). He declares that it is a lack of joy that brings degradation and decay, for the “mother of dissipation is not joy but joylessness” (MM #77). Nietzsche concludes that “man has felt too little joy: that alone, my brothers, is our original sin” (TSZ pg. 200).

How can we reconcile Nietzsche’s exuberant praises of joy with his embrace of suffering? Section 3 will address this apparent paradox. This section will extract some of Nietzsche’s core views of joy: (a) the defining features that characterize happiness, and (b) the methods and processes which produce or prevent joy.

a. The Nature of Happiness

What unites positive affective states is that “happiness…no matter what the sort, confers air, light, and freedom of movement” (D #136), and that it contains an “abundance of feeling and high-spiritedness” (D #439). Happiness for Nietzsche is closely associated with the expression of the will to power: “what is happiness? The feeling that power increases—that a resistance is overcome” (AC #2). Indeed, he states that happiness can be “understood as the liveliest feeling of power” (D #113). There are also two kinds of happiness: “the feeling of power and the feeling of surrender” (D #60). This is similar to the distinction “between the impulse to appropriate and the impulse to submit” (GS #188). The appropriating impulse feels joy in desiring and in transforming things into functions, while the submitting impulse feels joy in being-desired and becoming a function. Often, it is the “people who strive most feverishly for power” who most want to “tumble back into a state of powerlessness,” like mountain climbers who dream of effortlessly rolling back downhill (D #271). Power is an essential but hidden aspect of happiness.

Nietzsche’s conception of joy is antithetical to his era’s moderate ideas of ‘cheerfulness,’ ‘comfort,’ and ‘happiness.’ Zarathustra calls this pitiful, polluted, and stale conception of happiness “wretched contentment” (TSZ pg. 125). This mass-produced kind of pleasure only hinders the achievement of true joy. The rabble poisons life’s well of joy, and “when they called their dirty dreams ‘pleasure,’ they poisoned the language too” (TSZ pg. 208). The Last Man is the symbol of a self-satisfied and stable society that has given up on any ideal beyond wretched contentment, and that is “increasingly suspicious of all joy” (GS #259). This society teaches its members to live by the “ticktock of a small happiness” and to develop only those virtues that “get along with contentment” (TSZ pg. 281). The crowd embraces the Last Man, who preaches of acceptable levels of pleasure, rather than the ideal of the overman and great health. However, for Nietzsche, anything like this mild Epicurean satisfaction “is out of the question. Only Dionysian joy is sufficient” (WP #1029). Nice, pleasurable feelings are not enough, for “happiness ought to justify existence itself” (TSZ pg. 125). Joy, like suffering, must be transfigured into meaning.

Nietzsche rebukes the contemporary cheerleaders of wretched contentment, the 19th-century Last Men, for they

do not even perceive the sufferings and monsters that as thinkers they pretend to perceive and fight, and their cheerfulness provokes displeasure simply because it deceives, for it seeks to seduce one into believing that a victory has been won. For basically there is cheerfulness only where there is victory.

(SE §2)

This mediocre happiness will only depress and torment the insightful thinkers who recognize that it is founded upon a lie. The Last Men are not concerned with the long-term joy of humanity, and instead “want to cheat it out of its future for the sake of a painless, comfortable present” (HH #434). Nietzsche even argues that “the primal suffering of modern culture” is a result of the degeneration of “authentic art” into mere “superficial entertainment” (BT #19). Those with a more “delicate taste for joy” see this kind of “crude, musty, brown pleasure” as repulsive (GS Preface #4). Genuine cheerfulness arises not from deceptions but from a hard-fought victory over a difficult problem confronted honestly. While the weak consume opiate-like pleasures to numb and console themselves, the stronger spirits attempt to overcome challenges worthy of jubilation and actually build a life worthy of joy.

b. The Joyful Science

How can genuine joy be achieved? Unfortunately, there are no foolproof methods. Just as no medicine can cure all patients, no philosophy can guarantee happiness. Whether a philosophy produces happiness is no argument for or against it. As hunting for joy-guaranteeing wisdom is futile, “may each of us be fortunate enough to discover that philosophy of life which enables him to realize his greatest measure of happiness” (D #345). Universal laws cannot lead the individual to happiness, because each person’s happiness “springs from one’s unknown laws,” and “external precepts can only hinder and check it” (D #108). Forcing all people to abide by a single law to achieve happiness is as irrational as a tyrannical individual stamping his idiosyncratic, narrow, and personal way of suffering “as an obligatory law” upon all others (GS #370).

Despite this reality, one of humankind’s great errors is the belief that happiness can come from passive submission to prescribed rules or ideals. The classic moral refrain is “do this and that, refrain from this and that—then you will be happy!” (TI §6 #2). Nietzsche rejects this formulation. In his view, virtue does not cause happiness; happiness causes virtue. In reality, “a well-balanced human being, a ‘happy one,’ must perform certain actions and shrink instinctively from other actions,” and this virtue is a consequence of his happiness (TI §6 #2). Morality is also not the way to happiness. Indeed, morality has “opened up such abundant sources of displeasure” that we can conclude it is a wellspring of more profound misery and not a source of joy (D #106). Whenever following moral precepts causes “unhappiness and misery to set in instead of the vouchsafed happiness,” the moralists will claim that the person overlooked some rule or practice (D #21). The idea that those who disobey morality cannot experience happiness is absurd, for “evil people have a hundred types of happiness about which the virtuous have no clue” (D #468). Subscribing to a set of moral norms is no way to achieve joy.

Additionally, individuals who are stuck in the ‘it was,’ constantly tormented by the past, cannot experience joy. Happiness relies on limited horizons, restricting one’s view to the present and forgetting the past:

Anyone who cannot forget the past entirely and set himself down on the threshold of the moment, anyone who cannot stand, without dizziness or fear, on one single point like a victory goddess, will never know what happiness is; worse, he will never do anything that makes others happy.

(HL §1)

Without the ability to forget, living is impossible. While stronger natures may be able to incorporate more of the past without being stifled, every person has necessary limits. Without these limits the past can “become the gravedigger of the present” (HL §4). Individuals seeking happiness must give up their “profound insight,” their over-satiated sagacity and exhaustive knowledge of their own past, in exchange for the “divine joy of the creative and helpful person” (HL §4). Furthermore, the will must become its own “liberator and joy-bringer” by embracing past events, converting all “‘it was’ into a ‘thus I willed it’” in a demonstration of amor fati (TSZ pg. 253). Forgetting is vital for joy and creation.

Happiness requires limits and fixed horizons – created by the ability to forget.

Nietzsche also emphasizes the hedonic paradox, which states that pursuing happiness directly will only reduce happiness. At the fountain of pleasure, “often you empty the cup again by wanting to fill it. And I must still learn to approach you more modestly: all-too-violently my heart still flows toward you” (TSZ pg. 210). Seeking fulfillment of pleasures will empty you of them. After all, “joy is only a symptom of the feeling of attained power…one does not strive for joy…joy accompanies; joy does not move” (WP #688). This paradox has also been validated by modern empirical research.[17] Pursuing joy directly is ineffectual. This may be why Zarathustra declares “am I concerned with my happiness? I am concerned with my work!” (TSZ pg. 258) He implores his listeners that “one shall not wish to enjoy,” for enjoyment is a bashful thing that does not want to be sought — it would be better to seek out suffering! (TSZ pg. 311) This also suggests that simple hedonists have a deficient understanding of human psychology: seeking out pleasure will only reduce it, and often pursuing pain is more beneficial.

Furthermore, just as suffering provides epistemic access to some knowledge, some truths are only available during immense joy. The primordial unity (das Ur-Eine) is experienced through a form of Dionysian ecstasy, rapturous enthrallment or intoxication (Rausch). This Dionysian experience is characterized by a “mystical, jubilant shout” (BT §16), filled with “exuberant fertility” (BT §17) and an “immeasurable, primordial delight in existence” (BT §17). The Dionysian destroys individuation “so that the mere shreds of it flutter before the mysterious primordial unity” (BT §1). Ecstasy is required to apprehend the primordial unity, and the feeling of oneness with all of nature generates immense joy. The connection between joy and knowledge runs deep. This may be why philosophers from Plato and Aristotle to Descartes and Spinoza agreed that seeking knowledge “constitutes the highest happiness” for humans (D #500). However, “there is no preestablished harmony between the furthering of truth and the well-being of humanity” (HH #517), and knowledge or truth do not necessarily generate happiness.

Ultimately the search for joy amounts to a search for personal meaning and a way to express one’s will to power. The individual need not ask why the “world” or “humanity” exists, or even why she personally exists. Instead, the individual must “try to justify the meaning of your existence a posteriori, as it were, by setting yourself a purpose…a lofty and noble ‘reason why’” (HL §9).[18] Each individual must cross the stream of life alone and cannot be simply carried by another. Nietzsche urges the young soul to look back on the things they have truly loved, that have dominated their soul while “simultaneously making it happy,” for this series of revered objects can reveal the “fundamental law of your authentic self” (SE §1). By throwing all of her abilities and powers in the direction of this life-path, the individual can reach the highest joys possible for her.

3. The Paradoxes

a. Joy and Suffering

There is an apparent conceptual tension between Nietzsche’s defense of the discipline of intense suffering and his praise of joy. However, this paradox dissolves when one stops seeing pain and pleasure as antitheses. The “breadth of space between highest happiness and deepest despair has been established only with the aid of imaginary things” (D #7). We may also overestimate this distance between suffering and joy because language exaggerates the gap. We have words primarily for superlative, extreme states, while “the milder middle degrees” are left unnamed (D #433). As we cannot apply labels to the myriad emotional states between suffering and joy, we are unable to conceptualize a continuum between the two extremes. Once again, the human obsession with dichotomous thinking prevents us from seeing the complexity of spectrums and interconnected networks.

The extreme hedonic states are not opposites. Indeed, pleasure must always include pain and may itself be the overcoming of pain: “one could describe pleasure in general as a rhythm of little unpleasurable stimuli” (WP #697). Nietzsche conceptualizes happiness as a kind of overcoming, and overcoming requires resistance, which is experienced as suffering. This means that happiness is necessarily connected to suffering. As such, Nietzsche felt that the most sublime “happiness could be invented only by a man who was suffering continually” (GS #45). As he wonders,

What if pleasure and displeasure were so tied together that whoever wanted to have as much as possible of one must also have as much as possible of the other — that whoever wanted to learn to jubilate up to the heavens would also have to be prepared for depression unto death?

(GS #12)

We must choose either “as little displeasure as possible” or “as much displeasure as possible as the price for the growth of an abundance of subtle pleasures and joys that have rarely been relished yet” (GS #4). In contrast, the “comfortable and benevolent” Last Men know nothing of human happiness, for they do not understand that “happiness and unhappiness are sisters and even twins that either grow up together or, as in your case, remain small together” (GS #338). Suffering is essential to experience the height of joy. The two emotional poles cannot be separated from each other; they are two aspects of the same process.

However, Nietzsche does not claim that happiness justifies suffering. It is not a matter of a simple felicific calculus, where the positive valence in the world outweighs the negative valence. He rejects this utilitarian summation. It is not that the happiness vindicates the suffering, but rather that humans create joy despite the suffering:

Right beside the sorrow of the world and often upon its volcanic ground, human beings have laid out their little gardens of happiness…everywhere they will find some happiness sprouting beside the misfortune – and indeed, the more happiness, the more volcanic the ground was – only it would be ridiculous to say that the suffering itself could be justified by this happiness.

(HH #591)

This oft-overlooked passage demonstrates that Nietzsche does not merely think our sufferings are ‘justified’ by some happiness, but that we create happiness in response to suffering: “Perhaps I know best why man alone laughs: he alone suffers so deeply that he had to invent laughter” (WP #91). In plainer terms, “the sorrow in the world has caused human beings to suck a sort of happiness from it.”[19] It is not that Nietzsche fails to see the unjustifiable badness of some suffering — like Schopenhauer, he has a devastating understanding of the sufferings of the world. But he also sees the necessity of creating joy and meaning despite the anguish.

Finally, the eternal recurrence transmutes the eternal return of suffering into something worth joyfully embracing. Nietzsche’s eternal recurrence is “a formula for the highest affirmation, born of fullness, of overfullness, a Yes-saying without reservation, even to suffering,” and it represents the “ultimate, most joyous, most wantonly extravagant Yes to life” (EH pg. 272). The affirmer of life doesn’t desire the eternal recurrence because she wants suffering, but because she does not simply weigh pain against pleasure to determine life’s value. This kind of calculus is misguided because life is not a series of discrete events. Rather, all events are deeply interconnected by complex causal chains. (See The Calm and the Cataract for the connections between eternal recurrence and the Buddhist concept of interbeing). In affirming any single event, we affirm the whole. If each “individual” thing is connected to all other things, then when you say yes to one moment you say yes to all moments. If “all things are chained and entwined together,”[20] then we affirm the entire chain when we affirm a single link. If we say yes to one moment of joy, then we also say yes to all the suffering intertwined with this moment.

Every moment is wired together, connected by the spiderweb of the universe.

b. Suffering Abolition

The question is not just what Nietzsche means to us, but what we would mean to him, how he might evaluate our contemporary situation, “how our epoch would appear to his thought.”[21] To answer this question, this section brings Nietzsche into conversation with the modern transhumanist philosopher David Pearce, who upholds The Hedonistic Imperative: to “abolish suffering throughout the living world” through technological means like genetic engineering.[22] While Nietzsche’s critiques of hedonism remain relevant and compelling, his thought may be surprisingly adaptable to this kind of transhumanist project.

After all, Nietzsche’s philosophical project is motivated by his desire “to take away from human existence some of its heartbreaking and cruel character.”[23] This suggests that Nietzsche himself is engaged in the suffering abolition project. Nietzsche may “still be in the business of abolishing precisely the helplessness, the interpretive vacuum, that gives suffering its sting.”[24] After all, if meaninglessness is constitutive of suffering, then suffering interpreted well is no longer suffering. Many philosophers define suffering as an unpleasant experience S conjoined with the desire that S not be occurring.[25] By increasing the meaningfulness and value of suffering, Nietzsche’s work can reduce our desire to avoid suffering, making it a positive good. Suffering on its own is helpless and does not inevitably create growth. However, we can give it a value by making it constitutive of growth, creativity, and positive transformation. If we will our suffering, we are no longer helpless – it becomes an ‘I willed it,’ not a mere ‘it was’ out of our control. As Nietzsche writes about his trials, “I have never suffered from all this, for what is necessary does not hurt me” (EH pg. 332). This is abolition in a radically different sense than the simple elimination of suffering, the comfort-making that the hedonists of his time advocated. Nietzsche’s suffering-abolition focuses on filling the interpretative vacuum of suffering.

Transhumanists may be skeptical that we can really conjure suffering out of existence merely by coloring it with a kind of life-affirming interpretation. They may doubt Nietzsche’s exorbitant claim that he never suffered from necessary things. Furthermore, transhumanism can critique Nietzsche as stuck in his time. The technology to overcome suffering, end aging, or re-engineer human biology did not exist in the 1800s. Therefore, Nietzsche affirmed suffering as it existed because his best available option was to make our inevitable sufferings meaningful and beneficial. The transhumanist claims that we now have the technological ability to reform suffering dramatically or eliminate it. Maybe it is only a contingent fact that pain and pleasure are tied together, and not a necessary principle—and maybe this knot can be untied through technologies like neurobiological and genetic engineering. Indeed, the Qualia Research Institute is developing an understanding of the fundamental nature of pain and pleasure to lay the foundation for super-happiness. Nietzsche agrees that evolution “does not have happiness in view,” but only evolution itself (D #108). Why should we accept the haphazard consequences of evolution instead of guiding it towards joy? Perhaps life is suffering, but it does not have to be.

However, section 2 demonstrates that a core Nietzschean aim is to bring about immense joy, well-being, and great health for humanity. Ultimately, if happiness and suffering come into conflict, Nietzsche’s priority may be joy: “I may have done this and that for sufferers; but always I seemed to have done better when I learned to feel better joys” (TSZ pg. 200). Nietzsche also argues that “man is something that must be overcome,” and man is just “a bridge and no end,” a bridge that may be “the way to new dawns” (TSZ pg. 310). This, along with Nietzsche’s revulsion at the Last Man who is complacent in humanity’s current level of contentment, suggests that he is not satisfied with merely human happiness and instead strives for superhuman joy. This seems deeply compatible with Pearce’s supplication that we use all available technologies to create “information-sensitive gradients of superhuman bliss.”[26] Furthermore, section 1c shows that Nietzsche does not explicitly defend pointless, destructive suffering, but only the kind of transformative suffering that enhances extraordinary individuals. If Nietzsche saw modern innovations, he might encourage some kinds of transhumanism that reduce our gratuitous and futile suffering, making humans stronger and more joyous.

Despite this essential agreement about some core ideas, Nietzsche’s critiques of the hedonistic imperative would be deep and numerous; a few can be addressed here. First, the transhumanist proposal fails to evaluate all values. It may reject the value of the “natural,” but it does not question most other values and is primarily a continuation of humanist morality. Nietzsche would not accede to this form of simple, egalitarian, utilitarian transhumanism. Second, Nietzsche would likely question the kind of happiness that transhumanism advocates. Will it be the numbing, anesthetic, decadent contentment of the Last Man, who blithely believes “we have invented happiness”? (TSZ pg. 129) For Nietzsche argues this kind of happiness will only throw humanity into a rut it can no longer escape, making our souls “poor and domesticated” so we no longer have enough chaos to “give birth to a dancing star” (TSZ pg. 129). Nietzsche would reject this type of transhumanism, for it uses “the holy pretext of ‘improving’ mankind, as the ruse for sucking the blood of life itself” (EH pg. 342). While not all forms of transhumanism are vulnerable to these critiques, Nietzsche would likely urge caution so we do not stumble into the trap of the Last Man.

Finally, transhumanism may simply be a form of afterworldliness. Some long for an afterworld, a dreamed-of place where suffering will be miraculously relieved, as a desperate flight away from the painful human world we live in. These afterworlds are created as phantasmic compensations for the real suffering of the world: “It was suffering and incapacity that created all afterworlds” (TSZ pg. 143). The inability to deal with or affirm the existing world leads the weary sufferer to abandon this world and dream of another, higher world, a “dehumanized inhuman world which is a heavenly nothing” (TSZ pg. 144). These afterworlds are rooted in a desire to lie about reality, a desire born of suffering from reality. But placing supreme value on this afterworld devalues earthly life and makes it meaningless, producing further endless suffering.

Transhumanists may respond that they are not afterworldly, for their proposals are not ideal dreams but can actually be implemented through concrete human actions. Transhumanism may even imbue life with more meaning, for it strives for the kind of brilliant, hopeful future that makes all current efforts immensely important. In consonance with this idea, Zarathustra urged his students to fix all that is mere “dreadful accident” in man, to “work on the future and to redeem with their creation all that has been” (TSZ pg. 310). However, he cautions against manipulative idealistic visions, and condemns the idea of immortality as a “big lie.”[27] In the end, some kinds of transhumanism may be sickly, sweet, dishonest, and dripping with impossible idealism. But a more realistic transhumanism that does not passively dream of contentment in some afterworld may be more congruous with Nietzsche’s aspirations for the future.

Transhumanism should avoid being ensnared in dreams of some perfect, unblemished, utopian afterworld.

Ultimately, Nietzsche’s ideas may be compatible with some kinds of transhumanist suffering abolition. But he cautions against the dream of an eventual technological utopia based on the ideal of the cessation of suffering. The plausibility of this utopia is a difficult empirical question; if it is even possible, suffering abolition is tenuous and distant. A significant part of Nietzsche’s rejection of suffering abolition may rest on its implausibility. In the meantime, the dream of the end of suffering can become a passive afterworldliness, and the ideals of the afterworld can vilify the existing world. After all, the transhumanist abolitionists do not fill the interpretative vacuum—they just eliminate the actual suffering. In the process of abolishing suffering, we might undermine our interpretative ability to justify life despite its suffering, and thereby fall into nihilism. Transhumanism cannot instantly abolish suffering, and while we wait, we must make suffering meaningful.

In response, the transhumanist may argue that if we justify suffering too much, we might excessively affirm our existing condition. If we cling to the way we happen to suffer currently, we may be rendered unable to become more than human. Our strict commitment to Nietzschean suffering-affirmation could condemn us to the condition of the Last Man, preventing radical new futures and thwarting the overcoming of man. Making suffering meaningful can function to defend suffering and reduce motivation to prevent extreme, pointless, irredeemable suffering. The solution may be a synthesis: Insofar as suffering exists, we should sublimate it and make our experience of it more positive and growth-producing. But we should also strive to abolish extreme, pointless suffering wherever possible.

Conclusion

There are deep conceptual tensions in Nietzsche’s work: his defense of suffering contrasts with his accolades for joy, and he critiques the abolition of suffering while engaged in a kind of suffering abolition himself. This paper has attempted to explore and resolve these tensions. Just as Nietzsche withdraws his faith in morality “out of morality” (D #4), he withdraws his support for endless joy out of a desire for joy. Happiness alone is not enough, for suffering and joy are not antithetical but symbiotic. Both must be affirmed and sought after together. While suffering abolition is a dangerous proposition, Nietzsche may support some forms of abolition that focus on our pointless suffering. Regardless of the correct answer, probing these paradoxes reveals profound complexities in Nietzsche’s work—and in the human condition.

Bibliography

Beauvoir, Simone de. The Second Sex. New York, NY: Vintage Books, 1949. Trans. by Borde and Chevallier.

Bain, David, Michael Brady, and Jennifer Corns, eds. Philosophy of Suffering: Metaphysics, Value, and Normativity. Routledge, 2019.

Chan, K. Jacky, Marta Y. Young, and Noor Sharif. “Well-being after trauma: A review of posttraumatic growth among refugees.” Canadian Psychology/psychologie canadienne 57, no. 4 (2016): 291.

Carel, Havi, and Ian James Kidd. “8 Suffering as transformative experience.” Philosophy of Suffering: Metaphysics, Value, and Normativity (2019): 165.

Davis, C. G., Nolen-Hoeksema, S., & Larson, J. (1998). Making sense of loss and benefiting from the experience: Two construals of meaning. Journal of Personality and Social Psychology, 75(2), 561–574.

Hanh, Thich Nhất. The Heart of Understanding: Commentaries on the Prajñaparamita Heart Sutra. Berkeley, California: Parallax Press, 1998. Print.

Hauskeller, Michael. “Nietzsche, the Overhuman and the posthuman: A reply to Stefan Sorgner.” Journal of Evolution and Technology 21, no. 1 (2010): 5-8.

Higgins, Kathleen Marie. Nietzsche’s Zarathustra. Lexington Books, 2010.

Kroo, A., & Nagy, H. (2011). “Posttraumatic Growth Among Traumatized Somali Refugees in Hungary.” Journal of Loss and Trauma, 16(5), 440–458.

Sartre, Jean-Paul. Existentialism is a Humanism. Yale University Press, 2007.

Honderich, Ted, ed. The Oxford companion to philosophy. OUP Oxford, 2005.

Elderton, Anna, Alexis Berry, and Carmen Chan. “A systematic review of posttraumatic growth in survivors of interpersonal violence in adulthood.” Trauma, Violence, & Abuse 18, no. 2 (2017): 223-236.

Fosse, Magdalena J. Posttraumatic growth: The transformative potential of cancer. Massachusetts School of Professional Psychology, 2005.

Levinas, Emmanuel. “Useless Suffering.” The Provocation of Levinas (2002): 168-179.

Medina, José. “Varieties of Hermeneutical Injustice.” In The Routledge Handbook of Epistemic Injustice. Routledge, 2017.

Meyerson, David A., Kathryn E. Grant, Jocelyn Smith Carter, and Ryan P. Kilmer. “Posttraumatic growth among children and adolescents: A systematic review.” Clinical psychology review 31, no. 6 (2011): 949-964.

May, Simon. “Why Nietzsche is still in the morality game.” Cambridge University Press (2011).

Nietzsche, Friedrich. “Beyond Good and Evil.” Trans. Walter Kaufmann. Basic writings of Nietzsche (1966).

Nietzsche, Friedrich. “On the Genealogy of Morals and Ecce Homo, trans. Walter Kaufmann.” J. Hollingdale. New York: Vintage Books (1967).

Nietzsche, Friedrich Wilhelm. Unfashionable observations. Vol. 2. Stanford University Press, 1998.

Nietzsche, Friedrich. “Daybreak: Thoughts on the prejudices of morality.” Cambridge University Press (1997).

Nietzsche, Friedrich Wilhelm. The Twilight of the Idols; or, How to Philosophize with the Hammer. The Antichrist. Good Press, 2019.

Nietzsche, Friedrich Wilhelm. “Human, All Too Human, I.” Stanford University Press (1997).

Nietzsche, Friedrich Wilhelm. The antichrist. Trans. Walter Kaufmann. Knopf, 1924.

Park, Crystal L., Donald Edmondson, Juliane R. Fenster, and Thomas O. Blank. “Meaning making and psychological adjustment following cancer: the mediating roles of growth, life meaning, and restored just-world beliefs.” Journal of consulting and clinical psychology 76, no. 5 (2008): 863.

Paul, Laurie Ann. Transformative experience. OUP Oxford, 2014.

Schopenhauer, Arthur. The Essays of Arthur Schopenhauer; Studies in Pessimism. Good Press, 2019.

“The Imperative to Abolish Suffering. David Pearce Interviewed by Sentience Research (Dec. 2019).” 2020. Hedweb.Com. https://www.hedweb.com/hedethic/sentience-interview.html.

Appendix

1. The Birth of Suffering

Nietzsche also tells a story about the origins of suffering and its value. Under intense conditions, prehistorical humans developed the view that “voluntary suffering, self-chosen torture, is meaningful and valuable” (D #18). Too much well-being invited mistrust, while hard suffering encouraged confidence. The community’s moral exemplars were those who had the “virtue of the most frequent suffering” (D #18). These individuals needed voluntary suffering, both to inspire belief and to believe in themselves. The practice of pain was a demonstration of overflowing strength and was viewed as a festive spectacle for the sacrifice-loving gods. Nietzsche realizes that we have not yet “freed ourselves completely from such a logic of feeling” (D #18). Even now, every step towards free thought and toward shaping one’s life has to be paid for with spiritual and bodily suffering. Prehistorical eras forged humankind’s character, and this character has not changed since. These eras saw suffering as a virtue, and this valuation remains a human instinct that has only been suppressed through civil society.

“Enclosed within the walls of society,” early humans felt that “suddenly all their instincts were disvalued” (GM §2 #16). They were unable to cope with even the easiest challenges in this new world. Civilization undermined the trustworthy instinctual guides that had once provided strength and joy. As he could not trust instincts that were only well-adapted to wilderness, man had to rely on his “most fallible organ,” the conscious mind (GM §2 #16). But his old instincts still needed expression. Thus, they were turned inward. Man’s will to power, hostility, cruelty, joy in attacking, and drive to adventure were directed against himself, creating the “bad conscience” (GM §2 #16). This introduced a new appalling plague: “man’s suffering of man, of himself” (GM §2 #16). Of course, some may question whether this narrative is anthropologically or historically plausible. But even taken as a fable, it reflects important ideas about the nature of suffering.

2. Nietzsche’s meaning-making

Even if he had never touched pen to paper, Nietzsche’s ability to affirm his life in the face of immense pain is a testament to his meaning-making ability. Nietzsche exemplifies the unity of suffering and joy in himself, for he felt that “my health is disgustingly rich in pain,” and despite the near-constant affliction he kept “contemplating life with joy.”[28] In Ecce Homo, he expresses gratitude for his sickness, because it allowed him to develop the skill of “looking from the perspective of the sick toward healthier concepts and values” (EH pg. 233). He attributes his capacity to instigate the revaluation of all values to this ability to reverse perspectives. Nietzsche writes that if “my sickness had not forced me to see reason,” he might have abandoned his great task and become a mere pathetic specialist (EH pg. 239). Both his and Wagner’s incredible creative gifts were enabled only by their capability to endure profound suffering (EH pg. 250). His existence was filled with physical and mental suffering, isolation, excruciating trials, and unknown efforts – and yet he has unabashed love for his fate, and does not give in to yearning for something different, much less some ideal afterworld.

We continue to live only through illusions: the “pleasure in understanding,” “art’s seductive veil of beauty,” or some “metaphysical solace” (BT §18). What matters is not the truth of these artistic illusions but their life-affirming nature.

3. Ennui-stricken youths

Sometimes ennui-stricken youths have a desire for suffering because it gives them a motive “for doing something” (GS #56). Their imaginations invent monsters “so that they may afterwards be able to fight with a monster” (GS #56). The problem with these “distress-seekers” is that they cannot create distress internally to motivate action, but instead need some external menace – “they always need others!” (GS #56). This desperate need for troubles from outside is ultimately a form of “the nihilistic question ‘for what?’ which is rooted in the old habit of supposing that the goal must be put up, given, demanded from outside—by some superhuman authority.”[29]

4. Buddhism & Suffering

Nietzsche praises Buddhism over Christianity, as it “is a hundred times more austere, more honest, more objective. It no longer has to justify its pains…it simply says, as it simply thinks, ‘I suffer.’”[30] Buddhism does not create a glorious, moralizing, or anesthetic story for suffering. It simply describes suffering without condemning it as the result of sin. It sometimes even affirms suffering in a Nietzschean style. As Zen Buddhist thinker Thich Nhat Hanh writes, “Touch your suffering. Face it directly, and your joy will become deeper.”[31] In Nietzsche’s stated utopia, the “troubles of life will be meted out to those who suffer least from them,” so that those “who are most sensitive to the highest and most sublimated kinds of suffering” will be freed from unnecessary suffering (HH #462).

5. The Jews & Suffering

The Jews are exemplars of this discipline of suffering, as they have converted crisis and oppression into spiritual strength, cultural depth, and moral, ethical, and aesthetic masterworks. As a result of terrible centuries of education, “the psychological and spiritual resources of the Jews today are extraordinary,” and every Jew can look up to exemplars who exhibit courage, endurance, and heroism in the face of the worst situations (D #205). Their suffering has only strengthened their virtue and their conviction in a higher calling.

6. Nietzsche & Levinas

However, Nietzsche’s proposals are not the only ways to make meaning from suffering. Levinas argues that pointless pain can only be made meaningful when it becomes a suffering for the suffering of someone else.[32] Suffering becomes meaningful when the individual recognizes the call to help a fellow-sufferer gratuitously, without any concern for reciprocity. Nietzsche may see this view as a “debilitation and cancellation of the individual” for the sake of the herd, “adapting the individual to fit the needs of the throng” (D #132). Nietzsche’s nuanced and numerous critiques of pity cannot be enumerated here. But other interpretations of suffering, like Levinas’ view, may have other aims and values. It is not clear that Nietzsche’s responses to suffering are the ideal responses for all individuals – and he would likely not defend this claim himself.

7. Responses to Parfit

Derek Parfit argues that “when Nietzsche tried to believe that suffering is good, so that his own suffering would be easier to bear, Nietzsche’s judgment was distorted by self-interest.”[33] However, Nietzsche does not simply assert that suffering is good. As discussed in 1b, Nietzsche is not clearly committed to defending all types of suffering, but only the kind of suffering that promotes meaning, growth, or positive transformation for the kinds of individuals he is concerned with. Thus, Parfit begins with an inaccurate premise.

Furthermore, Nietzsche recognizes that at its core, life is suffering, and the harm of suffering primarily stems from its meaninglessness. He then claims that an individual (and perhaps only some individuals) can imbue their pointless suffering with meaning to affirm existence and make life worth living. Nietzsche would likely admit that he has a vested interest in affirming and making life meaningful even if it does not have an inherent meaning. He does not aim to be an indifferent, unbiased spectator who investigates suffering from a neutral perspective. In fact, he recognizes that for this kind of indifferent spectator, the wisdom of Silenus would be overwhelming and life would seem to be not worth living. In the end, Nietzsche does not claim that he is an unbiased evaluator of life, but instead acts as a deeply interested creator of values seeking to redeem life. Therefore, Parfit’s claim of bias is largely insignificant. However, even if his example of Nietzsche does not hold, he does accurately diagnose a cognitive bias towards overestimating suffering’s value because we need it to be valuable in order to live. Nietzsche could be construed as doubling down on this bias, rendering suffering as supremely meaningful to promote the affirmation of life.

8. Responses to Vinding

Magnus Vinding argues that while meaning and purpose can help keep suffering at bay and make it more bearable, their “ability to reduce suffering should not lead us to consider them positive goods that can justify the creation of more suffering.”[34] However, first, Nietzsche does not accept an overriding imperative to eliminate suffering. Instead, he sees some kinds of suffering as worth experiencing, and focuses on values far beyond pain and pleasure — perhaps great health, the affirmation of life, or the development of the overman.

Second, he may argue that a constitutive aim of any value-system is to imbue life with meaning, and thus meaning-making is not merely a side pursuit. Having some kind of meaning or reason to live is nearly a prerequisite for any human action, which suggests that finding meaning is itself an intrinsic positive good. Perhaps in Nietzsche’s view, the value of meaning or purpose does not reduce to the amount of suffering they prevent. They are intrinsic positive goods beyond just suffering-prevention. Why? Well, the simple argument is this:

  • P1. All thought, action, and ethics require living beings to carry them out. In other words, an action cannot be performed without a being to perform it.
  • P2. Living beings, or at least humans, require some kind of meaning or purpose to remain alive.
  • C1. Without meaning or purpose, thought, action, and ethics cannot be carried out. Thus meaning or purpose are necessary prerequisites to all thought, action, and ethics. Meaning/purpose are therefore ‘ethical priors’ in that without them, one cannot have an ethics.
  • C2. Meaning/purpose are prerequisites for all other goods. It would therefore be logically contradictory for ethics to deny that meaning/purpose are goods.

One could contest P2, arguing that individuals can live without a purpose. This may be the case. However, the premise can be strengthened with some caveats: a person who (a) consciously lacks a purpose, (b) has both the ability and the mental capacity to end their own life, and (c) is in a circumstance that produces the desire to do so (e.g. suffering) will often end their life. Arguably, most humans live with an implicit purpose of some kind, or are unaware of their lack of a purpose (cf. Sartre’s idea of bad faith). Still, P2 certainly remains open to critique.

If the argument above holds, it may be justified in principle to produce some kinds of suffering to develop meaning. This is especially true if meaning is an inherent positive good. However, (a) this will likely only be justified if an individual is producing suffering for themselves voluntarily, and (b) this does not include extreme suffering – especially because under my understanding of extreme suffering, it is meaningless and destructive of purpose almost by definition. Nietzsche’s views are compatible with both (a) and (b).

Finally, insofar as meaninglessness is an essential feature of suffering, adding purpose will always reduce suffering. This makes meaning and purpose such indispensable instrumental goods that they can be functionally treated as inherent goods.

“Have I not changed? Has not bliss come to me as a storm? My happiness is foolish and will say foolish things: it is still young, so be patient with it. I am wounded by my happiness: let all who suffer be my physicians.”[35]

“Like a cry and a shout of joy I want to sweep over wide seas, till I find the blessed isles where my friends are dwelling. And my enemies among them! How I now love all to whom I may speak! My enemies too are part of my bliss.”[36]

The eternal recurrence means that “every pain and every joy and every thought and sigh and everything unutterably small or great in your life will have to return to you.”[37] As Zarathustra asks, “are not all things knotted together so firmly that this moment draws after it all that is to come?”[38] We cannot separate pain and pleasure from each other because they are two aspects of the same process.

Even though “in all ages barbarians were happier,” we fear the return to barbarism because we value knowledge so much that we cannot “value happiness without knowledge.”[39]

A simple hedonic calculus would squander these exemplary individuals. These individuals see beyond immediate consequences and focus on “more distant aims,” even at the “expense of the suffering of others.”[40] For example, they seek knowledge even if this freethinking will make others feel doubt or distress.

The Atonement may assuage his suffering temporarily by making him feel he will not be punished. But ultimately, it will only increase a key cause of suffering: guilt. After all, mankind was already infinitely guilty, and the Atonement makes us also guilty for the death of the son of God.

To buy the sublime happiness of the Greeks, “the most precious shell that the waves of existence have ever yet washed on the shore,” one must be capable of immense suffering.[41]

These afterworlds need not be religious – in Nietzsche’s lifetime, political ideologies like nationalism would also dream up utopian ideals of collective redemption.

Furthermore, vice does not destroy or decay a people, but destruction and decay produce vice as a symptom of this “degeneration of instinct.”[42]

Footnotes

  1. Psychological Observations, pg. 20. In Schopenhauer, Arthur. The Essays of Arthur Schopenhauer; Studies in Pessimism. Good Press, 2019.
  2. Schopenhauer, Psychological Observations, pg. 25.
  3. Ibid, pg. 26.
  4. In Buddhism, the first noble truth is that ‘life is suffering’ or that dukkha (suffering) is an inherent feature of life in samsara (the cycle of earthly existence).
  5. EH, ‘The Birth of Tragedy,’ §34. Pg. 274.
  6. Haidt, Jonathan. The happiness hypothesis: Finding modern truth in ancient wisdom. Basic books, 2006.
  7. Taleb, Nassim Nicholas. Antifragile: Things that gain from disorder. Vol. 3. Random House Incorporated, 2012.
  8. See Kroo and Nagy (2011); Fosse (2005); Chan, Young, and Sharif (2016); Elderton et al (2017); Meyerson et al (2011); Davis et al (1998); Park et al (2008).
  9. Notes to HH, fall 1885-86, in the Stanford translation of HH.
  10. See Paul, Laurie Ann. Transformative experience. OUP Oxford, 2014. Also see Carel & Kidd, “Suffering as Transformative Experience,” in Bain, David, Michael Brady, and Jennifer Corns, eds. Philosophy of Suffering: Metaphysics, Value, and Normativity. Routledge, 2019.
  11. Bain, David, Michael Brady, and Jennifer Corns, eds. Philosophy of Suffering: Metaphysics, Value, and Normativity. Routledge, 2019.
  12. “Genetic Fallacy.” In Honderich, Ted, ed. The Oxford companion to philosophy. OUP Oxford, 2005.
  13. Hume, David. A Treatise on Human Nature: 1. Longmans, 1874. Pg. 335.
  14. Coronado, Amena. “Suffering & The Value of Life.” PhD diss., UC Santa Cruz, 2016. Pg. vi.
  15. Medina, José. “Varieties of Hermeneutical Injustice.” In The Routledge Handbook of Epistemic Injustice. Routledge, 2017. Pg. 41.
  16. Levinas, pg. 157.
  17. Gleibs, Ilka H., Thomas A. Morton, Anna Rabinovich, S. Alexander Haslam, and John F. Helliwell. “Unpacking the hedonic paradox: A dynamic analysis of the relationships between financial capital, social capital and life satisfaction.” British Journal of Social Psychology 52, no. 1 (2013): 25-43.
  18. This proto-existentialist maxim came before Sartre’s statement that “existence precedes essence” in Existentialism is a Humanism, but it conveys a similar idea.
  19. Notes to HH, in the Stanford edition of HH pg. 343.
  20. TSZ, pg. 333.
  21. Zizek, Slavoj. First As Tragedy, Then As Farce. Verso Books, 2009. Pg. 6.
  22. Pearce, David. Hedonistic Imperative. David Pearce., 1995.
  23. Letter to von Stein, as cited in Higgins, Kathleen Marie. Nietzsche’s Zarathustra. Lexington Books, 2010. Pg. 8
  24. May, Simon. “Why Nietzsche is still in the morality game.” Cambridge University Press (2011).
  25. Carel, Havi, and Ian James Kidd. “8 Suffering as transformative experience.” Philosophy of Suffering: Metaphysics, Value, and Normativity (2019): 165.
  26. “The Imperative to Abolish Suffering. David Pearce Interviewed by Sentience Research (Dec. 2019).” 2020. Hedweb.Com. https://www.hedweb.com/hedethic/sentience-interview.html.
  27. Hauskeller, Michael. “Nietzsche, the Overhuman and the posthuman: A reply to Stefan Sorgner.” Journal of Evolution and Technology 21, no. 1 (2010): 5-8.
  28. Letter of January 22, 1879. In footnote, Portable Nietzsche, pg. 110.
  29. Nietzsche, Friedrich. Sämtliche Briefe: Kritische Studienausgabe. Walter de Gruyter GmbH & Co KG, 2015. 12:9[43]. Pg. 355.
  30. The Antichrist, #23.
  31. Hanh, Thich Nhất. The Heart of Understanding: Commentaries on the Prajñaparamita Heart Sutra. Berkeley, California: Parallax Press, 1998. Print.
  32. Levinas, Emmanuel. “Useless Suffering.” The Provocation of Levinas (2002): 168-179. Pg. 163.
  33. Parfit, Derek. On what matters. Vol. 1. Oxford University Press, 2011. Chapter 126.
  34. Vinding, Magnus. “Suffering-Focused Ethics: Defense and Implications.” Ratio Ethica (2020). Pg. 147.
  35. TSZ, pg. 196.
  36. TSZ, pg. 196.
  37. GS, #341.
  38. TSZ, pg. 270.
  39. Daybreak, Book IV, #429.
  40. Daybreak, Book II, #146. See also D, Book IV, #467: “You will cause a lot of people pain that way.- I know it; and know as well that I will suffer doubly for it, once from compassion with their suffering and then from the revenge they will take on me. Nevertheless, it is no less necessary to act as I am acting.”
  41. GS, #302.
  42. Ibid.
Categories
Essays

How to Steal a Vibe: The Phenomenal Unity of Reality, the Mind-Body Problem, and the Blockchain of Consciousness

Idealism posits that all reality is grounded in mental states: to be is to be perceived. It offers a potential solution to the crippling dilemma of how to explain the relation between consciousness and the physical brain. If everything is mind, there cannot be a mind-body problem. In this essay I will present Yetter-Chappell’s nontheistic idealism and defend it from a series of challenges. Finally, I will respond to the most critical objection, which contests the existence of a unity-of-consciousness relation. I conclude that Yetter-Chappell’s metaphysics is an effective solution to the mind-body problem.

Note — this essay is pretty technical/analytic. If you want to skip to the end for my comments in natural English, and the stuff about stealing vibes and the blockchain of consciousness, feel free (although you will miss some background).

Groundwork

There are two key desiderata of a successful idealism: it must account for (a) how objects are sustained when we no longer perceive them, and (b) the regularity of our perceptions of reality. Our intuitions require these explanations. If we stop observing a tree, and then return to its location later, why do we regularly experience similar impressions of the tree? A nontheistic idealism cannot resort to claiming that God keeps all things in mind, ensuring that the world appears to us consistently. If a theory is able to meet these standards, it is prima facie acceptable, and it can be compared against opposing explanations.

A critical issue for idealism: what explains the continued existence of non-perceived objects? (for example, the “tree falls in a forest and nobody hears it” problem)

In Yetter-Chappell’s view, reality is a “unity of consciousness” (6) where the same mental laws that structure our own experiences also bind together all experiences. For instance, my pencil exists independently of my mind, as it is also part of the phenomenal unity of reality (PUR). The PUR also binds together impressions of the pencil from every possible viewpoint. There are three relations that bind experiences together:

  1. Unity-of-consciousness relation: our myriad perceptions unify into a single conscious experience. I am aware of the music in my headphones and the color of the desk as aspects of one continuous conscious experience, rather than discrete experiences of two different consciousnesses.
  2. Objectual-unity relation: Sensations are combined into objects rather than remaining disjoint. My impressions of roughness and grayness are not separate; they are bound together into the object of a rock.
  3. Spatial-unity relations: Experiences occupy one shared space with a directional structure. For instance, the blue, hat-shaped part of this space is to the right of the silvery, laptop-shaped part.

These three laws bind together reality, not just my experiences. The PUR is an immense tapestry of perceptions woven together according to these laws. This phenomenal world can also behave according to the laws of physics (translated into idealist terms). These laws have a different metaphysical nature — they govern phenomenal systems, not physical ones — but the same functional role.

Perception occurs when a mind overlaps with certain pieces of the PUR. When I see a red pen, the aspects of the pen I am aware of (its redness, its shape) literally become part of my mind. Not all of the features of the pen are visible to me — e.g. I cannot look through it to see how much ink it contains — but I am aware of enough features to recognize it as a pen. The difference between perceptual and non-perceptual states, like hallucinations, is that in non-perceptual states the objects of my perception are “bound up in my unity of consciousness, but not the phenomenal unity that is reality” (8). (See diagram below). And while reality is governed by laws that ensure its regularity, hallucinations may not exhibit this same regularity.

A diagram of Yetter-Chappell’s theory of the phenomenal unity of reality, applied to three different cases

However, I am doubtful this approach allows us to differentiate the real and the imaginary. After all, non-perceptual experiences often display the same regular patterns as perceptual experiences. A dream can sometimes obey physical laws, and hallucinations can seem as coherent as “reality.” Do we simply have to wait until an aberration appears to know a dream is non-perceptual? If so, the skeptic can just claim the parts of a dream that display regularity are perceptual, even if the irregular parts are not. This violates our intuition that dreams are wholly non-perceptual. Furthermore, perceptual experiences often display irregularities — in optical illusions, different pieces of sense data directly contradict. It seems impracticable to distinguish between the PUR and the phenomenal unity of my mind. These are serious problems for Yetter-Chappell’s idealist view of perception.

Advantages

Ultimately, what are the epistemic advantages of Yetter-Chappell’s idealism? First, she argues it fulfills the “neglected epistemic virtue”: the objects we perceive are the truth-makers for our immediate perceptual judgements. This virtue is what makes my perceptual judgements about an object more valid than those of a blindsight patient who lacks sensory awareness but can still make correct judgements about the object.[1] In the PUR, objects of my perception literally constitute part of my mind, and so the objects are the most direct possible truth-makers for my perceptual judgements.

However, this epistemic virtue does not seem like a genuine advantage of Yetter-Chappell’s metaphysics. If her theory is true, our perceptual judgements fulfill the neglected epistemic virtue. But this is an epistemic virtue of our perceptual judgements if the theory is true, not a benefit of the theory itself. This account might motivate us to believe Yetter-Chappell’s metaphysics, but it does not actually corroborate her theory. After all, under the metaphysical assumption that humans are omniscient, all human judgements would be far more valid. But this is not a reason to believe the assumption is true.

Even if the first advantage falls through, the second benefit of nontheistic idealism is that it makes reality fundamentally intelligible. Under materialism, we can only discover the structure of the world (e.g. the geometry of space-time), but not its content. Materialists can only characterize how physical entities like atoms relate to other physical entities; they cannot describe the intrinsic nature of these entities. But our experience presents us with a substantial reality and not just a relational structure. Since idealism explains the content of entities — they are fundamentally experiential — its picture of reality is more comprehensible.

Third, in Yetter-Chappell’s metaphysics, the world is exactly what it appears to be, and we are directly encountering reality. The things we perceive are real — precisely because we perceive them.[2] A critic could argue this is just a reason to want to believe an idealistic metaphysics, and not a support for its veracity. However, our intuitions suggest that our perceptions put us in contact with the real world: our experiences seem to contain real objects. In Yetter-Chappell’s idealism, this intuitive view is exactly the case. The world is what it appears to be. This is an epistemic advantage of her theory to the extent that concurring with intuitions makes a theory more valid.

What makes the butterfly real is not the brain, but the mind; the brain is an abstraction made by the mind.

Objections

Yetter-Chappell addresses two problems for her theory. First, it may create quantitative profligacy. If an object is “myriad sensory impressions” bound by the objectual unity relation (13), something as simple as a pen could have near-infinite aspects — one for each way it can be perceived. This seems more theoretically complex than materialism, where the pen is just an arrangement of particles. Thus parsimony may encourage us to reject this form of idealism. But is a theory where a pen is atoms better than a theory where a pen is myriad sensory impressions? If so, why would the type and quantity of substance matter (no pun intended) for the parsimony of the theory? The answer is not obvious.

Second, the theory may suffer from explanatory disunity. Why does reality happen in a coherent way, regardless of the perspective it is viewed from? For example, if I flip the light switch, my perception of the room becoming dark happens in concert with the perceptions of my nearby friend, the scientist viewing the room from a distant telescope, and even the ants on the ground. In the materialist view, this coherence happens because we are all observing a mind-independent reality governed by physical laws. But if reality is a phenomenal unity that weaves distinct experiences together, will each experiential thread require different laws? If so, how do these laws work together?

How do ants perceive our cities? In this theory of idealism, an ant’s perception also influences the reality of the world, so however ants, pigeons, rats, and maybe even plants perceive the city — it matters.

There are at least two defenses. First, different experiences in the PUR are only as separate as different threads in a tapestry. If a force acts on one thread, it also acts on the tapestry. In the same way, if a change happens in a single phenomenal unity within the PUR, the overarching structure of reality ensures that everything else in the PUR responds appropriately. For instance, when you chop an apple, you don’t need to cut the apple’s sphericalness and its redness separately. These two experiential aspects are bound together into the phenomenal unity of the apple. Since reality is also a phenomenal unity, acting on any single experiential thread will also affect other threads in a structured way according to the laws of the PUR. Second, a messier approach responds that there are lower-order laws that only apply to individual experiences, and higher-order laws that apply to reality as a whole.

While I have interspersed challenges to Yetter-Chappell’s view in my exposition so far, I will address an additional line of criticism: one that challenges her basic rules of consciousness. This approach strikes at the most vulnerable point of her idealist metaphysics, for if it succeeds, it would remove the glue that holds the PUR together. Specifically, multiple neuroscience experiments are possible counterexamples to the unity-of-consciousness relation. Dennett points to change-blindness experiments where subjects are not conscious of clearly visible changes that their eyes are sensing.[3] Nagel cites brain bisection experiments, where one hemisphere is aware of information the other half is unaware of.[4] And blindsight studies indicate that subjects can dissociate different aspects of their mind from each other.[5]

A neuroscientific explanation of the incredible phenomenon of blindsight.

However, this critique misunderstands Yetter-Chappell’s conception of the unity-of-consciousness relation. A better term for her concept may be phenomenal unity.[6] This is the relation experiences have when they are experienced together as components of a single phenomenal state: “you’re aware of these things together as though forming a single conscious experience” (4). I have myriad sensory, emotional, and cognitive experiences, but no matter how numerous or varied these experiences are, they all occur as constituents of a single phenomenal unity. In this conception, not all experience has to be available to the conscious mind, but the experience that is available must be part of a unified experience. Blindsight, change-blindness, and bisection experiments only demonstrate the subjects have limited access or multiple modes of access to their experience, and not that their phenomenal experience is disunified. Furthermore, in the bisection studies, it’s possible the subjects are switching between different streams of consciousness, rather than experiencing two disunified phenomenal states simultaneously. If that is the case, the subject is still experiencing one continuous phenomenal unity.

Is it even logically coherent to say that a phenomenal unity can be “split”?

Additionally, the notion of a split or disunified phenomenal experience is arguably incoherent. Disunified phenomenal states could only be found through introspection or phenomenology, not observation, as phenomenal states are only accessible to the one experiencing them. Therefore, I could only discover disunified phenomenal states by becoming aware of them. But if I am aware that I have separate phenomenal states, then there is clearly a higher phenomenal state which is aware of these sub-states. On the other hand, if I have a second phenomenal state which I am not aware of, then it is just another phenomenal unity separate from my mind.

In conclusion, Yetter-Chappell’s idealist metaphysics achieves the required desiderata by conceptualizing a phenomenal reality that exists beyond any individual mind, bound together by three basic laws of consciousness. This theory solves the mind-body problem by positing that only mind exists. In Ryle’s terms, a mind-matter dichotomy is a “category mistake”[7] — but in the idealist picture, it is matter that is the abstraction, not the mind. The brain is an abstract idea within the phenomenal unity of our minds, just as our minds are constituents of the PUR.

My own position

Spiciness of this paper: 🌶🌶🌶🌶/5

I think the theory is legitimate and the support is strong. I’ve always been attracted to metaphysical idealism because my grasp on reality is paper-thin and I don’t think things are at all what they seem to be. Realism and physicalism seem almost naïve. Is your theory of reality really just based on the fact that you can hit the wall and feel something there? However, I’ve had enough trouble explaining away the apparent regularity and continued existence of reality when my perception of it ceases that I haven’t truly believed idealism until now. I’ve been a metaphysical naturalist for too long. With this paper, idealism seems much more convincing, especially because this version does not posit the additional theoretical burden of a transcendent God — which might be fine, but would make it less accessible and meaningful to non-theists.

Yetter-Chappell’s paper is too short to establish a complete non-theistic idealism, but it’s a promising start. I don’t fully grasp the three relations of consciousness — they’re a bit slippery for me still. Each of them could be a paper in itself, and I’m sure those papers already exist.

I’m intrigued by the possibility of a blockchain of consciousness, in which reality itself is constructed as a decentralized ledger created by interactions between consciousnesses. We essentially negotiate with other minds to create what we call “reality,” solving computationally complex problems in order to construct reality and mine tokens of qualia.

Use the concept of a blockchain, but replace “bitcoin” with “qualia.”

In this sense, the world is created by consciousness similarly to the way a virtual reality’s landscape is rendered. When I encounter another player, or even an NPC, our imaginations interact to construct the world that we both inhabit. And not just my surroundings, but my very appearance, and my identity as a conscious subject, are dependent on this imaginative negotiation. In fact, this seems to be a logical consequence of Yetter-Chappell’s nontheistic idealism. I also exist, in a sense, because the Other exists: a Sartrean and Levinasian idea that rejects the cogito in favor of a more interpersonal grounding for the existence of the self:

My ability to say cogito at all “can be born only in consequence of my appearance for myself as an individual, and this appearance is conditioned by the recognition of the Other” (Sartre, Being and Nothingness, pg. 236)

For all we know, we are already in a virtual reality, where the headsets are our eyes and the cord is our ocular nerve: we are imagining the world around us, constructing it in a dialectical process with other minds.

This is also marginally similar to the process of discursive imaginative negotiation in childhood games — best exemplified by arguments between kids during the process of imagination and play. These debates can be about whether unicorns can fly, who gets to have the most magic, and who is the bad guy in the game. And they follow a general process. For instance, one kid posits an imaginative rule (unicorns cannot fly, you are the bad guy and I’m the good guy, magical snowmen melt in the summer). Another kid disagrees. Negotiation ensues. They hopefully negotiate until they come to some consensus that allows them to play on. If they don’t, then the game is over and they both stop pretending — the imaginative blockchain collapses because consensus cannot be reached.
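To make this negotiation loop concrete, here is a minimal Python sketch of it; every name here, from Player to negotiate to the sample rules, is invented purely for illustration, not drawn from any real model. A rule is proposed, each participant accepts or rejects it, counter-proposals follow, and the shared game either reaches consensus or collapses.

```python
import random

# Toy model of imaginative negotiation: propose a rule, gather acceptances,
# try counter-proposals, and either reach consensus or let the game collapse.

class Player:
    def __init__(self, name, agreeableness):
        self.name = name
        self.agreeableness = agreeableness  # probability of accepting any given proposal

    def accepts(self, rule):
        return random.random() < self.agreeableness


def negotiate(players, proposals, max_rounds=5):
    """Return the first rule everyone accepts, or None if no consensus is reached."""
    for proposal in proposals[:max_rounds]:
        if all(player.accepts(proposal) for player in players):
            return proposal  # consensus: the rule is now "real" inside the game
    return None  # no consensus: the game ends and the imaginative blockchain collapses


kids = [Player("Ana", 0.6), Player("Ben", 0.5), Player("Cam", 0.7)]
agreed = negotiate(kids, [
    "unicorns cannot fly",
    "unicorns can glide but not fly",
    "only baby unicorns can fly",
])
print(agreed or "game over - no shared reality")
```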

My potential claim is that reality itself is constructed through a process fundamentally similar to the imaginative negotiation kids use to construct their pretend landscape. Just as the kids generate an imagined world together, I may unconsciously imagine my entire surrounding landscape, generating it just as worlds in a game are generated.

This is not a solipsistic view — I am not solely responsible for my world, but my imagination (or Aristotle’s phantasia) cooperates with other imaginations to produce my reality. You cannot genuinely play a pretend game alone, and you cannot genuinely imagine a reality alone. Wittgenstein’s private language argument, demonstrating that one cannot have a genuine personal language, applies here. Without Others to challenge your imaginative constructions and to negotiate with you, these constructions have no content and no meaning. If I am pretending alone and I just declare that a stick is a wand, no one can question me. But what if I forget what I originally classified the stick as? Or what if I change my mind and decide that the stick is now a staff, a gun, or a dragon’s tail? There is no one to challenge me. Thus, the stick has infinite and almost indistinguishable meanings, and therefore no meaning at all. I could engrave a sign of a dragon or a wand on the stick, giving it some permanent meaning, but then I would not be using my own private language — I would be relying on a symbolic language that Others understand.

Words cannot have meaning without Others to verify their meaning and negotiate with you to determine their meaning. In the same way, objects in an imagined landscape cannot have meaning, content, structure, or stability without Others to negotiate with you as to their meaning. This is a powerful argument against solipsism for those who accept my form of “imaginative blockchain idealism.”

I think that there is inductive evidence for my imaginative blockchain idealism. Consciousness has some interesting similarities with the blockchain. Sounds kinda crazy, but hear me out for a second.

Just like nodes in the blockchain, consciousness is distributed. Each human within the tapestry of human consciousness is a distinct node in that immense network of humanity. And each can act individually. We can also act in concert with other humans, and in groups. We have autonomous functions that we share in common with all of humanity: the behaviors, psychologies, and biological routines that we all play out in sometimes synchronous ways. (E.g. millions of humans go to sleep and wake up at the same time). To put it in object-oriented programming terms, “human” is a Class, and every individual human is an Object, an instantiation of that Class. We all share the same fundamental functions (like sleep(), eat(), or move()) and many global variables (like is_mortal=True). Death is just our node in the network of consciousness switching from ON to OFF — our Object ceasing to exist while the Class remains in existence, along with billions of other Objects. Not so bad when you think of it that way, right?
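Since the analogy is explicitly object-oriented, here is a minimal Python sketch of it. The class and method names are just the ones the analogy above suggests, and nothing here is meant as a serious model of persons.

```python
class Human:
    """The Class in the analogy: the functions and variables all of humanity shares."""
    is_mortal = True  # a shared "global variable", identical for every instance

    def __init__(self, name):
        self.name = name
        self.alive = True  # this node's ON/OFF state in the network of consciousness

    def sleep(self):
        pass  # shared autonomous routine; millions of Objects run it in rough synchrony

    def eat(self):
        pass

    def move(self):
        pass

    def die(self):
        self.alive = False  # one Object switches OFF; the Class and billions of others persist


# Every individual human is an Object, an instantiation of the Class.
alice, bob = Human("Alice"), Human("Bob")
alice.sleep()
bob.die()
print(Human.is_mortal, alice.alive, bob.alive)  # True True False
```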

In the blockchain, if a node is attempting to scam the blockchain community in some way, then individual nodes will fail to validate the transaction(s) and eventually the node will be excluded from the blockchain. This is a stretch, I know, but is it possible that being rejected for insanity is fundamentally equivalent to being excluded from a blockchain? During the process of the imaginative negotiation of what constitutes reality, the insane person usually loses, and therefore they are excluded from the network; they cannot see the reality that the community has agreed upon.

“But maybe that isn’t possible. Maybe the mind of the majority is always the healthy mind, simply by virtue of its numbers. Maybe it’s the definition of madness to believe I’m right and everyone else is wrong, to find my thoughts rational and reasonable when almost the entire world finds them damaged and flawed.” ― Stacey Jay, Of Beast and Beauty

This might even have metaphysical significance. If the Mad are not capable of participating in the imaginative negotiation process that defines our reality, if they cannot add nodes to the blockchain, then they are excluded from reality itself. Their decisions about what an imaginative prop might mean are taken as meaningless (e.g. they cannot label a stick as a wand; that is prohibited to them). Their words become empty and vacuous. Once other humans reject your words and see your behaviors as insane, you can no longer participate in reality — which just reinforces your insanity. The network of the imaginative blockchain has perma-banned you. You are excluded from reality and must live alone in your own imagination. (For more see Foucault, Madness and Civilization). Your imaginings become hallucinations.

Only those nodes that are in harmony with the paradigm of the blockchain, which constructs the phenomenal unity of reality, will thrive in this environment we call existence.

And our imaginations determine what the world is to a far greater degree than we think. For instance, what turns a metal fence into a border between nation-states? The metal fence is the supposed “sensory reality.” But the border, the nation-state — these are not sensory experiences. They are products of our imagination. “America,” “China,” “Africa,” “Seychelles,” “Mongolia” — all of these things exist not in my sensory experience, or in anyone else’s sensory experience, but in the collective imagination of humanity. In the same way, the process which turns a hunk of shaped wood into a “table,” that turns a pole into a “sign,” that turns a scribble of charcoal into a “language” — this is all the human imaginary. As you read this, you are staring at a series of shaped lines and imagining things. You are hallucinating language into existence. But your hallucinations have meaning, because they have passed the winnowing of imaginative negotiation and are agreed upon.

The fact that they are imaginary does not make these constructions any less real. After all, the human imagination has more impact on reality than gravity. That’s hyperbole, but it does seem that way from inside human experience: the imagination has more impact on our subjective, first-person, phenomenological experience — the only experience we truly know — than gravity does. How much time do you spend thinking about gravity? How much time do you spend thinking about relationships, language, countries, economies, jobs, ethics, and other imagined human constructions? Which one shapes your direct subjective experience more?

One epistemic benefit of the imaginative blockchain concept, pointed out by my friend Blake, is that it effectively explains why humans and other complex objects are conscious while less complex objects like plants, rocks, and atoms are not conscious: humans are able to solve computational problems required for consciousness and qualia. Just as one’s computer has to solve an intricate mathematical problem to produce or “mine” one bitcoin, and an actual mining company has to solve a physical problem to extract one ton of gold from the Earth, perhaps one’s mind has to solve a complex computational problem to produce one (1) qualia. Essentially, the reason why an abacus cannot mine bitcoin while a supercomputer can is the same reason why a human can experience qualia while a hunk of wood cannot. Humans have qualia because we have the computational complexity to solve the problems of the consciousness-blockchain. To oversimplify, we have vibes because we are complex enough to generate them.

Just as a bitcoin mining machine can request the network for one bitcoin, perhaps one mind can request the trans-phenomenal unity of reality for one qualia. One vibe please, sir.
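For readers unfamiliar with how mining actually works, here is a bare-bones proof-of-work loop in Python; the analogy simply swaps “one bitcoin” for “one qualia.” The block_data string and the difficulty parameter are illustrative stand-ins, and this is a toy picture of mining mechanics, not a claim about how minds compute anything.

```python
import hashlib
from itertools import count

def mine(block_data: str, difficulty: int) -> int:
    """Bare-bones proof-of-work: find a nonce whose SHA-256 hash starts with
    `difficulty` leading zeros. In the analogy, 'mining one bitcoin' becomes
    'mining one qualia', and difficulty stands in for the computational
    complexity a system needs before it can produce any vibe at all."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the token (of bitcoin, or of qualia) is minted

# Low difficulty so the toy search finishes quickly; a real chain uses a far harder target.
print(mine("the vibe of a mango", difficulty=4))
```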

If this theory seems non-rigorous or speculative that’s because it is. This theory is still very vague to me, but I will try to specify it later. I will probably end up turning this into a full paper once I’ve nailed down some of the concepts. Always appreciate criticism, comments, and questions.

If the word qualia doesn’t make sense to you, essentially think of it as a vibe. Certain things have vibes: what they are “like” to experience that thing. E.g. because a bat has experiences, even if they are not necessarily conscious, it has a kind of vibe — a qualia. And it’s impossible to know a bat’s vibe — see Nagel’s What Is It Like To Be a Bat? (A better name for this paper: How To Vibe With a Bat?). Think of qualia as a technical word for vibe, where its definition is an individual instance of subjective, conscious experience. When you see and eat a mango, for example, its yellow-ness, taste, and texture constitute aspects of its qualia: the vibe of the mango.

And here we go off the deep end: maybe mentally ill people, through hallucinations and other “irreal” experiences, or neurotypical people, through psychedelics, spirituality, and other transcendent experiences, can hack the blockchain of consciousness and experience qualia that they are not ‘meant’ to experience. In the same way that we can counterfeit currency and (at least conceptually) hack blockchains to produce unearned cryptocurrency, we may be able to bypass the authentication methods of the phenomenal unity of reality to experience incomprehensible things. We can escape the vibe check, and generate vibes at will. We can bootstrap vibes. Or in more philosophical terms, we can experience qualia without going through the imaginative negotiation process. (An individual generating qualia alone may be preempted by my previous section on the private language argument though).

To close, it’s important to remember Jung’s words: “Beware of unearned wisdom.” It’s not clear how we should react to unearned qualia and stolen vibes. We should investigate more.

What happens when you hack the blockchain of consciousness and steal undeserved qualia from the universe? Who prosecutes vibe-crimes, qualia-theft, and consciousness-?

Citations

[1] Johnston, M. (2011), On A Neglected Epistemic Virtue. Philosophical Issues, 21: 165–218. doi:10.1111/j.1533-6077.2011.00201.x

[2] Note the possibility of non-perceptual experiences like hallucinations, which a Yetter-Chappell idealist would not characterize as “real.” However, these experiences are also not perceptions.

[3] Dennett, Daniel C. (1991). Consciousness Explained. New York: Little, Brown. Pg. 361–362.

[4] Nagel, Thomas (1971). Brain bisection and the unity of consciousness. Synthese, 22 (May): 396–413.

[5] Marcel, A.J. (1993). Slippage in the unity of consciousness. Ciba Foundation symposium, 174, 168–80.

[6] Bayne, Tim (2007). The Unity of Consciousness: a cartography. Cartographies of the Mind. 201–210.

[7] Ryle, G. (2009). The Concept of Mind. London, UK: Routledge.

Categories
Essays Philosophy Politics

Compensating for What? Dworkin, sociology, and mental illness

Introduction: Just Compensation?

“What we seek is some kind of compensation for what we put up with.”

― Haruki Murakami

Who should society compensate? Which differences in outcome does justice require that we rectify? Dworkin argues that a person with handicaps or poor endowments is entitled to compensation, while a person with negative behavioral traits like laziness or impulsivity is not entitled to compensation. To argue for this claim, he draws a distinction between option luck, or the luck involved in deliberate risky decisions made by the individual, and brute luck, “a matter of how risks fall out that are not in that sense deliberate gambles.”[1] Being handicapped by forces out of your control is an example of what Dworkin would call brute luck. As handicaps are due to brute luck and are out of the individual’s control, they deserve some form of compensation. On the other hand, behavioral traits are the result of option luck and therefore do not merit compensation. As he puts it in more colloquial terms, “people should pay the price of the life they have decided to lead.”[2] This is Dworkin’s just compensation principle.

I will argue that this principle does not account for sociological and biological factors that affect our behavioral traits and our decision-making, making it much more difficult to justify only giving compensation to those with handicaps and not to those who suffer due to bad decisions. Dworkin might respond with his caveat that if a person has a behavioral impairment like a severe craving, and the person judges this craving as bad for their life-projects, it ought to be considered a handicap deserving of compensation. However, I will conclude that this caveat fails in the case of mental illness. Ultimately, the just compensation principle is an inadequate way to think about egalitarianism and justice.

Not Just Gambles

Dworkin’s just compensation principle states that our disadvantages in resources due to circumstances outside of our control are worthy of compensation, whereas disadvantages due to our deliberate gambles or lifestyle choices should not be compensated. For instance, if someone is born in poverty and suffers from the long-term effects of malnutrition, they deserve compensation for this brute luck. But if someone decides to spend every waking hour surfing for their first forty years of life, and then ends up with very few marketable skills and is unable to find employment, they do not deserve compensation. Another example of a case undeserving of compensation might be someone who decides to gamble and subsequently loses all their earnings. In these cases, Dworkin argues, the individuals have made deliberative lifestyle choices that resulted in bad option luck and decreased their access to internal and external resources. They have intentionally rolled the dice. These situations are not the result of brute luck, but are consequences of deliberative choices, and therefore do not deserve compensation from society.

How is the distribution of gambling machines determined? It strongly influences your probability of gambling.

However, these lifestyle choices are not as deliberative as Dworkin suggests. Consider the case of the gambler. Imagine a young person on a Navajo reservation decides to start gambling because there is a casino nearby, because her friends gamble and encourage her to participate, and because the local economy depends on the casino. She is also misinformed about gambling, due to cultural norms, lack of education, pervasive advertising, and other situational factors. She loses all her savings in several gambling sprees. A simple generalization of Dworkin’s theory would dictate that she is suffering the consequences of option luck and is not entitled to compensation. But this view ignores the situational factors that drove the person to gambling.

Someone who is born in an area with no casinos and strong cultural norms against gambling, who receives a good education, and has friends who mostly go to colleges and not casinos, is not subject to negative situational factors of comparable strength or frequency. Gambling may not even come to mind as a serious option for this more privileged individual. Therefore, our choices are deeply influenced by the brute luck of being born in a harmful environment. Our brute luck impacts our options and our decisions. Even if gambling is an exercise of option luck, it is arguably still worthy of compensation when someone’s decision to gamble is strongly influenced by brute luck factors outside of their control. In this sense, the gambler’s poor choices which led to bad option luck are an indirect consequence of the brute luck of being born with certain situational factors.

This case is not imaginary. Due to cultural and sociodemographic factors, a person born on a Native American reservation is twice as likely as the average person to practice pathological gambling.[3] The strong influence of surroundings on behavior has been generalized by studies which find that decision-making processes are profoundly influenced by sociocultural factors outside of our control.[4] And these “brute luck” factors do not just influence minor decisions, but shape our fundamental decisions about life projects, goals, and lifestyles. For example, a person is far more likely to decide to marry at a young age if they were raised Mormon in Utah Valley than if they had a secular childhood in New York City. This weakens Dworkin’s case that losses due to “deliberative gambles” or lifestyle choices should not be compensated, while losses due to brute luck should be compensated. Apparent choices are profoundly shaped by brute luck. It would be a superficial misrepresentation to call these choices intentional ‘gambles.’

Brute luck genetics & personality

The Big-Five model of personality, currently the best-supported and most accepted scientific model of personality.

Another aspect of brute luck is genetics. On a surface level, genetic factors seem to be separate from decision-making processes. But most of us will readily accept that our personality shapes our choices. And research confirms that personality affects our decisions in a wide variety of contexts.[5] For example, people with high openness to experience are far more likely to engage in high-risk behaviors.[6] If personality is largely or even partially a product of brute luck, and personality shapes our choices, that implies our decisions are partly determined by brute luck. Therefore, our gambles are not as deliberative as they seem and may deserve compensation.

It turns out that a significant proportion of personality is determined by brute luck in the form of genetic inheritance. A review of behavioral genetic studies found that about 20-60% of the phenotypic variation in personality (also called temperament) is determined by genetics.[7] Pairs of twins reared apart share an average personality resemblance of .45, suggesting that almost half of their personality is rooted in genetics.[8] Another study found that genetics explain about 40-60% of the variance in Big 5 personality traits.[9] The empirical evidence concurs that our personality, which shapes our decision-making, is in large part determined by genetic factors. For example, someone who genetically inherits the personality trait of openness to experience is far more likely to seek gambling as a source of novelty.

Dworkin’s defenses

How would Dworkin respond to this objection? He notes that the distinction between brute luck and option luck is a spectrum rather than a complete dichotomy. He accepts that brute luck influences our decisions, making the distinction between option and brute luck far messier. Therefore, he might argue that we should just compensate losses to the extent that they are caused by brute luck. For example, if hypothetically 50% of a person’s personality is determined by genetics and their personality shapes 30% of their choices, then 15% of their choices will be genetically determined. If we add in another 10% due to sociological influences, Dworkin’s just compensation principle might dictate that we compensate only that 25% of the person’s losses due to behavior caused by brute luck. Quick justice maths. But it seems inordinately difficult or impossible to calculate the appropriate compensation by tracing decisions to their root causes. This suggests that Dworkin’s entire scheme of compensation is not practically implementable, as it requires calculating the incalculable to figure out if losses are caused by brute or option luck.
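Just to make the arithmetic of this hypothetical (and, as just argued, practically incalculable) scheme explicit, here is the toy calculation as a short sketch; the function name and the percentages are my illustrative placeholders for Dworkin-style hypotheticals, not empirical estimates.

```python
def brute_luck_share(genetic_share_of_personality: float,
                     personality_share_of_choices: float,
                     sociological_share_of_choices: float) -> float:
    """Fraction of a person's choice-driven losses attributable to brute luck,
    under the hypothetical additive model sketched in the text."""
    genetic_contribution = genetic_share_of_personality * personality_share_of_choices
    return genetic_contribution + sociological_share_of_choices

# Hypothetical figures from the paragraph above: 50% of personality is genetic,
# personality shapes 30% of choices, plus 10% from sociological influences.
print(round(brute_luck_share(0.50, 0.30, 0.10), 2))  # 0.25 -> compensate 25% of the losses
```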

If just compensation relies on calculating some obscure combination of brute luck and option luck, this process is incalculable. There’s no way of knowing the parameters or how to use them to calculate a just result.

Furthermore, Dworkin might say that the examples of sociology and genetics do not count as brute luck, as there is still an element of personal choice in both cases. A person born into a gambling-promoting culture will be more likely to gamble, but they are not compelled to do so. Additionally, all people are subject to social influences on their behavior, and it is difficult to say that one environment is unequivocally worse than another. For example, a wealthy person not born on a reservation may not be influenced by as much pressure to gamble, but rather may be subject to more influences to take cocaine, embezzle funds, or engage in insider trading. Therefore, Dworkin could make a case that sociological and genetic influences on our behavior do not constitute true brute luck, because all people are subject to these influences, and they still allow a significant element of choice. Genuine brute luck does not allow for any choice: it is a situation completely out of our control, like a hurricane or a physical disability.

However, Dworkin’s counter-argument here contradicts his previous response. The claim that brute luck only exists in conditions that do not allow for any choice is mutually exclusive with the idea that there is a spectrum between brute luck and option luck. Dworkin cannot have his spectrum and his dichotomy too. Additionally, it is almost certainly the case that some situations involve more negative brute luck than others. While all situations involve brute luck that impacts our choices, this does not imply that we should completely ignore the differences between these situations. Some environments are simply worse than others.

Cravings as handicaps

Finally, Dworkin might respond by arguing that his theory has already addressed this problem of decision-making shaped by brute luck. He agrees that personality traits shape our decision-making. Some people, he mentions, might be cursed with a personality that includes insatiable cravings for sex. If someone has a severe craving that they view as an impediment to the success of their life-projects, it may be considered a handicap worthy of compensation:

They regret that they have these tastes, and believe they would be better off without them, but nevertheless find it painful to ignore them. These tastes are handicaps; though for other people they are rather an essential part of what gives value to their lives.

(Dworkin, 303).

Dworkin therefore makes an exception in this case and reevaluates the craving as a kind of handicap. Severe cravings can be added to the list of things that a person in the hypothetical insurance market could purchase insurance against. This seems to be Dworkin’s best response to the problem of the blurred lines between option luck and brute luck. After all, it allows him to classify negative behavioral traits as cravings that are worthy of compensation only if the person views the craving as harmful for their life-projects. However, with the rest of this paper I will argue that this response fails as well, because it fails to account for the case of mental illness.

The case of mental illness

The key problem with Dworkin’s treatment of cravings is his use of the glad-not-sad test to evaluate whether a craving is a genuine handicap or a personal failing: “if an individual is glad not sad to have a preference, that preference falls on the side of her choices and ambitions for which he bears responsibility rather than on the side of her unchosen circumstances.”[10] This rule does not account for the case of a mentally ill person who irrationally evaluates harmful cravings as beneficial for their life-projects.

For example, a person with severe schizophrenic paranoia may have an irrational craving to eliminate all communication devices from their home to escape the eyes of government spies. They may view this craving as beneficial for the life-project of protecting their family. Therefore, under Dworkin’s framework for compensation of cravings, this person would not receive compensation because they are irrationally glad that they have the irrational preference. Dworkin does not account for the possibility that the very process by which we decide whether a craving helps our life-projects will itself be subject to brute luck factors like mental illness. Mentally ill people who have negative cravings (e.g. cravings tied to drug addiction or paranoid behaviors) and judge those cravings as good would not receive compensation for the consequences of their cravings.

More and more, Dworkin’s view of option luck as ‘deliberative gambling’ seems fragile and indefensible.

Furthermore, it is problematic for Dworkin’s theory of justice that people who judge their own mental illness as good for their life projects will not be compensated. For example, someone like Van Gogh, who viewed his bipolar disorder as essential for his artistic life-projects, would never receive compensation for the harmful consequences of this disorder. After all, it is a disorder that he is generally “glad” rather than “sad” about. However, it seems deeply arbitrary that those who see their mental illness as positive should not be compensated simply because of their outlook.

This scheme of compensation even creates perverse incentives to treat one’s disorder as harmful for one’s life-projects even if a different outlook could make it beneficial. Imagine that two persons are subject to the same brute luck factor of having mental illness, and one person decides to view it as a positive factor that furthers their life projects while the other decides to view it as an impediment. The one who reevaluates the disorder as beneficial for their life-projects is almost punished for their decision by a scheme which withholds compensation when a person views a disorder as positive.

Dworkin might respond that mental illness is also something that could be insured against in the hypothetical insurance auction. In this auction, we would have knowledge about the likelihood of mental illness, as well as the differing levels and costs of coverage for mental illness. If one does not insure against mental illness, then they would not be compensated for the consequences of this mental illness.

Imagine an auction where you’re not buying items, but instead are buying insurance for potential brute luck factors like being born with a disability, a mental illness, into an oppressive or negative environment, and more.

However, given the rarity of mental illness it seems unlikely that anyone would purchase this insurance. And this hypothetical auction can hardly be seen as relevant to the practical implementation of just institutions. After all, how can we know what people would choose in the hypothetical auction? How can we simulate it? How can we measure and interpret the results in creating our institutions? Ultimately, the hypothetical insurance auction seems more like an idle thought experiment than a method that could salvage Dworkin’s theory of just compensation.

Conclusion

I have attempted to cast doubt on the distinction between option luck and brute luck, in order to show that variations in option luck (the results of our decisions) are largely explained by variations in brute luck (factors outside our control). If this claim is true, then Dworkin’s compensation principle cannot stand, because it relies on a distinction between brute and option luck. Furthermore, Dworkin’s view that bad option luck caused by bad behavioral traits should not be compensated rests on the rational choice model, which treats human behavior as largely the product of logical deliberation on available information, used to reach conclusions about and act within the world. This deliberative choice model allows Dworkin to draw a distinction between a resource paucity due to brute luck and a resource paucity due to option luck.

But Dworkin’s view of human decision-making is incomplete at best and misguided at worst. This paper gives two strong counterexamples to the rational choice model: sociological factors and biological-genetic factors. These examples suggest that a large proportion of human decision-making is the direct or indirect result of brute luck. As such, it seems that even the bad consequences of our intentional choices might merit compensation. Dworkin offered two replies, both of which fail due to internal contradictions. Ultimately, he offers the caveat that if a person judges a craving to be harmful for their life-projects, it merits compensation. But this caveat fails as well when we apply it to mental illness. Therefore, Dworkin’s model needs serious reworking or replacement. Focusing on equality of resources, and distributing resources as compensation only for the consequences of brute luck and not for the consequences of option luck, fails to account for sociological, biological, and psychiatric influences on our behavior.

Works Cited

  1. Dworkin, R., 2000, Sovereign Virtue, Cambridge MA: Harvard University Press. Pg. 73.
  2. Dworkin, pg. 74.
  3. Patterson-Silver Wolf Adelv Unegv Waya, David A et al. “Sociocultural Influences on Gambling and Alcohol Use Among Native Americans in the United States.” Journal of gambling studies vol. 31,4 (2015): 1387-404. doi:10.1007/s10899-014-9512-z
  4. Bruch, Elizabeth, and Fred Feinberg. “Decision-Making Processes in Social Contexts.” Annual review of sociology vol. 43 (2017): 207-227. doi:10.1146/annurev-soc-060116-053622
  5. Vroom, V. H. (1959). Some personality determinants of the effects of participation. The Journal of Abnormal and Social Psychology, 59(3), 322-327.
  6. Marco Lauriola, Irwin P Levin, Personality traits and risky decision-making in a controlled experimental task: an exploratory study. Personality and Individual Differences, Volume 31, Issue 2, 2001, Pages 215-226, ISSN 0191-8869, https://doi.org/10.1016/S0191-8869(00)00130-6.
  7. Saudino, Kimberly J. “Behavioral genetics and child temperament.” Journal of developmental and behavioral pediatrics : JDBP vol. 26,3 (2005): 214-23.
  8. Bratko, Denis, Ana Butković, and Tena Vukasović Hlupić. “Heritability of Personality.” Psychological Topics, 26 (2017), 1, 1-24. Department of Psychology, Faculty of Humanities and Social Sciences, University of Zagreb, Croatia.
  9. Power, Robert & Pluess, Michael. (2015). Heritability estimates of the Big Five personality traits based on common genetic variants. Translational psychiatry. 5. e604. 10.1038/tp.2015.96.
  10. Olsaretti, Serena, and Richard J. Arneson. “Dworkin and Luck Egalitarianism: A Comparison.” The Oxford Handbook of Distributive Justice. Oxford University Press: June 07, 2018. Oxford Handbooks Online. Accessed 27 May 2019. Pg. 19.
Categories
Philosophy

Why Literature Matters: The Aporetic Approach

“How wonderful that we have met with a paradox. Now we have some hope of making progress.”

— Niels Bohr

Framing

Having “been reduced to the perplexity of realizing that he did not know… he will go on and discover,” Plato writes of the boy who “feels the difficulty he is in” after attempting to solve Socrates’ riddles.[1] Socrates argues that “by causing him to doubt and giving him the torpedo’s shock” of his own ignorance, “he will push on in the search gladly, as lacking knowledge; whereas then he would have been only too ready to suppose he was right.”[2] Encountering contradictions and complexity beyond his comprehension plunged the boy into aporia — an impasse, a quandary one cannot resolve, a state of puzzlement, a doubting and bewilderment, a being-at-a-loss. Aporia is the dazzling of the mind by the intricacy of existence.[3] While this state seems empty, the paucity of knowledge in aporia is fertile. Specifically, aporia created by literature offers the following routes of learning: it fosters epistemic humility by revealing our uncertainty, broadens our possibilities by expanding our imaginative horizons, and promotes existential authenticity.

This paper focuses on aporetic literature, a genre of fiction that is usually long-form, complex, and narrative or poetic. Fiction itself is characterized by the way it “invites imaginings.”[4] What distinguishes aporetic literature is a specific “mode of persuasion” distinct from the realist mode of persuasion.[5] While some authors purport to represent the real world and offer the reader closure, aporetic authors “multiply mysteries and indeterminacies and keep the reader guessing to the end and beyond.”[6] Instead of straightforwardly representing the world, aporetic literature is enigmatic, perplexing, and questioning, making interpretation difficult. It is aporia-promoting. My paradigm examples are intricate masterworks with nuanced internal tensions, like Dostoevsky’s The Brothers Karamazov, Shakespeare’s Hamlet, Whitman’s Leaves of Grass, Heller’s Catch-22, and The Bible. However, any aporia-causing fiction fits in this category, and virtually any text—Dr. Seuss’s storybooks, Disney’s Frozen, Egyptian myths, a peer’s Instagram poetry—could produce aporia. In fact, almost all literature contains a period of puzzlement between its opening and its resolution. Thus, aporetic literature is on a continuum and it overlaps with many other genres. To avoid excessive scope, I will concentrate on the most aporia-promoting literature.

How can fictional imaginaries instruct us in meaningful ways about the world outside the fiction? This paper aims to provide a solution to this puzzle of instructive literature, which asks how imaginative representations can change our perspectives or teach us in ways relevant to the real world. This is adjacent to the puzzle of moral persuasion[7] but is broader, including not just the way literature can teach us about morality, but also about the world, its meaning, and ourselves. I argue that imagination guided by aporetic literature can be genuinely instructive.


Rather than educating us on a predefined landscape of knowledge, persuading us to hold certain beliefs, providing specific answers, or promoting moral skills, aporetic literature primarily serves as a way to confront readers with intractable dilemmas. It offers a range of challenging and often contradictory perspectives that create doubt and questioning (aporia), leading the reader to a fertile space of possibility where they can recognize their limits, explore alternate worldviews, create their own values, and construct an authentic personal interpretation of both the text and life itself. There are three vital ways we learn from imaginaries guided by aporetic literature:

  1. Epistemic humility. The aporia created by the complexity and internal tension within literature causes us to recognize our ignorance. In essence, we gain insight about our lack of knowledge.
  2. Openness. Literature leads us to recognize the breadth of possibilities, widening our imaginative scope and our ability to generate new ideas.
  3. Existential authenticity. Aporia urges the reader to choose to create herself. This choice leads to greater existential authenticity as formulated by Kierkegaard and Nietzsche.

None of these routes of learning presume any particular view of ethics. After all, aporetic literature does not stress a particular ‘correct’ morality, but instead engenders aporia, opening up a mental space where individuals can develop new values or explore their existing values. Furthermore, I do not rely on any particular view of truth, where the purpose of fiction might be to track or remain faithful to the Truth. The three roles of aporetic literature function regardless of the moral or epistemic position the reader takes—a desideratum which most views of learning-from-literature cannot fulfill.

What is Learning?

Learning in my view is broader than just epistemic improvement or skill-acquisition.[8] These are both forms of learning, but they do not capture a full picture. For instance, in transformative experiences as described by L.A. Paul, one’s entire way-of-perceiving, values, and phenomenological perspective undergo a metamorphosis.[9] Just as you cannot know what it would be like to be a vampire until you become one, you cannot understand what it will be like to be yourself after personal transformation. After a transformative experience, one’s window into the world is shattered. One cannot say that after a transformative experience our views have improved, since after the paradigm shift our standards of what “improvement” itself even means have changed. This paradigm-shift kind of learning cannot be understood as mere epistemic improvement. And aporetic literature is not strictly truthful or knowledge-promoting, but is better called illuminating, enlightening, or instructive.

Paradigm shifts do not just entail epistemic improvements, but new lenses through which to view the world. Explaining transformative experience requires a wider conception of learning as the process of growing one’s understanding, where “understanding” is the ability to see and utilize varying perspectives. Learning is not just refining the glasses we use to view the world; it is not just improving the glasses’ prescription. Rather, transformative learning makes our window more kaleidoscopic. It shatters our existing lens, adding new perspectives and layering these new lenses on top of or alongside our earlier lenses. Through literature, we convert our solitary pair of tinted glasses into a many-tinted kaleidoscope.

I. Epistemic humility

(a) The method of aporia

Unlike the Sophists, Socrates does not vend his wisdom away or allow his students to “mindlessly swallow the conclusions of their mentors.”[10] Rather, he uses a questioning dialectic to induce aporia, an uncertain state of possibility which urges students to “discover within themselves a multitude of beautiful things, which they bring forth into the light.”[11] With Socrates, students learn valuable mental skills and perhaps even wisdom, rather than tokens of knowledge “they can buy from time to time for a drachma.”[12] Even if this is more painful, as the student is often “distressed and annoyed at being so dragged…into the light of the sun,”[13] it is far more rewarding. While knowledge or skill acquisition is valuable, Socrates surpasses a sole focus on this method and encourages an aporetic approach as well. This approach to learning emphasizes epistemic humility—recognition of our uncertainty and limited perspective—as a first step. As Confucius described in the Analects, “to know when you know; and when you do not know; that is wisdom.”[14]

Through contradiction, metaphor, and other narrative and stylistic elements, literature exposes what we do not know. Aporetic literature is Socratic, provoking internal dialogue within the reader about significant questions instead of dictating conceptual truths. The aspiration of aporetic authors is “perhaps most of all to frustrate reason itself with the sheer complexity of their projects,” moving past the restrictions of internal consistency and direct matter-of-fact communication.[15] Rather than erecting “edifices of concepts” with “rigid regularity,” these authors use their fictions to build an “infinitely complex cathedral of concepts upon shifting foundations and flowing waters.”[16] Thus, the author leaves the reader in a state of internal tension and psychological ambiguity.

Aporetic literature does not employ optimal communication techniques to make understanding or knowledge-acquisition easier. The opposite is true. As Kierkegaard wrote, “I conceived it as my task to create difficulties everywhere.”[17] Aporetic authors create stumbling blocks in their books to trip up the reader. Similarly, in Socrates’ method of elenchus, he creates difficulties for his conversation partners by exposing internal contradictions in their views.

(b) Exposing uncertainty

Aporetic literature depicts a series of impossible dilemmas, unanswerable questions, and gripping quandaries. These aporias reveal our ignorance and uncertainty. For instance, in The Brothers Karamazov, Ivan asks his brother Alyosha a confounding question:

“Tell me straight out, I call on you—answer me: imagine that you yourself are building the edifice of human destiny with the object of making people happy in the finale, of giving them peace and rest at last, but for that you must inevitably and unavoidably torture just one tiny creature, [one child], and raise your edifice on the foundation of her unrequited tears—would you agree to be the architect on such conditions?. . . And can you admit the idea that the people for whom you are building would agree to accept their happiness on the unjustified blood of a tortured child, and having accepted it, to remain forever happy?”[18]

The Brothers Karamazov, by Fyodor Dostoevsky

The primary effect of this passage is not knowledge-acquisition, but aporia. I realize my own ethical ignorance. I do not know what the correct ethical response to this question is; I barely know where to start. Dostoevsky has thrust me into aporia. This improves my epistemic accuracy, in a sense, because I now know what I do not know. But more importantly, the aporia has created a space of possibility—I can now re-evaluate my values.

As another example, the novel Catch-22 satirizes war, puzzling the reader through conflicting characters’ perspectives and disquieting descriptions. The common phrase “Catch-22” can even be seen as a synonym for aporia, as it describes a bewildering problem where the only solution is precluded by the conditions of the problem itself. This reflection by the character Yossarian is one of the most aporia-inducing passages in the novel:

“What a lousy earth! He wondered how many people were destitute that same night even in his own prosperous country, how many homes were shanties, how many husbands were drunk and wives socked, and how many children were bullied, abused, or abandoned. How many families hungered for food they could not afford to buy? How many hearts were broken? How many suicides would take place that same night, how many people would go insane? How many cockroaches and landlords would triumph? How many winners were losers, successes failures, and rich men poor men? How many wise guys were stupid? How many happy endings were unhappy endings? How many honest men were liars, brave men cowards, loyal men traitors, how many sainted men were corrupt, how many people in positions of trust had sold their souls to bodyguards, how many had never had souls? How many straight-and-narrow paths were crooked paths? How many best families were worst families and how many good people were bad people?”[19]

Catch-22, by Joseph Heller

This relentless series of confusing, contradictory questions thrusts us into a knotty situation. As a reader immersed in the novel, my empathy with Yossarian’s plight leads me to try to answer his questions. But I am unable. I cannot resolve his contradictions (lying honest men, crooked straight paths), elucidate valid rationales behind the social structures he challenges (food insecurity, triumphant cockroaches), or simplify all of his questions into a coherent logical structure. My uncertainty is exposed. Aware of my epistemic limits, I lose confidence and become more open to further exploration.

Finally, one of the most influential passages in English literature is Shakespeare’s poignant creation of aporia about death:

“To be, or not to be, that is the question:
…To sleep, perchance to dream—ay, there’s the rub:
For in that sleep of death what dreams may come,
When we have shuffled off this mortal coil,
Must give us pause—there’s the respect
That makes calamity of so long life.
The undiscover’d country, from whose bourn
No traveller returns, puzzles the will,
And makes us rather bear those ills we have
Than fly to others that we know not of?”[20]

Hamlet, Shakespeare

This passage thrusts the reader into a state of questioning. Especially for readers who had not before considered the question of being or non-being, it creates aporia about whether life is worth living. Even for those accustomed to this question, it induces perplexity about what might come after the end, puzzling the will and giving the reader pause.

(c) Reshaping cognition


In Currie’s view, the “sheer complexity of great narrative art…may increase its power to spread ignorance and error.”[21] Complex literary works make language “harder to process,” and this nuanced style generates an illusory sense of learning.[22] Literature engages our emotions and bypasses our epistemic defense mechanisms. Rather than promoting epistemic humility, literature can just reinforce biases and facilitate misguided beliefs.

The aporetic approach answers Currie’s sensible critique of confusing, doubt-creating, and contradictory literary styles. This seemingly obstructive complexity is necessary to create aporia. Aporetic literature is designed not to lubricate the mind’s mechanisms, but to disrupt the smooth operation of the intellect. The aim is not to provide a straightforward set of rules to minimize confusion, but to mimic the intricacy of lived experience and draw out paradoxes. The conflicting ambiguity of aporetic literature is not mere random noise, but carefully constructed and meaningful dissonance that is “strategically opposed to the harmonies it disrupts.”[23] Puzzlement and perplexity are positive effects of aporetic literature, not unfortunate byproducts.

Aporia-promoting writers throw a stick into the well-oiled spokes of our mental equipment. Our synthesis is thwarted. This breakdown discloses our automatic cognitive processes and encourages playful, self-conscious, hypothesis-testing exploration in their place. Once they are in the open, our pre-programmed interpretations can be changed, as “the brain plays with alternative ways of interpreting these elusive, intriguingly unstable representations.”[24] Aporia can help emancipate us from our biases, “breaking their grip to enable new modes of cortical organization.”[25] The dissonances of aporetic literature upset our cognitive habits and “what was dulled becomes visible again as new configurations of meaning, based on new neuronal assemblies, emerge.”[26] Writing aporetic literature is an art of taking-away, luring the reader away from their supposed knowledge. We learn through unlearning.

(d) Epistemic virtues

Epistemic humility is especially important in post-Gettier epistemologies which emphasize epistemic virtues over discrete periods of knowledge-acquisition. Fiction can model characters with epistemic virtues like humility, inquisitiveness, or intellectual courage, and is unique in its capacity to show the complexity of these virtues. For instance, Dr. Frankenstein exhibits a love of learning and passion for discovery, both usually considered epistemic virtues. But Shelley makes it clear that this curiosity is obsessive, power-hungry, and vicious, and thus not a virtue. Pip of Charles Dickens’ Great Expectations develops a vicious preoccupation with increasing his status by acquiring knowledge. His intellectual vices interfere with genuine learning, and the novel exposes his epistemic arrogance.

Tracing these fictional characters in our imaginations encourages epistemic humility. By reading a wide variety of fiction, readers can mediate between different character-models and pursue epistemic virtues with more understanding. Even if literature does not promote knowledge-acquisition directly, it builds the virtues necessary for learning.

II. Openness


Aporetic literature broadens our horizons and expands our range of vision. It offers new vantage points and possibilities. On Kant’s account, when we engage with fiction our minds are “in free play, because no determinate concept restricts them to a particular rule of cognition.”[27] Aporetic literature encourages the “playful application of a multiplicity of concepts.”[28] It can thus expand the scope of our imagination, opening us to endless new possibilities, as “The Brain—is wider than the Sky.”[29] In this way, the author does not give us a thing to see but offers a light by which we may see for ourselves. Even if our behavioral responses do not become “more virtuous” by some moral standard and our models of the world do not become “more accurate” by some epistemic standard, we still learn from literature. We gain broader possibilities, a vaster imaginative scope, and a range of potential responses to the world.

(a) Neuroplasticity

Some learning can happen through fictions that invoke familiar patterns and “strengthen already existing cross-cortical processing networks,” but aporetic literature instead reconfigures neural networks, “rewiring synapses to reshape the brain’s plasticity.”[30] Oft-travelled patterns are important for creating habits, but they come with a loss of flexibility and openness. In aporetic texts, the brain is tossed into a heaving tumult, oscillating between unified structure and dissolving chaos as “the poet’s eye, in a fine frenzy rolling, doth glance from heaven to Earth, Earth to heaven.”[31] This narrative fluctuation keeps the brain open to possibility and maximizes mental adaptability. As Byron declared, “poetry is the lava of the imagination whose eruption prevents an earthquake.”[32] The turmoil of a restless and uncertain mind is sublimated through aporetic literature. Rather than encouraging unconscious repression, aporia prompts the reader to fully experience the destabilizing doubt and respond to it intentionally.

(b) Defamiliarization

The physical world around us can be seen as a massive stage full of props, where props are defined in Walton’s terms as objects which prescribe “principles of generation” for our imaginings.[33] Imagination prompted by aporetic literature can defamiliarize us from our existing props. As literary theorist Shklovsky writes, “the technique of art is to make objects ‘unfamiliar,’ to make forms difficult.”[34] The phenomenon of defamiliarization is a key element of aporetic literature: “the writer shakes up the familiar scene, and as if by magic, we see a new meaning in it.”[35] Our cognition is de-automatized, we are broken from normal robotic routines of perception, and the ossified props around us are made fluid. After our props are defamiliarized, we are able to inscribe new principles of generation upon them. This enables us to re-imagine the world around us with new frameworks.

(c) Imaginative horizons


Even if fiction writers do not understand the psyche better than anyone else, they can still offer us insights into the possibilities of human behavior. Even completely fabricated characters can portray what human behavior and motivation might be like in an alternate reality. For instance, in the novel Lord of the Flies and the more recent The Hunger Games, adolescents are thrust into extreme situations. Quickly, the youths devolve into brutal violence; one of the kids wonders if “maybe there is a beast…maybe it’s only us.”[36] Readers also place themselves in the intense circumstances through immersion. They thus learn more about the possibilities of human behavior and learn how they might respond in similar circumstances. Apocalyptic or utopian literature constructs imaginaries that contradict our occurrent reality. It thereby makes the status quo seem more contingent and less necessary, encouraging interrogations into the ‘way the world is’ and new visions of what is possible. The aporia produced by fictional situations widens our imaginative horizons beyond our quotidian experience of life and behavior.

Furthermore, fiction can instigate imaginings of our personal possibilities. It thus expands our view of our own potentiality:

“I saw my life branching out before me like the green fig tree in the story. From the tip of every branch, like a fat purple fig, a wonderful future beckoned and winked… One fig was a husband and a happy home and children, and another fig was a famous poet and another fig was a brilliant professor…and beyond and above these figs were many more figs I couldn’t quite make out. I saw myself sitting in the crotch of this fig tree, starving to death, just because I couldn’t make up my mind which of the figs I would choose. I wanted each and every one of them, but choosing one meant losing all the rest, and, as I sat there, unable to decide, the figs began to wrinkle and go black, and, one by one, they plopped to the ground at my feet.”[37]

The Bell Jar, Sylvia Plath

Like the character in this passage, we may start to see all the distant branches and figs of our own future by reading about the lives of fictional characters. As Kierkegaard writes, being immersed in an imaginative fictional world allows the reader “to disperse himself among the innumerable possibilities which diverge from himself… the personality is not yet discovered.”[38] Aporetic fiction challenges the reader’s stable identity, deconstructing her epistemic confidence that ‘I am what I am,’ and expands her imagination of her personal prospects. It also offers a way to simulate alternate ways-of-being. Ultimately, this engagement with fiction can be the foundation for authenticity: the “shaping of Dasein’s being into an authentic existence depends upon its first finding itself submerged in the imaginative projections of its infinite possibilities.”[39] The authentic self is built on this primordial flux of possibilities. Without imagining our future possibilities, we cannot become ourselves.

III. Existential authenticity

“You shall not look through my eyes either, nor take things from me,/ You shall listen to all sides and filter them from your self.” — Walt Whitman[40]

(a) What is authenticity?

For the existentialists, our being is always a becoming. We perpetually make choices and enact certain roles to take a stand on who we are. Nietzsche exhorts his readers that we should “want to become who we are— human beings who are new, unique, incomparable, who give themselves laws, who create themselves!”[41] Authenticity in the existential view is not discovering what we already are, but striving towards what we decide to become: “for your true nature lies, not concealed deep within you, but immeasurably high above you.”[42] An authentic identity cannot be static. It is an action: self-overcoming in Nietzsche’s terms, perpetual striving in Kierkegaard’s, self-surpassing in Sartre’s, or in Heidegger’s vocabulary, Dasein’s constant up-surging into the future.[43] Authenticity is a constant sculpting, a “making ourselves, shaping a form out of all the elements—that is the task!”[44] In the search for authenticity we do not seek to know ourselves, but to will a self and become that self.

Further, no one can determine what I am for me:

“No one can construct for you the bridge upon which precisely you must cross the stream of life, no one but you yourself alone. There are, to be sure, countless paths and bridges and demi-gods which would bear you through this stream; but only at the cost of yourself; you would pawn yourself and lose. There is in the world only one way, on which nobody can go, except you: where does it lead? Do not ask, go along with it. Who was it who said: ‘a man never rises higher than when he does not know where his way can still lead him’? [Oliver Cromwell].”[45]

Friedrich Nietzsche

Aporetic literature does not ferry us safely over the stream. Rather, it gives us a jumping-off point to dive into the construction of ourselves. The impetus to this action must arise from the person in question and cannot be imposed from outside. Authenticity cannot be arrived at by simply repeating a set of actions or taking up a set of beliefs; it springs from self-creation. As aporia induces the “shattering of the individual,”[46] challenging our notions of what we are, it catalyzes our self-creation. Aporetic fictions thus jumpstart the reader on their path to authentic becoming-oneself.

(b) Escaping bad faith

What prevents authenticity? The existentialists are in resounding concordance on the answer: it is laziness and bad faith, which lead people to “hide themselves behind customs.”[47] Through the pressures of norms and the inertia of default interpretations, we can become lost in the They (Das Man),[48] forgetting that our identity is a choice, not a circumstance. Aporia-inducing passages shock us out of this laziness, encouraging the reader to “rebel against a state of things in which he only repeats what he has heard, learns what is already known, and imitates what already exists.”[49] As Sartre writes, authenticity arises from having a “lucid consciousness of the situation,”[50] in which we recognize our contingent situation without letting it define us, making a choice to establish an identity.

(c) Emotional engagement

Kierkegaard criticized the culture of his time for the way it promoted detached reflection rather than engaged passionate commitment. Fiction encourages emotional engagement that prompts us to make our own decisions and interpretations. In fact, emotional engagement is required to even comprehend the plots and characters of complex literature.[51] More significantly, only the spark of a personal relationship to a book can ignite a fire worth stoking. In Fahrenheit 451, a woman is consumed in an inferno of paper because she refuses to give up her books to the firemen. After this, Montag reflects that “There must be something in books, something we can’t imagine, to make a woman stay in a burning house; there must be something there. You don’t stay for nothing.”[52] What is this unimaginable quality? What are these intensely personal things in books that some are willing to set themselves ablaze for?


The lesson of literature is that mere theses and universal principles are not enough to provide meaning to our lived experience. After all, could you die for a set of knowledge or a series of ‘true’ postulates? Could you live based on facts alone or according to a sophisticated model? As Kierkegaard wrote:

“The obliging, immediate, wholly unreflective subject is naïvely convinced that if only the objective truth stands fast, the subject will be ready and willing to attach himself to it.”[53]

Soren Kierkegaard

Rather, objective truths are only meaningful when they are infused with emotion and integrated into one’s self. Aporetic literature prompts transformative self-investigation, in which the book serves as a prop for the reader’s authentic reimagination of herself. She encounters aporia and subsequently must decide upon her own values, her self-definition, her own meaning of life. These choices are intensely subjective and only accessible or meaningful to the individual. They are not “knowledge.” Furthermore, they cannot be called “true” according to external standards but become personally true when they are fully appropriated into the individual’s subjective life-view.[54]

Without an inward transformation, yeastless and objective truth is irrelevant. The reader cannot muster the passion to burn for such objectivity. A reader who has only acquired stale knowledge is like “a man who has collected furniture, rented an apartment, but as yet has not found the beloved to share life’s ups and downs with him.”[55] Even if the reader could discover objectivity, it is just an unfurnished apartment until she becomes personally intertwined with this truth. Acquiring knowledge or truth will not change one’s life in any meaningful sense. Rather, our aim when we read aporetic literature should be akin to Kierkegaard’s:

“The crucial thing is to find a truth that is truth for me, to find the idea for which I am willing to live and die…this was what I needed, to lead a completely human life and not merely one of knowledge, so that I could base the development of my thought not on – yes, not on something called objective – something that in any case is not my own, but upon something that is bound up with the deepest roots of my existence, through which I am grafted into the divine, to which I cling fast even though the whole world may collapse. This is what I needed, and this is what I strive for.”[56]

Journal Entry, Soren Kierkegaard

This simplifies into an unavoidable reality: it doesn’t matter if it is true if it is not your own. This is why the woman who burns with her books retorts to the firemen that “You can’t ever have my books.”[57] She didn’t just mean this in the physical sense, as in, ‘I won’t let you burn my books without burning me too.’ She also meant that the books were so intensely personal that no one else could understand them or possess them the way she did. This is why the hero in Fahrenheit 451 is not Beatty, who knows about the ancient books and can quote them while keeping them at arm’s length from his soul. Rather, it is the woman who martyrs herself for a book. Those who gain the most from literature are those who undergo the agonizing process of transplanting books into their minds and hearts.

(d) Liveness

An idea is living if you can live it, if you can act upon it irrevocably, and the idea is dead if you can only conceive it. Furthermore, as William James wrote, “deadness and liveness are measured by a thinker’s willingness to act.”[58] To oversimplify: an idea is live to the extent you can live for the idea. The bold pursuit of a book worth dying for is an adventure worth caring about. Our feeble attempt to gain knowledge from books is a mere side-quest to the task that has real existential significance. Ultimately, pieces of literature should become ways of living. Otherwise books are just the symbol-pockmarked corpses of trees. After reading the poetry, we must become poets of our own lives; after reading the narrative, we must become the narrators of our own stories.

Absorbing literature is not a passive activity in which the reader mentally consumes a series of linguistic units. Rather, “reading is a creative act…without this process of interpretation, we cannot know ourselves.”[59] In other words: reading cannot just be filling the mind with information. If a book is to change a person in any meaningful way, the individual must be prompted to respond to the text. The reader must be like a painter inspired by a blank canvas—creating through reading, using the printed words as a jumping-off point to generate their own ideas. The book functions as a prop for further imagination and action rather than as a script for belief and behavior.

For Kierkegaard, the ethic of a piece of narrative literature is not explicitly given, but is rather “reached by the reader in response to the aporia that the tragedy creates.”[60] The fiction is described as “thereby provoking a self-defining choice.”[61] In line with the aporetic method, Kierkegaard wrote that the purpose of his work was not to “compel a person to an opinion, a conviction, a belief,” but to “compel him to be aware.”[62] Kierkegaard’s method “favors dialogic contemplation of significant questions over the systematic, discursive presentation of conceptual truths.”[63] Through his varying, self-contradicting, difficult-to-decipher pseudonyms and his aporia-promoting textual maneuvers, Kierkegaard ensured that his work required active interpretation rather than passive receptiveness.

As humans we are tossed into the desert of the world. In this wilderness, some hunt for an oasis, a wellspring of meaning, something that will define who they are for them; this is an inauthentic pursuit. The authentic approach is to raise our own building upon the desolation, to dig our own oasis, to stop the mindless searching and construct ourselves wherever we are standing. From Nietzsche’s perspective, “honest exploration of an individual’s inner life and sensibility was more valuable than the objective presentations of impersonal knowledge and wisdom passed on through the ages.”[64] The role of aporetic literature is not to promote acceptance of true facts, as we cannot systematically understand our situation amidst the ambiguous complexity of existence. Rather, for Nietzsche, the true function of this form of art is that it “leads us, despite the impossibility of knowledge, toward a valid intimation of what we truly are.”[65] The fundamental ground or metaphysical reality will always elude us. But we can become more authentic human beings.


Conclusion

In Plato’s Meno, the aporia Socrates induces is likened to the shock of the “torpedo” fish. It slams into the mind, stunning its ability to comprehend, causing us to forget the convenient structures and systems that hoodwink us into believing we understand existence. In this paper I argue that one purpose of literature is to barrage the mind with flurries of these aporetic torpedoes. The staggering uncertainty these aporias create enables us to truly explore. Through epistemic humility, an increase in openness, and a widening of our imaginative scope, literature broadens our perspective, causing us to recognize that the map is not the territory—the systems we build to understand existence are always wrong. After awakening us to this radical uncertainty, literature can prompt its readers to make the existential choices necessary to become authentic.

The complexity and even indecipherability of aporetic literature is a feature and not a bug. Instead of rejecting the complexities, nuances, and idiosyncrasies of literary expression, we should embrace them as essential methods literature uses to help us understand. After all, if an aspect of existence appears easily decipherable, then we are likely missing something. Many things are simple on the surface. But almost no experience or concept is simple once its superficial cardboard packaging is unwrapped. Literature encourages this unwrapping through intricate metaphors, narratives, and stylistic structures that prompt aporia. It reminds us, in Pope’s terms, to keep drinking from the fount of all wisdom, rather than resting in the comfort of small sips:

A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again…
While from the bounded level of our mind
Short views we take, nor see the lengths behind;
But more advanc’d, behold with strange surprise
New distant scenes of endless science rise![66]

Essay on Criticism, Alexander Pope

Through the complexity of literature, we can open the floodgates to seeing far beyond our own position. Rather than taking the limits of our own imaginations for the limits of the world, we can develop a variety of kaleidoscopic lenses that provide new insights into this enigmatic existence. Furthermore, the questioning prompted by aporia is valuable in and of itself:

“But to stand in the midst of this rerum concordia discors [discordant unity of things] and of this whole marvelous uncertainty and rich ambiguity of existence without questioning, without trembling with the craving and the rapture of such questioning, … [is] contemptible…This is my type of injustice.”[67]

Friedrich Nietzsche

Here, Nietzsche paints the recognition of the immense complexity and contradiction of reality (the discordant unity), and our subsequent response to this aporia, as an intrinsic value. To stare into the “marvelous uncertainty” and dive into the abyss to seek an answer is not some means to an end but an activity that is its own justification.

To summarize, aporetic fiction reveals an irresolvable puzzle. The reader is left at a loss, which leads to a recognition of ignorance. This recognition fosters a creative space for exploring new ways-of-viewing, lenses, or possibilities. The reader is then prompted to acquire knowledge to fill in the gaps, to develop skills and epistemic virtues, or to create their own authentic identity and make existential choices. Ultimately it becomes clear that aporetic literature can lead us to certain forms of learning. Even after the literature is returned to the shelf, its ideas linger beyond the pages, and the reader is left with interpretations to construct, decisions to make, conclusions to reach, and actions to take.

Appendix

A. The aporetic position on moral persuasion

My position goes beyond, and is distinct from, the established views in the philosophy of imagination: optimism, fidelity, clarificationism, enhancement, and the deflationary position.[68] Alternatively, under Noël Carroll’s outline of the three broad approaches to the puzzle of moral persuasion,[69] the aporetic approach constitutes a fourth approach, distinct from the knowledge approach (the arts improve our knowledge of moral truths), the acquaintance approach (the arts acquaint us with novel perspectives), and the cultivation approach (the arts refine our existing moral positions and skills). The aporetic approach is somewhat similar to acquaintance, but it has some distinguishing features discussed in this paper.

B. Plato, art, tragedy, and literature

Example of contradiction in Plato: in the Gorgias, Socrates tells Polus that he is not a politician: “I’m not one of the politicians. Last year I was elected to the Council by lot, and when our tribe was presiding and I had to call for a vote, I came in for a laugh. I did not know how to do it” (473e-474a). However, later in the same dialogue, Socrates contradicts this assertion: “I believe that I’m one of the few Athenians…to take up the true political craft and practice true politics. This is because the speeches I make on each occasion do not aim at gratification, but at what’s best” (521d-e). Through imagining possibilities that contradict the reader’s everyday frameworks, authors expose the limitations and implications of the maxims the reader lives by.

However, I seek to move beyond Plato. As Nietzsche describes in the Birth of Tragedy, Plato, after all, criticized tragic art because it did not “tell the truth” and failed to morally persuade: “Plato, he reckoned it among the seductive arts which only represent the agreeable, not the useful, and hence he required of his disciples abstinence and strict separation from such unphilosophical allurements; with such success that the youthful tragic poet Plato first of all burned his poems to be able to become a scholar of Socrates.”[70] This is in itself a tragedy and discounts the power of poetry. However, Nietzsche recognizes that “though there can be no doubt whatever that the most immediate effect of the Socratic impulse tended to the dissolution of Dionysian tragedy, yet a profound experience of Socrates’ own life compels us to ask whether there is necessarily only an antipodal relation between Socratism and art, and whether the birth of an “artistic Socrates” is in general something contradictory in itself.”[71]

C. Fiction and Empathy

Multiple studies and replications have found that, even after accounting for other variables, fiction exposure predicts performance on empathy tasks.[72] As empathy is a “multi-level construct extending from simple forms of emotion contagion to complex forms of cognitive perspective taking,” different types of literature may promote different forms of empathy.[73] Some fictions activate the frontal lobe areas that are associated with theory of mind and conceptual understanding of others, while some fictions activate the fundamental limbic structures that are associated with the visceral experience of compassion or feeling-with the other.

Currie also claims that “fiction’s supposed capacity to enlarge empathy would be a good thing only if it led to prosocial behavior,” which disregards the possibility that empathy is intrinsically valuable to the individual. We often treat empathy as valuable in and of itself – if someone empathizes with us, we express appreciation or gratitude, and if they fail to empathize, we express critique or negative judgement. We judge them not for specific behaviors, but for their expression of empathy (often through words). Of course, language is a behavior, but in this essay Currie does not consider the possibility that empathy produced by engagement with fiction can promote more empathetic language—which seems to be one of our primary metrics of someone’s empathy.

However, empathy might be of intrinsic value from a first-person perspective. I place intrinsic value on having more empathy for others because (a) it allows me to understand them more fully, (b) it somewhat bridges the abyss between subject and Other. These are non-behavioral benefits. Currie’s focus on behavior seems somewhat myopic.

Currie may respond that empathy is valuable because it causes people to act with consideration of others. We like it when others ‘hear us out’ with empathy because it shows concern or other attached values like caring or understanding. This is a practical concern—I want you to understand me or accept me. The possibilities raised above are all still prosocial behaviors that benefit the other, oneself, and society.

D. Examples of Aporetic Works

Crime and Punishment “narrates the mental agony and moral dilemmas of Rodion Raskolnikov.”[74] In a conversation with Dunya, Raskolnikov describes his dilemma: “you come to a certain limit and if you do not overstep it, you will be unhappy, but if you do overstep it, perhaps you will be even more unhappy.” In this work, Dostoevsky recognizes that his characters “give rise to unresolved conflicts, that there is no higher harmony into which they are subsumed.”[75] For example, Raskolnikov describes his understanding of his murder in this passage:

“But how did I murder her? Is that how men do murders? Do men go to commit a murder as I went then? I will tell you some day how I went! Did I murder the old woman? I murdered myself, not her! I crushed myself once for all, for ever.… But it was the devil that killed that old woman, not I. Enough, enough, Sonia, enough! Let me be!”[76]

Crime and Punishment, Fyodor Dostoevsky

While it may be true that Dostoevsky is not painting an accurate picture of human psychology here, we still learn from this passage. We learn that one who murders might potentially feel as if they murdered themselves – even if this is not how most or even any humans do feel in actuality. We gain another potential scenario. One could object that perhaps this is not even a potential way a human might feel. However, we still learn even in that case – we learn about the limits of the human psyche, the ways in which we are emotionally or psychologically constrained, and we gain an imagined possibility of what it might be like if we were not so constrained. This also guides our emotional responses to scenarios. Furthermore, we understand the perspective of Raskolnikov, and what led him to murder: “When the reader comes to feel understanding regarding a character’s wrongdoing, she is also forced into a realization that her immediate affective response to the wrongdoing…is morally arrogant. The effect is humbling.”[77]

Books like Ulysses are “difficult and resistant to comprehension,” as the “puckish, rebellious” Joyce creates complexities to “push to the limits—and beyond—the brain’s powers of integration.”[78]

E. Critique of Currie’s View of Fiction

This paper partially constitutes a critique of Currie’s view of fiction in “Cracks in the Glass” and “Imagination and Learning,” where he argues for a somewhat deflationary position on learning from literature. In his view, fiction is at least as likely to “generate an illusion of learning” and “spread ignorance and error” as it is to produce genuine learning.[79] Currie views “great literature as epistemic traps rather than fonts of learning.”[80] I agree with Currie that literature is not necessarily effective at encouraging understanding of or behavioral adherence to any particular moral or epistemic framework. However, for aporetic literature, this is not a flaw, but a feature. Literature that leans in the aporia-generating direction creates learning by disrupting the reader’s equanimity, challenging their status quo, and encouraging doubt about their current beliefs and behaviors. Increasing uncertainty is a benefit of literature and even the source of our learning.

Currie’s exposition on the cognitive biases ingrained in literature is insightful, and it is true that our interpretation of literature is tainted by these biases. But I argue that literature itself is a method of revealing these cognitive biases and allowing us to understand the flaws of our own perspective. It does this by exposing logical inconsistencies, revealing the limits of our perspective and our imaginative horizons, and creating narrative distance that bypasses the mechanisms which normally protect our established beliefs from cognitive dissonance. By creating narrative distance, literature overcomes the provincial biases of our limited imaginations. Narrative distance is “the cognitive or emotional space afforded by indirect communication that invites listeners to make sense of content.”[81] In fiction, the reader is given room to reflect, accept, reject, and decide. In contrast, lectures, nonfiction, and other direct forms of communication strike straight at the reader’s mind, and “the poor listener, denied any room to say No is thereby denied the room to say Yes.”[82] Aporetic literature does not provoke as much defensiveness because it does not assault the reader head-on; instead, it guides readers into an aporia where they can set their prejudices aside. The strategic construction of distance in literature can have transformative effects where direct communication would only reinforce biases.

Through complex language and nuanced narrative that distances the reader from their normal habits of thought, aporetic literature can instigate profound insights and create transformative experiences. The paragons of the aporetic genre embed enough ambiguity in their stories to force the reader herself to make sense of the story; the burden of deciding the meaning of the text is placed on the reader, not the writer. As the text does not rigorously delineate concepts, it gives the reader almost nothing to work with, and thus the reader is forced to give to this “airy nothing a local habitation and a name; such tricks hath strong imagination.”[83] As the reader is required to actively engage with the text in this way, aporetic literature makes the reader more likely to reflect and makes her more open to making personal changes.

Even if the reader doesn’t want to learn anything, literature massages the mind, slowly easing the reader into an alternative world. Just as a frog is best boiled slowly rather than tossed straight into sizzling broth, massaging the brain rather than confronting it directly is the best way to overcome cognitive biases. In fact, research in cognitive science suggests that fiction prompts a “less critical approach to the material,” and readers “relax their critical and evaluative standards when transported into a story.”[84] Reading fiction has its greatest impact when the reader approaches the text experientially rather than rationally, critically, or with the purpose of extracting information.[85] As Currie argues, fiction’s power may activate cognitive biases, and it may lead to false beliefs. However, it also has the benefit of creating narrative distance, in which the reader can escape their limited perspective and explore seeing the world through a different lens, a different set of cognitive biases.

Ultimately, I find Currie’s view to be a solid starting point, but it is sabotaged by a set of enthymematic assumptions. Specifically, it seems that, at least in these two papers, he presumes (a) that literature of all genres functions in essentially the same way, (b) that the reader subscribes to some form of realism in which they are seeking to acquire morally or epistemically true beliefs from the literature, and (c) that learning is essentially the accumulation of knowledge and skills. Once these presumptions are revealed and refuted, it becomes clear that aporetic literature can produce learning.

G. An Objection to My View of Learning

In this view, can learning ever be wrong? If someone’s paradigm shift or experience of aporia leads them to a “wrong” answer, is that learning? If learning is solely subjective knowing, would it constitute learning if they learn wrong things – e.g., after reading The Crucible, I conclude that it is fine to throw friends under the bus for my own sake, or after reading Curious George, I conclude that monkeys are better suited to city life than most people. Would this be learning under the aporetic model?

First, the concept of a “wrong answer” does not apply to self-learning. I cannot be ‘wrong’ about my own views in a straightforward sense. Rather, I can have an insufficient understanding of myself, I can fail to develop a perspective on a particular topic, I can misunderstand my actual motivations and desires, I can develop an inauthentic identity, or I can behave in ways that contradict my second-order desires and larger goals. However, it is incoherent to say that I am “wrong” in the sense that I fail to correspond to some objective, correct, or right version of myself. This self does not exist, or if it does, I do not have access to it or it exists only in my imagination. Furthermore, as Sartre describes, as a subject with existential freedom, I am constantly aware that all of the standards I create to evaluate myself are in fact created by myself. If I use standards created by others, I choose to use them; others can coerce or incentivize me to use particular standards, but they cannot force me to evaluate my own thoughts and behaviors in any particular way. Whether or not one has “learned” about oneself can only be judged by oneself. This makes it more difficult to determine whether one has learned, but as Kierkegaard says, the purpose is to create difficulties everywhere.

The author could intend a specific perspective in their work and may seek to promote a specific moral framework or worldview. They may have structured the plot and characters in order to support this worldview. Therefore, it may not be the author’s intent to promote aporia, but my argument is that this is the effect of a specific kind of literature—aporetic literature. The purpose of aporetic literature is to get audience members to think about the issue and implicitly lead them to these questioning states.

H. Further Unanswered Questions

  • How does learning occur in aporetic literature? What is the mechanism or process?
    • Does there have to be intent behind it?
    • Is the aporetic view just one extra step to the intended answer?
  • What causes failure to learn? Is it the fault of the subject? How could the subject be at fault, since we are defining learning based on the subject?
  • Does learning from aporetic literature just have the requirement of entering the aporetic state? Or is there something after the aporia that the reader must pursue?
    • Is aporia intrinsically valuable or only valuable for secondary purposes?
  • Is aporetic learning defined by introspection? What is the cognitive model of aporia?

I. Text-based literature vs images

Textual, narrative fiction of the paradigmatic kind discussed in this paper has unique advantages over other forms of art in moral persuasion. Literature cannot literally represent the world. It can only paint images and ideas in the imagination. Fiction gestures at a world beyond the reality we perceive, creating a sense of transcendence and encouraging readers to question their immanent surroundings. Fiction can also generate a contradiction between (a) my immanent reality and my physical perceptions and (b) my sense of transcendence and imaginative view of a world beyond my own perspective. This contradiction generates aporia, and it makes me more likely to accept the possibility that my perspective is limited and perhaps there is something beyond.

As morality is not a landscape or object we perceive in occurrent reality, it is in a sense imaginary. Text encourages imagination and thus allows readers to more easily conceptualize a “realm of morals” or a moral law that exists in a transcendent sense. On the other hand, representational image-based art (e.g. photography, non-animated film, animations that track reality) generally encourages us to take reality as a given and to reject the transcendent. Thus, images are not as effective as text-based narratives in moral persuasion or moral learning, as they do not activate the imagination. Narrative fiction is consumed through imagination; visual art and images are consumed through perception.

The media theorist Vilém Flusser argued that visual media present the world ‘as it is,’ when inevitably these media are representations of the artist’s view: “Flusser classifies the different media in three categories: traditional images, texts and technical images. Each of these media are created by man as an explanation of the world in order to facilitate his orientation in this world. Yet, each medium is possessed by the same sly dialectics: instead of representing the world, media present the world as it is perceived by them…instead of representing the world, they [the images] obscure it until human beings finally become a function of the images they create.”[86] Images mediate between the world and human beings, and “therefore images are needed to make [the world] comprehensible.”[87] Aporetic texts do not claim to make the world comprehensible; they rather reveal the incomprehensibility of the world.

J. Other Notes

Heidegger might say that through literature we are temporarily freed from our own Das Man and we can imagine another Das Man, allowing us to glimpse another meaning-structure and another world of possibility.

What makes Socrates a radically effective mentor of philosophy is also precisely what makes him an “abject failure” by modern standards, with their emphasis on “formalizable, repeatable data points representing operational knowledge, skill sets, and material mastery.”[88]

Aporetic literature is in line with Nietzsche’s conception of imagination as Urvermögen menschlicher Phantasie [‘the primal faculty of human fantasy’], through which we spontaneously try and express the way we perceive the world, making each individual into an “artistically creating subject.”[89]

Kierkegaard did not seek to lead his reader to any specific judgement, for after all, “what he judges is not in my power.”[90] Ultimately, “Any attempt on Kierkegaard’s part to control or make decisions for the reader would invalidate his entire authorship. Rather, his task is simply presenting his metaphors before withdrawing from the reader to allow the reader to accept or reject the message of the metaphor.”[91] As he asks in his journals, “Have I the right to use my art in order to win over a person, is it not still a mode of deception? … When he sees me moved, inspired, etc., he accepts my view, consequently for a reason entirely different than mine, and an unsound reason.”[92]

Through his complex, layered, aporia-promoting textual maneuvers Kierkegaard expressed his ideas while “retaining an ironic distance from the explicitly stated views.”[93] The lack of direct communication of information in aporetic literature does not originate in a paucity of information to communicate. As Kierkegaard writes, in most communication, “there is no lack of information…something else is lacking, and this is a something which the one cannot directly communicate to the other.”[94] Narrative fiction is a way of communicating the incommunicable: it does not state these ineffable things directly, but prompts readers to discover them for themselves. In other words, “the world grows stranger as we stare / with vortices of maddening change / How understand what we unbare / as through the ragged scene we range? … the gap is widening betwixt / reality and the minds of men.”[95]

In fact, “the ability of uniting opposing qualities into distinctive, socially powerful and coherent patterns, shapes and forms is the hallmark of any creative society.”[96]

As Sartre argues, our “emotional reactions to the irreal are freer because they are not confronted with the same constraints and resistances we encounter in reality.”[97]

Sources

  1. Cooper, John M., and Douglas S. Hutchinson, eds. Plato: Complete Works. Hackett Publishing, 1997. Meno 84a-c.
  2. Cooper and Hutchinson, Plato: complete works, Meno 84b.
  3. Brendan, Thomas. “Socrates 1.4 – Aporia And The Wisdom Of Emptiness.” 2013. Thereitis.Org. http://thereitis.org/socrates-1-4-aporia-and-the-wisdom-of-emptiness/.
  4. Friend, Stacie (2014) Believing in stories. In: Currie, G. and Kieran, M. and Meskin, A. and Robson, J. (eds.) Aesthetics and the Sciences of Mind. Oxford, UK: Oxford University Press. ISBN 9780199669639
  5. Liao, Shen‐yi. “Moral persuasion and the diversity of fictions.” Pacific Philosophical Quarterly 94, no. 3 (2013): 269-289. Pg. 2.
  6. Armstrong, 50.
  7. As defined by Liao.
  8. Currie, Greg. “Imagination and Learning.” In Kind, Routledge Handbook of the Philosophy of Imagination, 407.
  9. Paul, Laurie Ann. Transformative experience. OUP Oxford, 2014.
  10. Boutros, Victor. “Spelunking with Socrates: A Study of Socratic Pedagogy in Plato’s Republic.” Paideia: Journal of the 20th World Philosophy Congress. 10 Aug 1998. Web. 2 Feb 2018.
  11. Plato, Theaetetus, 150d.
  12. Plato, The Meno, 25a.
  13. Plato, The Republic, 515a.
  14. Confucius. The Analects of Confucius: A Philosophical Translation. New York: Ballantine Books, 1999. Print.
  15. Garff, Joakim, Peder Jothen, and James Rovira. “The Moravian Origins of Kierkegaard’s and Blake’s Socratic Literature.” Kierkegaard, Literature, and the Arts. Northwestern University Press, 2018. Pg. 239-256. Pg. 245.
  16. Danto, Arthur C. Nietzsche as philosopher. Columbia University Press, 1965. Pg. 39.
  17. Kierkegaard, Søren. Concluding unscientific postscript. Princeton University Press, 2019. Pg. 186-187.
  18. Dostoyevsky, Fyodor. The Brothers Karamazov. Penguin, 2015. Pg. 153.
  19. Heller, Joseph. Catch-22: a novel. Vol. 4. Simon and Schuster, 1999.
  20. Shakespeare, William. Hamlet. Vol. 22. Cassell, limited, 1889. Act 3, Scene 1, Pg. 3.
  21. Currie, Gregory. “Cracks in the glass: fiction, imagination and moral learning.” University of York.
  22. Currie, 10.
  23. Armstrong, 48.
  24. Armstrong, 90.
  25. Armstrong, 90.
  26. Armstrong, 50.
  27. Kant, Critique of judgement, 62.
  28. Ginsborg, Hannah. “Kant’s Aesthetics and Teleology.” The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), Edward N. Zalta (ed.).
  29. Dickinson, Emily. The Poems of Emily Dickinson: Reading Edition. Belknap Press, 2005.
  30. Armstrong, 47.
  31. Shakespeare, A Midsummer Night’s Dream, act 5.
  32. Byron, Baron George Gordon. The poetical works of Lord Byron. Vol. 2. J. Murray, 1860.
  33. Walton, Kendall L. Mimesis as make-believe: On the foundations of the representational arts. Harvard University Press, 1990. Pg. 35-40.
  34. Shklovsky, Viktor. “Art as technique.” Literary theory: An anthology (1917): 15-21.
  35. Nin, Anaïs. The novel of the future. Swallow Press, 2014.
  36. Golding, William. Lord of the Flies. Penguin, 1987.
  37. Plath, Sylvia. The bell jar. Faber & Faber, 2008. Pg. 77.
  38. Kierkegaard, Søren. Repetition: An Essay in Experimental Psychology. Trans. Walter Lowrie. (1964). Pg. 4.
  39. Berthold-Bond, 62.
  40. Whitman, Walt. Song of Myself.
  41. Nietzsche, Friedrich Wilhelm. The gay science: With a prelude in German rhymes and an appendix of songs. Vol. 985. Vintage, 1974. Pg. 335.
  42. Nietzsche, Friedrich. “Schopenhauer as Educator.” Untimely Meditations (1874). Pg. 129.
  43. Berthold-Bond, Daniel. “A Kierkegaardian critique of Heidegger’s concept of authenticity.” Man and World 24, no. 2 (1991): 119-142.
  44. Nietzsche, Friedrich, Giorgio Colli, and Mazzino Montinari. “Kritische Studienausgabe (KSA).” Org. Giorgio (1988). 9:7[213], p. 361.
  45. Nietzsche, “Schopenhauer as Educator,” 2.
  46. Nietzsche, Friedrich Wilhelm, and Douglas Smith. 2000. The birth of tragedy. Oxford: Oxford University Press. Ch. viii, p. 58.
  47. Nietzsche, “Schopenhauer as Educator,” 127.
  48. Berthold-Bond, pg. 142.
  49. Nietzsche, Friedrich. On the advantage and disadvantage of history for life. Hackett Publishing, 1980. Pg. 123.
  50. Sartre, Jean-Paul, and George Joseph Becker. 1948. Anti-Semite and Jew. [New York]: Schocken Books. Pg. 90.
  51. Azadeh, Ghoncheh. “The Value of an Emotional Engagement with Literature.” Aporia (Brigham Young University philosophy journal) vol. 26 no. 1—2016. Pg. 207.
  52. Bradbury, Ray. Fahrenheit 451. Page 58.
  53. Kierkegaard, Søren. The Essential Kierkegaard. Princeton University Press: New Haven, Connecticut 1998. Edited by Howard V. Hong. Page 14.
  54. Kierkegaard, Concluding Unscientific Postscript, 356.
  55. Kierkegaard, 35.
  56. Kierkegaard, 9.
  57. Bradbury, Ray. Fahrenheit 451. New York: Simon and Schuster, 1967. Print. Page 39.
  58. James, William and Cahn, Steven (ed). The Will to Believe: And Other Essays in Popular Philosophy. New York: Longmans, Green, and Co., 1896. Page 15.
  59. Curcio, James. Brian Castro’s fiction: the seductive play of language. Amherst, NY: Cambria Press, 2008. Print. Page 153.
  60. Kierkegaard, Soren. Either/or: A fragment of life. Penguin UK, 2004. From vol. 1, “The ancient tragical motif as reflected in the modern.”
  61. Garff, 254.
  62. Kierkegaard, Søren. The point of view. Vol. 22. Princeton University Press, 1998. Pg. 50.
  63. Garff, Joakim, Peder Jothen, and James Rovira. “The Moravian Origins of Kierkegaard’s and Blake’s Socratic Literature.” Kierkegaard, Literature, and the Arts. Northwestern University Press, 2018. Pg. 239-256. Pg. 239.
  64. Ong, Yi-Ping. “A View of Life: Nietzsche, Kierkegaard, and the Novel.” Philosophy and Literature 33, no. 1 (2009): 167-183. Pg. 173.
  65. Bennett, Benjamin. “Nietzsche’s Idea of Myth: The Birth of Tragedy from the Spirit of Eighteenth-Century Aesthetics.” Publications of the Modern Language Association of America (1979): 420-433. Pg. 429.
  66. Pope, Alexander. Essay on Criticism. 1711. Print. Lines 215 – 232.
  67. Nietzsche, Friedrich. The Gay Science, trans. Walter Kaufmann. Pg. 77.
  68. It is important to note that prima facie all of these approaches (except the deflationary one) are compatible with one another. In a hyper-optimistic view, literature could simultaneously enhance our moral skills (optimism), offer accurate portrayals of the human condition (fidelity), deepen our understanding of ethical cases and reveal our underlying moral positions (clarificationism) and promote aporia, as defended here. I do not take a position on the validity of these other approaches.
  69. Carroll, Noël. “Art and ethical criticism: An overview of recent directions of research.” Ethics 110, no. 2 (2000): 350-387.
  70. Nietzsche, Friedrich Wilhelm, and Douglas Smith. 2000. The birth of tragedy. Oxford: Oxford University Press.
  71. Nietzsche, The Birth of Tragedy, section 14.
  72. Mar, Raymond A., Keith Oatley, and Jordan B. Peterson. “Exploring the link between reading fiction and empathy: Ruling out individual differences and examining outcomes.” Communications 34, no. 4 (2009): 407-428.
  73. Singer, Tania. “The neuronal basis and ontogeny of empathy and mind reading: review of literature and implications for future research.” Neuroscience & Biobehavioral Reviews 30, no. 6 (2006): 855-863.
  74. Prajapati, Abhisarika. “Understanding The Mental Landscape Of The Protagonist Of Crime And Punishment.” International Journal of Innovative Research and Advanced Studies (IJIRAS), Volume 4 Issue 8, August 2017. Pg. 251.
  75. Paris, Bernard. Dostoevsky’s Greatest Characters: A New Approach to “Notes from the Underground,” Crime and Punishment, and The Brothers Karamozov. Springer, 2008. Pg. 55.
  76. Dostoyevsky, Fyodor. Crime and Punishment:(Penguin Classics Deluxe Edition). Penguin, 2015. Ch. 5. Part 4.
  77. Wyle, Eleanor Beth. “How fiction makes us better people: an analytic account of how fiction succeeds in being morally developmental.” PhD diss., 2012.
  78. Armstrong, Paul B. How literature plays with the brain: the neuroscience of reading and art. JHU Press, 2013. Pg. 46.
  79. Currie, Greg. “Cracks in the glass: fiction, imagination and moral learning.” 2014. Pg. 10.
  80. Currie, “Cracks in the glass,” 16.
  81. Taeger, Stephan. “Using narrative distance to invite transformative learning experiences.” Journal of Research in Innovative Teaching & Learning (2019). Pg. 15.
  82. Taeger, pg. 55.
  83. Shakespeare, William, 1564-1616. A Midsummer Night’s Dream. New York: Signet Classic, 1998. Act 5, Scene 1.
  84. Green, Melanie C., Jennifer Garst, and Timothy C. Brock. “The power of fiction: Determinants and boundaries.” The psychology of entertainment media: Blurring the lines between entertainment and persuasion (2004): 161-176. Pg 162.
  85. Prentice, Deborah A., Richard J. Gerrig, and Daniel S. Bailis. “What readers bring to the processing of fictional texts.” Psychonomic Bulletin & Review 4, no. 3 (1997): 416-420.
  86. Ieven, Bram. “How to Orientate Oneself in the World: A General Outline of Flusser’s Theory of Media.” Image & Narrative 3, no. 2 (2003).
  87. Ieven, quoting Flusser 9.
  88. Miller, Paul Allen. “The Repeatable and the Unrepeatable: Žižek and the Future of the Humanities, or: Assessing Socrates.” Symplokē, vol. 17, no. 1-2, 2009, pp. 7–25. JSTOR. N.d. Web. 5 Feb 2018.
  89. Nietzsche, Friedrich. “On Truth and Lie in an Extramoral Sense 1.” In The continental aesthetics reader, pp. 62-76. Routledge, 2017.
  90. Kierkegaard, The point of view, Pg. 50.
  91. Lorentzen, Jamie. Kierkegaard’s Metaphors. Mercer University Press, 2001. Pg. 28.
  92. Lorentzen, pg. 27.
  93. Ong, “A View of Life: Nietzsche, Kierkegaard, and the Novel,” 178.
  94. Craddock, Fred B. Craddock on the Craft of Preaching. Chalice Press, 2011. Citing Kierkegaard.
  95. Apuleius. The golden ass. Indiana University Press, 1962. Trans. Jack Lindsey. Pg. 5.
  96. Murphy, Peter. The collective imagination: The creative spirit of free societies. Routledge, 2016. Pg. 2.
  97. Sartre, Jean-Paul. The imaginary: A phenomenological psychology of the imagination. Psychology Press, 2004. Pg. 136.
Categories
Essays Philosophy

Calm and the Cataract: Zen and The Antichrist

This paper seeks to explore points of resonance between Nietzsche and Thich Nhat Hanh. At first glance, these two thinkers seem diametrically opposed. One is a German philosopher remembered for electric phrases like “God is dead” and “what does not kill me makes me stronger.” As a young reader described, “he might have been the Devil, but he had better lines than God” (Kamiya). The other is the founder of the Plum Village school of Mahayana Buddhism, known as a constant voice for peace and a teacher of the essential arts of sitting, eating, relaxing, and breathing. Hanh encourages us to immerse ourselves in the simple beauty of the present moment and absorb the “lessons we can learn from the cloud, the water, the wave, the leaf” (42). In contrast, Nietzsche urges us to surpass all small things and seek our own apotheosis, for “man is something that must be overcome” (Zarathustra, 125). One is an unrelenting, intoxicating rush of declarations, the evisceration of all things approved by the consensus of religious and societal authority. The other is a patient and tranquil stream of ever-reassuring ideas for the aspiring Buddhist. They seem almost irreconcilable.

However, further reflection reveals that these two are only as separate as the raging waterfall and the reflective pond. The pond descends into waterfalls and waterfalls feed the pond. Any river carving through tumultuous territory will have points of rapid descent and stretches of calmness. And existence is certainly a tumultuous territory. As Hanh writes, “understanding is like water flowing in a stream” (21). Sometimes the stream rushes and sometimes it settles. Without the blitzing onslaughts of water, sediments would stagnate into a complacency that could never transform landscapes. And without the stillness of the pond, sediments would flurry forever without ever finding rest or becoming fruitful soil.

In the same way, the ideas of Nietzsche and Hanh have an almost symbiotic relationship: we can better understand both by listening to the dialogue between them. Their commonalities include the insight that our experience of reality is illusory and empty, the recognition that life consists of suffering, and a unique synergy between Nietzschean eternal recurrence and Hanh’s concept of interbeing. However, the two have fundamental disagreements on the appropriate response to the suffering and illusions embedded in existence.

Nietzsche on Buddhism

Nietzsche inherited most of his understanding of Buddhism from Schopenhauer, who considered his own philosophy a European relative of Buddhism: “up till 1818, when my work appeared, there was to be found in Europe only a very few accounts of Buddhism” (17). As an early disciple of Schopenhauer, Nietzsche “was predisposed to react to Buddhism in terms of his close reading of Schopenhauer” (Elman). Many Buddhists have disputed Schopenhauer’s comprehension of their religion. It is enough to say that Nietzsche’s knowledge of Buddhism was nowhere near complete: it came secondhand from a Western philosopher whose own understanding was questionable. But there is also evidence that Nietzsche scoured the sparse texts he had available, especially the ancient Sanskrit Upanishads, and he referenced complex Buddhist topics with some awareness of the nuance involved (Bilimoria, 363). Ultimately, my aim is not to trace the genealogy of Buddhist ideas into Nietzsche’s mind. Instead, I will show that these two ways of thinking converge on a few key areas, without delving into the origins of this convergence.

Emptiness

Nietzsche and Buddha both see the transient, illusory, and contingent nature of our experience. Our lives are composed of a dynamic stream of phenomena that lacks any objective basis. Underneath our perceptions there lies only what the Buddhist philosopher Nāgārjuna called Śūnyatā (emptiness) and what Nietzsche called Abgrund (abyss), a void beyond all human categories and abstractions (Moad). (Nāgārjuna had a fascinating conception of emptiness and nothingness that I cannot delve into further here; I highly recommend the essay on Nagarjuna, Nietzsche, and the Strange Looping Trick to learn more.) As Hanh wrote, “emptiness is the ground of everything … This is the true meaning of emptiness. Form does not have a separate existence” (17). Hanh means that all things are “empty of a separate self,” as nothing has an essential core, fundamental reality, or absolute being. Our perceptions are just a migrating flock of fleeting dreams, conceptual constructs, illusions, bubbles, and shadows. Nietzsche writes to the same effect:

Truth is a mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions we have forgotten are illusions; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins. (On Truth and Lie in an Extra-Moral Sense)

Both authors agree that living experience consists of “hanging in dreams.” We are flung into an empty world, and to provide meaning, we hang amongst a series of dreams. Most humans end up immersing themselves in concepts and frameworks that obscure the emptiness. Hanh and Nietzsche encourage us to dive into the abyss.

Throughout their works, both authors urge the reader to avoid self-deception: “We should not imprison ourselves in concepts” (Hanh, 34). Over a lifetime, we are inculcated into the “habit of believing this to be true or false, of asserting or denying” (Will to Power, 524). One symptom of this self-deceiving habit is the obsession with the self and the division between the subject and the external world. Here Nietzsche agrees with the fundamental Buddhist doctrine of anatman (lack of self). He writes against the concept of a transcendental ego: “the ‘subject’ is not something given, it is something added and invented and projected behind what there is” (Will to Power, 481). Surpassing concepts allows us to see the emptiness that permeates life.

Buddhism encourages mindfulness, allowing consciousness to be simply present without engaging in the turbulent label-sticking and concept-making process. The two traditions enrich each other: one may practice meditation as a reliable path into the void while using Nietzsche’s writings as powerful underpinnings for the critical Buddhist concept of emptiness.

Suffering

Furthermore, the first noble truth – that life is suffering – resounds with both Nietzsche and Hanh. Both recognized that suffering is a fundamental feature of human life. And both proposed a similar response: “Don’t throw away your suffering. Touch your suffering. Face it directly, and your joy will become deeper” (Hanh). Nietzsche appreciated that the Buddha did not try to give suffering some artificial moral origin:

“Buddhism, I repeat, is a hundred times more austere, more honest, more objective. It no longer has to justify its pains, its susceptibility to suffering, by interpreting these things in terms of sin—it simply says, as it simply thinks, ‘I suffer’”

Nietzsche, The Antichrist, 23

The Buddha did not try to attach to suffering a glorious and anesthetic story, to affix melodic bells and jangles that might alleviate the pain. For example, the Buddha did not claim that suffering was a consequence of the first sin and the subsequent fall from grace. Instead the Buddha simply described the suffering.

Nietzsche and Buddha both refuse to accept the opulent walled garden of paradise. They venture out to understand suffering, to describe it with honesty and courage, and then to respond to it. Of course, their shared courses eventually diverge, as Buddha sets out upon the Eightfold Path and Nietzsche trailblazes his life-affirming philosophy. But they both begin with the same foundation: the integrity of honestly describing the suffering inherent to the human condition.

Interbeing

Finally, the two agree on interbeing. Hanh begins his discussion of interbeing with this simple declaration: “If you are a poet, you will see clearly that there is a cloud floating in this sheet of paper” (3). The paper is composed of tree pulp, trees arise from a complex interplay of water and carbon, and this cycle relies upon rain from the clouds. The clouds, the tree bark, the rays of the sun, the nutrients that fed the tree, even the axe used to cut the trees – these are all ghosts that metaphorically and literally reside within the paper. As Hanh wrote, “this sheet of paper is, because everything else is…as thin as this sheet of paper is, it contains everything in the universe in it” (4).

The ever-poetic Nietzsche saw this interbeing as well. He constantly praised the person “whose soul is so overfull that he forgets himself, and all things are in him” (Zarathustra, 16). The Buddhist ideal of the bodhisattva and the Nietzschean ideal of the Übermensch both witness the interbeing of all things and forget the self. Furthermore, Nietzsche mirrored Hanh’s description of interbeing:

“Observe,” continued I, “This Moment! From the gateway, This Moment, there runs a long eternal lane backwards: behind us lies an eternity. Must not whatever can run its course of all things, have already run along that lane? Must not whatever can happen of all things have already happened, resulted, and gone by? …. And are not all things closely bound together in such wise that This Moment draws all coming things after it?” (Zarathustra, 126).

And this idea is not isolated in Nietzsche’s thought, but reinforced throughout the oeuvre. Zarathustra later repeated the sentiment that all things inter-are:

“Everything breaks, everything is integrated anew; eternally builds itself the same house of existence. All things separate, all things again greet one another; eternally true to itself remains the ring of existence.”

Nietzsche, Thus Spake Zarathustra, 171

One could be forgiven for assuming that these words were written by a Zen monk.

Interbeing tells us that the separateness of each component of the universe is only a superficial judgement – a product of our habitual categorization. When we look deeper, beyond good and evil, we see that all things inter-are: they all rely upon one another for their existence and are built from one another. Science makes this interbeing more literal and visible: our bodies are made from the remnants of dead stars. Our cars run on the compressed remains of ancient organisms. The water we drink has circled the world countless times, taking up residence in dinosaurs, trees, humans, mushrooms, clouds, pipes, rivers, and every other place we can imagine.

But Nietzsche takes interbeing one step further, beyond description and into the realm of values. Because each “individual” thing is connected to all other things, and the entire universe combines into each moment, when you say yes to one moment you say yes to all moments. If “all things are chained and entwined together,”[1] then we affirm the entire chain when we affirm a single link; we affirm even the process that forged the chain. When a climber reaches a summit and is overwhelmed by sublime beauty and joy, she affirms not only that moment, but everything else inextricably connected to it: the epochs of geology that molded the mountain, the childhood that shaped her personhood and led her to climb, the trillions of organisms that lived, suffered, died, and eventually decomposed into the soil she walked upon. From Hanh’s premise of interbeing, Nietzsche develops eternal recurrence: when we fully embrace a single moment, we embrace all eternity and everything contained in it.

Disagreements

Ultimately this sweet resonance between the philosopher and the monk cannot last. Nietzsche decides that the fundamental disagreements are too much to bear, declaring that “I could become the Buddha of Europe, though frankly I would be the antipode of the Indian Buddha” (Panaïoti). While he agrees that our experiences are just illusory projections of the mind, Nietzsche disagrees on the response to this emptiness. The Buddha offers a path to enlightenment, a state of awareness that transcends the void:

“the state where creations (phenomenal illusions) cease to arise through their understanding of extinction and creation. All, now having their mind silenced, awakened to the wisdom-sea of prajna on the nature of the void (as it is within the silent void that the inherent Self-Wisdom manifests).” (Vajrasamadhi Sutra)

Nietzsche seeks no such transcendence. Instead, he proposes immanence, the affirmation of the illusory and the void: his philosophy is “inverted Platonism: the further it is from actual reality, the purer, more beautiful, and better it becomes. Living in illusion as the ideal” (Conway, 404). We can see the pinnacles of this life-embracing immanence in myth, metaphor, and the artistic play of the creative. This illusion should be conscious, beautiful, and intentional, a myth that wraps every piece of existence into its narrative and does not negate even the tiniest fragment. Art, for example, is the cult of the beautiful illusion that allows us to endure “the insight into the general untruth and falsity of things now given us by science” (Gay Science, 107). On this point, Hanh and Nietzsche are far from being on the same page. Nietzsche has wandered away from Buddhist meditative clarity and into ecstatic illusion.

On suffering, as well, the two diverge at the same crossroads: between transcendence (moving beyond suffering) and immanence (embracing suffering). Buddhism encourages us to surpass our desires and move beyond dukkha, as the third noble truth is the “cessation of suffering: it is the remainderless fading away and cessation of that same craving, the giving up and relinquishing of it, freedom from it, non-reliance on it” (Laumakis, 48). On the other hand, Nietzsche encourages us to affirm all aspects of the human condition, including and especially suffering. He issues an injunction that seems to be aimed directly at Siddhartha Gautama:

“Let us beware of waking the dead and disturbing these living coffins! They encounter a sick man or an old man or a corpse and immediately they say, ‘Life is refuted.’ But only they themselves are refuted, and their eyes, which see only this one face of existence” (Thus Spake Zarathustra, 1).

While Nietzsche loves Buddhism for treating suffering honestly rather than moralistically, he argues that the Buddha did not go far enough. We should not just observe the monster of suffering but embrace it: we should face the “great challenge of looking at this monstrous world with an unswerving gaze and declaring it ‘beautiful’ rather than ‘evil’” (Loy, 37). This is the concept of amor fati: love of one’s fate, despite its tragedies.

Nietzsche condemns Buddhism as merely the “consolation of weary spirits longing for a dreamless sleep” (in nirvana) rather than a courageous re-affirmation of existence. When we disengage from our cravings, he argues, we disengage from life itself. He encourages us to become more attached to reality, condemning detachment as life-negating and vitality-draining. Instead of escaping suffering we should double down on it.

However, many thinkers argue that Nietzsche misunderstands Buddhism (Loy; Moad; Hongladarom; Bilimoria). First, dukkha does not just mean the experience of suffering, but the existential incompleteness and anguish that come from spiritual ignorance. From a Nietzschean perspective, this incompleteness might be interpreted as the inability or unwillingness to embrace suffering, and Buddhism could be re-evaluated as a method of embracing suffering. This seems to reconcile the two views. Second, Buddhism does not promote inaction or detachment in response to suffering – after all, the Buddha continued teaching and living an active life for 45 years after enlightenment. His life of teaching, giving, and serving was not an attempt to fulfill an obligation, but a set of actions naturally performed by an enlightened being who is overflowing into the world (Moad). In the same way, Nietzsche advocates overflowing rather than ethics: “senselessness and ugliness seem as it were licensed, in consequence of the overflowing plenitude of procreative, fructifying power, which can convert every desert into a luxuriant orchard” (The Gay Science, 370). This reconciliation seems incomplete, but perhaps it shows that Nietzsche cannot entirely repudiate Buddhism while remaining internally consistent.

Through the dialectic between the philosopher and the Zen monk we can improve our understanding of both thinkers. The simple focus of Hanh’s writings offers a clear lens through which to view the dense, stylistic, and polemical prose of Nietzsche. Both thinkers are seeking a vision of great health that allows one to deal with the emptiness and suffering of existence, although they have substantial disagreements about the path to this ideal state. While they may not be two branches of the same tree, they are certainly trees growing towards the same sun – philosophies with common goals and roots. Both are seeking an outlook that leads to the “most profound enjoyment of the moment” (The Gay Science, 302). This dialogue between Nietzsche and Hanh allows us to explore the conceptual landscape between the two without losing sight of nuance. Both the crashing cataract and the serene estuary are necessary for the river to traverse a complex topography.

Works Cited

Vajrasamadhi Sutra (The Diamond-Absorption Sutra). Trans. into Chinese by Anonym, Northern Liang Dynasty, China; into English by Robert E. Buswell. 4 Jun 2019. <http://www.buddhasutra.com/files/vajrasamadhi_sutra.htm>

Parkes, Graham. Nietzsche and Early Buddhism. Philosophy East and West, Vol. 50, No. 2, 2000, pp. 254–267. Print.

Elman, Benjamin A. Nietzsche and Buddhism. Journal of the History of Ideas, Vol. 44, No. 4. (Oct. – Dec., 1983), pp. 671-686. Print.

Hanh, Thich Nhat. The Heart of Understanding: Commentaries on the Prajñaparamita Heart Sutra. Berkeley, California: Parallax Press, 1998. Print.

Hanh, Thich Nhat. The Heart of the Buddha’s Teaching: Transforming Suffering into Peace, Joy & Liberation: The Four Noble Truths, the Noble Eightfold Path, and Other Basic Buddhist Teachings. New York: Broadway Books, 1999. Print.

Loy, David. Beyond good and evil? A Buddhist critique of Nietzsche. Asian Philosophy Vol. 6, No. 1, March 1996. Print. Pg. 37-58.

Kamiya, Gary. “Bookend; Falling Out with Superman.” The New York Times. 23 Jan 2000. Web. Accessed 2 Jun 2019.

Laumakis, Stephen J. An Introduction to Buddhist Philosophy. Cambridge University Press, Feb 21, 2008. Print.

Moad, Omar. Dukkha, Inaction, and Nirvana: Suffering, Weariness, and Death? A look at Nietzsche’s Criticisms of Buddhist Philosophy. The Philosopher, Volume LXXXXII, No. 1. Print.

Nietzsche, Friedrich Wilhelm. The Anti-Christ. R. J. Hollingdale (Trans and Ed.) (Harmondsworth: Penguin), No. 20. Print.

Nietzsche, Friedrich Wilhelm. The Gay Science. Trans. Walter Kaufmann. Random House, Vintage Books: New York, Mar 1974. Print.

Nietzsche, Friedrich Wilhelm. Schopenhauer as Educator. Chicago: Regenery, 1965. Print.

Nietzsche, Friedrich Wilhelm. Thus Spake Zarathustra. Chicago: Regenery, 1965. Print.

Nietzsche, Friedrich Wilhelm, and Taylor Carman. On Truth and Untruth: Selected Writings. Harper Perennial, 2010. Print.

Panaïoti, Antoine. Nietzsche and Buddhist Philosophy. Cambridge University Press, 2014. Print.

Hongladarom, S. (2011). The Overman and the Arahant : Models of Human Perfection in Nietzsche and Buddhism. Asian Philosophy, 21(1), 53–69.

Bilimoria, P. (2008). Nietzsche as “Europe’s Buddha” and “Asia’s superman.” Sophia, 47(3), 359–376. Print.

  1. Thus Spake Zarathustra, pg. 333: “Did you ever say Yes to one joy? O my friends, then you said Yes to all woe as well. All things are chained and entwined together, all things are in love; If you ever wanted one moment twice, if you ever said: ‘You please me, happiness, instant, moment!’ then you wanted everything to return.”