Categories
Philosophy Politics

Reclaiming Slurs through Conceptual Engineering

Images generated by MidJourney AI, based on a prompt about a conceptual hammer destroying an ideological structure.

Introduction

Ideology can leave us “stuck in a cage, imprisoned among all sorts of terrible concepts.”[1] Slurs are linked to an especially harmful kind of concept. Successfully reclaiming slur terms requires understanding and rejecting these concepts. Linguistic reclamation of slur terms, when combined with critique of the underlying concept, can put an oppressive weapon out of action and help liberate us from pernicious conceptual cages.

My analysis will not focus on the semantic theory of slurs or slur reclamation. Constructing a natural language semantics of slurs is primarily a matter for empirical linguistic research, not philosophy. Indeed, Cappelen (2017) argues that semantics should be left to specialists with the expertise to conduct empirical study and formal analysis of linguistic phenomena.[2] Of course, findings in linguistics will be very relevant for philosophers, and it is certainly within the purview of philosophy to interpret these findings and investigate the theoretical foundations of linguistics. The substantial philosophical literature on the semantics of slurs also demonstrates that philosophers can use interdisciplinary approaches to make meaningful progress in semantics. Developing theoretical semantic accounts of slurs has proven valuable. However, validating these theories will require empirical study of linguistic patterns in natural language use. Then, we can evaluate how operationalized forms of these semantic theories can explain the observed patterns. Ultimately, settling the differences between semantic theories of slurs requires linguistic research.

However, conceptual engineering and conceptual ethics are matters for philosophy. The task of philosophers is not just to describe linguistic tools, but to assess the representational features of these tools and find ways to fix their defective or harmful aspects. Therefore, instead of conducting descriptive semantics, this paper focuses on the concepts underpinning slur terms. Section 1 describes the concepts connected to slurs and explicates their normative flaws. Section 2 argues that fully successful reclamations of slurs must involve conceptual engineering, not just lexical change. Finally, section 3 addresses some important objections to this conceptual view of slurs.

1. Slurring Concepts

Slur lexical items are connected to underlying concepts (representational devices), which we can call slurring concepts. These concepts are defective and harmful in virtue of their key characteristics: they are thick, essentializing, reactive, and subordinating.

First, slurring concepts are thick concepts, with both descriptive and normative features. Slurs make a negative evaluation of some social group.

Second, slurring concepts are essentializing. As Neufeld describes, slurs designate an essence that is causally connected to negative stereotypical features of some social group.[3] This essence is a failed natural kind. For example, the N-word posits a “blackness essence” that is supposed to be causally responsible for negative features of Black people. The evidence for this semantic view is substantial, as it can explain features of slurs in natural language that other theories do not account for. For instance, it explains a systematic linguistic pattern: slurs are always nouns. This is because nouns are unique lexical devices that predicate things into enduring, essential categories like natural kinds. Neufeld’s theory has many other successful predictions and explanatory benefits. However, our primary concern is not in identifying the correct semantic theory, but in understanding slurring concepts and their defects. It is a sufficient to say that slurs must make use of essentializing concepts to refer to a targeted group in a stable way and to warrant negative inferences about this group.

Essentializing concepts are epistemically flawed ways to describe social groups. Using essentializing concepts for real natural kinds like rock and atom is appropriate. However, social groups like races, religions, and sexual orientations are not immutable essences with strict natural boundaries, and they cannot justify attributing inherent properties to their members. Essentializing social categories produces cognitive mistakes and bad inferences.[4] Furthermore, essentializing concepts have normative harms, as they encourage dehumanization and harmful stereotypes. Treating members of a targeted group as determined by their group membership, without the autonomy of a person, is clearly dehumanizing. Empirical research shows that essentializing concepts, like a biological conception of race, result in increased stereotyping and discrimination.[5] For example, people who endorse an essentializing biomedical concept of mental illness distance themselves more from those seen as mentally ill, perceive them as more dangerous, have lower expectations of their recovery, and show more punitive behavior.[6] Simply making an essentializing concept salient can cause members of the essentialized group to perform worse on various activities, even if the stereotypes associated with the group are neutral or positive.[7] These defects alone are strong reasons to reject the use of essentializing concepts for social groups.

Third, slurring concepts are reactive, as described by Braddon-Mitchell: a reactive concept automatically tokens a reactive representation, which is a representation that shortcuts the belief-desire system and includes a motivation for action.[8] For instance, the reactive concept kike can trigger a representation of Jews that encourages prejudicial actions against them, and includes a negative view of Jews that justifies these actions. Indeed, one study demonstrated that “category representations immediately and automatically activate representations of the related stereotype features.”[9] This makes slurs uniquely dangerous forms of linguistic propaganda, as they can bypass conscious processing to produce discriminatory representations and behaviors.

Finally, slurring concepts are subordinating. They are thick concepts with a specific kind of normative component: a negative evaluation that ranks the targeted group as inferior and legitimates discriminatory behavior toward the group.[10] This represents members of the target group in ways that justify derogating, intimidating, abusing, or oppressing them. Due to their specific features, slurring concepts do not just cause subordination, they constitute subordination. This constitutive claim is surprising: if a representation is just held mentally and does not manifest in any harmful actions, how can it be subordinating?

The act of conceptualizing a social group in an essentializing, negative way creates reactive representations that result in subordinating stereotypes and inferences. Because our social reality is shaped by the way others see us, being surrounded by people who represent you as inferior or subhuman is a kind of subordination itself, even if their representations do not lead to tangible actions. Furthermore, slurring concepts are so closely tied to subordinating effects that it is not sensible to separate this kind of representation from its consequences. Holding a slurring concept leads to unconscious, automatic discriminatory behaviors, and even members of the targeted group experience inhibitions and impaired performance when a slurring concept is salient.[11] Ultimately, whether slurring concepts are constitutive of subordination or merely cause it, the vital point is that they are subordinating.

2. Slur Reclamation as Conceptual Engineering

Reclaiming slurs is often an intentional project carried out by oppressed groups to resist their oppression and to co-opt a tool of subordination for purposes of liberation. Taking ownership of a slur and imbuing it with positive associations is an act of “weapons control” that diminishes the word’s subordinating power, effectively putting the slur out of action.[12] For example, in the 1980s, LGBT activists applied the slur “queer” to themselves in positive and pride-evoking ways, and they were largely successful in changing the word’s connotation.[13] However, I argue that changing a lexical item’s meaning is insufficient for slur reclamation. Lexical change is not an effective form of weapons control because it fails to challenge the most dangerous weapon: the slurring concept remains intact. Fully successful slur reclamation requires conceptual change, and not just linguistic change. The slurring concept connected to the lexical item must be critiqued and dismantled.

2.1 Partial vs. Full Slur Reclamations

How can we explain slur reclamation? Under semantic theories of slurs like Croom’s,[14] one might describe reclamation as the process of adding positive properties to a term that become more salient than any negative properties. This explanation cannot account for slur reclamations that do not change the valence of a term but instead detach it from an essentializing social kind. For example, the term “gypsy” as it is used in the U.S. is disconnected from the Roma social group, but the term is still attached to negative properties and used as a pejorative. At least in the American cultural context, this slur has been de-essentialized – it is no longer linked to an essentializing concept. However, because it still has derogating force, “gypsy” has not been reclaimed.

In contrast, Camp’s perspectival theory holds that regardless of what perspective an individual holds when using a slur, the slur is still connected to a slurring perspective.[15] However, it is empirically clear that slurs can be detached from derogating perspectives through individual and collective linguistic actions. Camp’s theory cannot explain this reclamation without substantial revisions. Regardless, her perspectival approach is insightful in emphasizing that slurs are linked to a near-automatic, integrated way of thinking about a targeted group. Rather than interpreting slurs as signals of allegiance to a somewhat vague ‘perspective,’ we can interpret them as uses of slurring concepts. As a result of the specific features of slurring concepts, their properties are similar to Camp’s perspectives.

Finally, under Neufeld’s account, just as a slur is created when a failed natural kind is causally connected to negative properties, a slur can be unmade when the kind is disconnected from these negative properties. For instance, the reclaimed slur “queer” is still used to refer to roughly the same social kind (people with non-conforming sexual and gender identities), but it is disconnected from negative properties, and instead is even attached to positive properties. In this case, the social kind connected to the term remained the same, but the valence associated with it was neutralized or reversed. In the “gypsy” case discussed above, the opposite occurred in the US – the negative properties of the word remained, while it was disconnected from the essentializing concept (of the Roma as a social kind). Neufeld’s explanation of derogatory variation can explain both kinds of slur reclamation: holding the level of essentialization fixed, more negative slurs are more derogating, while holding the negativity fixed, more essentializing slurs are more derogating. Disconnecting slurs from essentializing concepts and reducing their pejorative force are therefore two ways to carry out reclamation projects.

All of these theories fail to directly account for the importance of confronting the underlying concept in slur reclamation. If a mental representation like a slurring perspective or concept is critical to the meaning and force of a slur, then it follows that complete slur reclamation must fix these mental representations and not merely the lexical item. Indeed, Neufeld holds a meta-semantic view where terms inherit their linguistic meaning from the mental concepts we associate with them.[16] Partial reclamations can occur when a positive or neutral version of the slur term achieves linguistic uptake, or when the lexical item is no longer associated with an essentialized social group. However, this kind of reclamation is limited and insufficient. It only decouples a lexical item from a slurring concept and does not subvert the slurring concept itself. The most dangerous weapon, the slurring concept, remains at large, and will continue to manifest in other lexical items.

Partial reclamations can thereby constitute illusions of change. They play ‘whack-a-mole’ with lexical items while failing to address the root cause. Full reclamation involves not just lexical change, but a successful dismantling of the slurring concept. The importance of the underlying concept means that “ameliorative attempts that focus exclusively on the language used are unlikely to have much success in the long run.”[17] For example, the descriptive term for people with intellectual disabilities has been changed many times, from “idiot” to “moron” to “mentally retarded.” When initially introduced, each was a non-pejorative descriptive term, but all were rapidly adopted as slurs. This shows the insufficiency of merely changing language without critique and rejection of the slurring concept.

2.2 Conceptually Engineering Slurs

Reclaiming slurs therefore requires addressing the slurring concept. One fruitful method for carrying out full reclamation is conceptual engineering: the process of assessing our representational devices, reflecting on how to improve them, and implementing these improvements. As we have already diagnosed the flaws of slurring concepts, how can we go about fixing these representations? One obvious approach is to eliminate the slurring concept entirely. However, this is just elimination, not reclamation. It is also not clear how to eliminate a slurring concept. The characteristic features of slurring concepts give us a few lines of attack. For instance, we can reject the negative normative component of the thick concept and encourage adoption of either a purely descriptive concept (e.g. person of color) or a thick concept with a positive normative component (e.g. queer). However, this approach risks “reinforcing an essentialist construction of the group identity,”[18] as it maintains an essentializing concept of the targeted group. The slur can easily be reactivated and weaponized against its targets by reversing its valence, making this type of reclamation very precarious.

Another possible approach is to reduce the reactivity of slurring concepts. For example, perhaps training people to consciously recognize how slurs prompt automatic reactive representations of the targeted group can curb the impact of reactive concepts. Indeed, there is some evidence that implicit bias training can work to a limited degree.[19] However, this only mitigates the slurring concept’s effects. Additionally, slurring concepts are reactive because they are essentializing. Essentialism about social kinds is what leads to automatic, reactive processing about the groups targeted by slurs.[20] Likewise, attempting to undermine the subordinating force of slurring concepts starts at the end of the process, as it fails to address the features that make these concepts subordinating. Ultimately, all approaches to engineering slurring concepts lead us back to the same source: essentialism.

Disarming and rehabilitating a slurring concept therefore must start by rejecting essentialism. Failing to critique the essentializing concept leaves the conceptual foundations of the slur intact. In this sense, concepts like woman, race, mental illness, and homosexual are proto-slurring concepts. By essentializing a social category, these concepts function to lay the groundwork for slurs, making the essentialized group a target for oppression and subordination. Successful critiques of essentializing concepts can remove the ground that slurs stand upon. For example, Haslanger argues that woman is a failed natural kind used to mark an individual as someone who should occupy a subordinate social position based on purported biological features.[21] Shifting the meaning of “woman” to be more in line with its real social function can unmask this underlying ideology. Instead of conceptualizing womanhood as an essential biological category, we should treat woman as a folk social concept used to subordinate. In the same vein, Appiah critiques the essentializing concept of race, arguing that there is no biological or naturalistic basis for treating races as real categories.[22] Finally, many thinkers including Szasz and Foucault argue that mental illness is a failed natural kind used to justify social exclusion practices.[23] Conceptual engineering projects like these can undermine the essentialist foundations of slurs.

3. The Importance of Social Practice in Slur Reclamation

One objection to anti-essentialist conceptual engineering projects is that partial slur reclamations are successful precisely because they enable positive identification and solidarity within an essentialized group. For example, the N-word is a way for Black people to express solidarity and camaraderie as members of an essentialized and oppressed social category.[24] Rejecting the essentializing race concept could have at least two harmful consequences: (1) it precludes organizing and expressing solidarity along racial lines; (2) it can lead to false consciousness, pretending that the essentialized categories do not continue to have real social effects simply because we have rejected the essentializing concept. However, solidarity does not require essentialism. Instead of treating race as an essential category, one can treat race as a social construction used to target groups for subordination. People within the targeted groups can then express solidarity not as common members of a real natural kind, but as fellow targets of arbitrary social oppression. Indeed, the liberatory, reclaimed form of the N-word does not require treating Blackness as an essential category. The reclamation can reject the essentializing concept while emphasizing the way this concept is still used to oppress and conveying solidarity and resistance amongst members of the targeted group.

However, why try to reclaim slurs at all? Why not introduce a new lexical item to communicate a new, liberating, non-essentializing concept, instead of using a term tainted by being a former slur? It seems paradoxical to intentionally choose a lexical item that one considers deeply flawed. Slur terms might also have direct lexical effects, where the word itself produces negative cognitive reactions even if its meaning is changed.[25] (For example, the word “Hitler” has negative lexical effects regardless of its conceptual content or usage). This gives a prima facie reason to avoid the lexical item. However, there are important reasons why conceptual engineering projects should reclaim the slur word by associating it with a new concept, rather than abandoning it entirely. First, maintaining the original lexical item allows us to put an oppressive weapon out of action, and even to turn it against the oppressors. Once reclaimed, the word no longer has its subordinating power. Instead, it can be used as a vehicle for liberatory, non-essentializing concepts that replace the slurring concept. Second, language has an important role in shaping social reality. Reclaiming terms with preexisting impacts can allow us to ameliorate or even reverse these impacts on social reality, while introducing a new term will require building its social impact from the ground up.[26] The benefits of co-opting slur terms are sufficient to outweigh the costs of lexical effects.

Finally, one especially potent objection to concept-focused slur reclamation projects is that they prioritize changing representations over changing practices. As Táíwò emphasizes, our analysis of propaganda should focus not just on mental representations, but on how these representations influence practice and action.[27] Even if a person does not hold a slurring concept, they can still act upon a public practical premise, treating members of the targeted group in essentializing and subordinating ways. The important feature of slurs is not the concept, but the way these slurs feature in oppressive social structures and license harmful actions. Therefore, it is misguided to emphasize mental representations, and our primary concern in reclamation projects should not be changing concepts. Rather, we should focus on the social structures and practices that give slurring concepts their power. Conceptual engineering is far too abstract and ideal, placing our priorities in the wrong places and failing to recognize the importance of practice. We need reality engineering, not conceptual engineering.

This objection is well-taken, and I agree with Táíwò’s practice-first approach. Any attempt to fully reclaim a slur must coincide with material changes to prevent oppressive practices. However, harmful representations can be oppressive in themselves. Slurring concepts represent their targets as essentially subordinate kinds, and result in oppressive and limiting mindsets. Lifting the blinders of a slurring concept can itself be liberatory. Additionally, conceptual engineering is not mutually exclusive with practical reform, and it can help enable and guide material changes. Furthermore, a key feature of slurring concepts is that they are reactive. This makes slurring concepts action-engendering, as they automatically motivate and encourage discriminatory action. Focusing on the harmful actions associated with a slurring concept treats a symptom, not the underlying conceptual disease. Finally, slurring concepts are integrated within larger oppressive conceptual systems that can be aptly characterized as ideologies. Therefore, reclaiming slurs and critiquing slurring concepts functions as a form of ideology critique. Conceptual engineering can make the essentializing, subordinating ideology more visible, discouraging complacency and false consciousness while promoting actions to resist this ideology.

Conclusion

Dismantling slurring concepts is an essential step in fully successful slur reclamation. This paper emphasizes the critical role of slurring concepts. I began by describing the key features of slurring concepts that enable slurs to serve their harmful function. Then, I argued that full reclamation requires not just lexical change but conceptual engineering, and that rejecting essentializing thinking is the key to disarming slurs. Finally, I addressed some objections and complications in the engineering of slurring concepts. Reclaiming slur terms and critiquing slurring concepts can serve a vital role in critiquing and resisting oppressive ideologies.

Bibliography

Appiah, Kwame Anthony. The ethics of identity. Princeton University Press, 2010.

Bolinger, Renee. “The Language of Mental Illness.” In Routledge Handbook of Social and Political Philosophy of Language, edited by Justin Khoo and Rachel Katharine Sterken. Routledge, forthcoming. PhilArchive copy v1: https://philarchive.org/archive/BOLTLO-7v1

Braddon-Mitchell, David. “Reactive Concepts: Engineering the Concept CONCEPT.” In Conceptual Engineering and Conceptual Ethics. Oxford University Press, 2020.

Camp, Elisabeth. “Slurring perspectives.” Analytic Philosophy 54, no. 3 (2013): 330-349.

Cappelen, Herman, “Why philosophers shouldn’t do semantics,” Review of Philosophy and Psychology 8, no. 4 (2017): 743-762.

Cappelen, Herman. Fixing language: An essay on conceptual engineering. Oxford University Press, 2018.

Carnaghi, Andrea, and Anne Maass. “In-group and out-group perspectives in the use of derogatory group labels: Gay versus fag.” Journal of Language and Social Psychology 26, no. 2 (2007): 142-156.

Croom, Adam M. “Slurs.” Language Sciences 33, no. 3 (2011): 343-358.

Fawaz, Ramzi, and Shanté Paradigm Smalls. “Queers Read This! LGBTQ Literature Now.” GLQ: A Journal of Lesbian and Gay Studies 24, no. 2-3 (2018): 169-187.

Habgood-Coote, Joshua. “Fake news, conceptual engineering, and linguistic resistance: reply to Pepp, Michaelson and Sterken, and Brown.” Inquiry (2020): 1-29.

Herbert, Cassie. “Precarious projects: the performative structure of reclamation.” Language Sciences 52 (2015): 131-138.

Jeshion, Robin. “Pride and Prejudiced: on the Reclamation of Slurs.” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.

Khoo, Justin. “Code words in political discourse.” Philosophical Topics 45, no. 2 (2017): 33-64.

Langton, Rae. “Speech acts and unspeakable acts.” Philosophy & Public Affairs (1993): 293-330.

Maitra, Ishani. “Subordinating speech.” Speech and harm: Controversies over free speech (2012): 94-120.

Neufeld, Eleonore. An essentialist theory of the meaning of slurs. Ann Arbor, MI: Michigan Publishing, University of Michigan Library, 2019.

Nguyen, Hannah-Hanh D., and Ann Marie Ryan. “Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence.” Journal of Applied Psychology 93, no. 6 (2008): 1314.

Podosky, Paul-Mikhail Catapang. “Ideology and normativity: constraints on conceptual engineering.” Inquiry (2018): 1-15.

Pritlove, Cheryl, Clara Juando-Prats, Kari Ala-Leppilampi, and Janet A. Parson. “The good, the bad, and the ugly of implicit bias.” The Lancet 393, no. 10171 (2019): 502-504.

Richard, Mark. “The A-project and the B-project.” In Conceptual Engineering and Conceptual Ethics, edited by Alexis Burgess, Herman Cappelen, and David Plunkett. Oxford University Press, 2020.

Rieger, Sarah. “Facebook to investigate whether anti-Indigenous slur should be added to hate speech guidelines.” CBC News. Oct 24, 2018.

Stanley, Jason. How propaganda works. Princeton University Press, 2015.

Táíwò, Olúfémi O. “The Empire Has No Clothes.” Disputatio 1, no. ahead-of-print (2018).

Táíwò, Olúfẹmi. “Beware of Schools Bearing Gifts.” Public Affairs Quarterly 31, no. 1 (2017): 1-18.

  1. Nietzsche, Friedrich. The Twilight of the Idols. Jovian Press, 2018. Pg. 502.
  2. Cappelen, Herman, “Why philosophers shouldn’t do semantics,” Review of Philosophy and Psychology 8, no. 4 (2017): 743-762.
  3. Neufeld, Eleonore, An essentialist theory of the meaning of slurs, Ann Arbor, MI: Michigan Publishing, University of Michigan Library, 2019.
  4. Wodak, Leslie, and Rhodes, “What a loaded generalization: Generics and social cognition,” (2015).
  5. Prentice and Miller, “Psychological essentialism of human categories,” (2007).
  6. See Haslam (2011), Mehta and Farina (1997), Lam, Salkovskis, and Warwick (2005), Phelan (2005).
  7. Nguyen, Hannah-Hanh D., and Ann Marie Ryan, “Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence,” Journal of applied psychology 93, no. 6 (2008): 1314.
  8. Braddon-Mitchell, “Reactive Concepts,” Conceptual Engineering and Conceptual Ethics (2020): 79.
  9. Neufeld, pg. 21. Quote is from a summary of a study by Carnaghi & Maass (2007).
  10. See Maitra “Subordinating speech,” (2012).
  11. See empirical evidence in Carnaghi and Maass (2007); Nguyen and Ryan (2008).
  12. Jeshion, Robin, “Pride and Prejudiced: on the Reclamation of Slurs,” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.
  13. Fawaz, Ramzi, and Shanté Paradigm Smalls, “Queers Read This! LGBTQ Literature Now,” GLQ: A Journal of Lesbian and Gay Studies 24, no. 2-3 (2018): 169-187.
  14. Croom, Adam M, “Slurs,” Language Sciences 33, no. 3 (2011): 343-358.
  15. Camp, Elisabeth, “Slurring perspectives,” Analytic Philosophy 54, no. 3 (2013): 330-349.
  16. Neufeld, An essentialist theory of the meaning of slurs, pg. 3 (in footnote 8).
  17. Renee Bolinger, “The Language of Mental Illness,” in Justin Khoo & Rachel Katharine Sterken (eds.), Routledge Handbook of Social and Political Philosophy of Language (forthcoming).
  18. Herbert, Cassie, “Precarious projects: the performative structure of reclamation,” Language Sciences 52 (2015): 131-138. Pg. 133.
  19. Pritlove, Cheryl, Clara Juando-Prats, Kari Ala-Leppilampi, and Janet A. Parson, “The good, the bad, and the ugly of implicit bias,” The Lancet 393, no. 10171 (2019): 502-504.
  20. Prentice and Miller (2007).
  21. Sally Haslanger, “Going on, not in the same way,” Conceptual engineering and conceptual ethics (2020): 230.
  22. Kwame Anthony Appiah, The ethics of identity, Princeton University Press, 2010.
  23. See Jeremy Hadfield, “The Conceptual Engineering of Mental Illness,” jeremyhadfield.com (2020) for a review.
  24. Robin Jeshion, “Pride and Prejudiced: on the Reclamation of Slurs,” Grazer Philosophische Studien 97, no. 1 (2020): 106-137.
  25. See Cappelen, “Fixing Language,” (2018).
  26. Herman Cappelen, “Conceptual Engineering: The Master Argument,” Conceptual engineering and conceptual ethics, Oxford University Press (2019).
  27. Olúfémi Táíwò, “The Empire Has No Clothes,” Disputatio 1, no. ahead-of-print (2018).
Categories
Cognitive Science Essays Politics

Two Ways to Promote Positivity and Disrupt Echo Chambers

Social media algorithms are the unseen forces modifying our minds and swaying our societies. Most of us have no idea how they work. We just accept their results. Only a few programmers, product managers, and executives know the full details, and even fewer can change these systems. But they have an immense influence on our world. In 2019, around 72% of adults in the United States were on at least one social media site (Pew Research). These algorithms especially shape our individual mental lives, affecting who and what we are exposed to. Should these algorithms take a stand on which emotions are better than others? Should they, for example, promote joy and compassion above anger? I think the answer is yes. I’ll also argue that these algorithms should disrupt echo chambers by introducing semi-random content into newsfeeds.

Weighing Love above Anger?

Reactions on Facebook (anything beyond a ‘like’) are weighted higher by the newsfeed algorithm. If you react with an emotion to a post, you’re more likely to see similar content than if you simply like it. But right now – as far as we know – the algorithm weights all of the different emotions equally (Hootsuite). That should change! Specifically, anger’s weight should be reduced. Positive emotions should carry more weight.

Should Facebook take a position on which emotions are weighed more highly? Maybe the algorithm should instead weight love, laughter, and care highest, then surprise, then sadness, and finally anger the lowest. Mere likes would stay the lowest of all, as they represent lower engagement. This would make news-feeds more positive, promote better thinking, and maybe reduce divisiveness. It might counteract the dominance of anger on social media.
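
To make this concrete, here’s a minimal sketch of what an emotion-weighted ranking score could look like. The reaction names and weight values are hypothetical illustrations of my proposal; Facebook’s actual parameters are not public.

```python
# A minimal sketch of emotion-weighted feed ranking. The weights below are
# hypothetical illustrations, not Facebook's actual (non-public) parameters.

REACTION_WEIGHTS = {
    "love": 2.0, "laughter": 2.0, "care": 2.0,  # positive emotions weighted highest
    "surprise": 1.5,
    "sadness": 1.2,
    "anger": 1.0,  # demoted relative to the (reportedly equal) current weighting
    "like": 0.5,   # lowest of all: a like represents the least engagement
}

def engagement_score(reaction_counts):
    """Score a post by summing its reactions, weighted by emotional valence."""
    return sum(REACTION_WEIGHTS.get(reaction, 0.0) * count
               for reaction, count in reaction_counts.items())

# An anger-heavy post no longer automatically outranks a joyful one:
angry_post = {"anger": 500, "like": 100}
joyful_post = {"love": 200, "laughter": 150, "like": 100}
print(engagement_score(angry_post))   # 550.0
print(engagement_score(joyful_post))  # 750.0
```

Under weights like these, a post that provokes mostly anger needs far more total engagement to rank as highly as one that provokes love or laughter.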

Social media is already filled with emotional contagion. Social media networks tend to create clusters of people who experience synchronized waves of similar emotions (Coviello et al 2014). Do we have to let anger spread like an unfettered pandemic? Or can we encourage more positive emotions instead?

Right now, anger-producing content spreads most quickly on social media. One study found that angry content is more likely to go viral, followed by joy, while sad or disgust-provoking content results in the most subdued reactions (Fan et al 2013). This rewards fringe content, ‘fake news,’ or stuff that makes people mad – and might explain why Fox News is the top publisher on Facebook by total engagement.

A social media network colored by emotions (from Fan et al). Notice the density & clustering of the red (anger) networks, while green (joy) is more scattered and noisy. This graph also includes black (disgust) and blue (sadness).

Our psychology encourages us to react rapidly to anger and fear, and to share bad news. This has evolutionary advantages: info about potential dangers circulates swiftly. But it also arguably hurts our society and encourages reactive, angry, tribalist, and even hateful thinking. Anger-producing content activates what Kahneman calls System 1 – the fast, instinctive, emotional mode of thought – while suppressing System 2, the more deliberate, slower, and more careful side of the mind. For example, research finds that angry people are more likely to stereotype, rely more on simple heuristics, and base their judgements far more on who delivers a message than on its actual content (Bodenhausen et al 1994). Anger clouds thinking.

“Angry people are not always wise.”

― Jane Austen, Pride and Prejudice

On the other hand, positive emotions make our thinking more creative, integrative, flexible, and open to information (Fredrickson 2003). And these emotions speak for themselves; they just feel better. This also translates into health benefits. People who experience more positive emotions tend to live longer (Abel & Kruger 2010). Meanwhile, anger increases blood pressure and leads to a host of other harmful physiological impacts associated with stress (Groër et al 1994). Positive emotions like compassion enhance the immune system, while anger weakens and inhibits immune reactions (Rein et al 1995). Laughter alone improves immune responses (Brod et al 2014). Promoting more positive emotions on social media could not only improve our thinking; it could also reduce waves of anger-fueled divisiveness and misinformation. On a population level, it could promote health and longevity. It could even slightly strengthen immune systems and enhance our resilience to the current pandemic.

Social media as it exists currently amplifies the voices of the angriest. It doesn’t need to be that way.

Objections to this Change

I’m not sure about this idea. There are lots of complexities to take into account. Any algorithm change would likely have countless unknown and unintended consequences. It’s hard to know what externalities this would create. For example, if sadness is given lower weight, important news about sad events throughout the world (e.g. genocide in Myanmar) will become even more obscure and hidden. Favoring positive emotions may result in ignorance of a different kind. As the positivity-favoring algorithm glosses over or suppresses info associated with negative emotions, we may become less aware of problems in the world.

My response to this is simple. First, yes, I think changing the algorithm will be hard and complex. But this is not an argument against the change. It just means developers and decision-makers who implement the new algorithm need to be careful and forward-looking. The change should be tested thoroughly before it’s rolled out globally. Data scientists should examine the effects of the new algorithm and look for subtle unintended effects. Facebook should also be transparent about the changes and pay close attention to user feedback. My assumption in this post is that Facebook will not botch the changes and will take all these sensible precautions and more.

Some people might argue this change would be too paternalistic. Facebook should not intervene to promote our supposed best interests. Social media networks should not take a position on which human emotions are ‘better.’ However, Facebook is already taking an implicit stance. The structure of social media already favors anger. Accepting the default is taking a tacit stand in favor of an anger-promoting system. It would be impossible for Facebook to be completely neutral on this issue. Any algorithm will inevitably favor some emotions over others. So why not make a stand for more positive emotions? This would not infringe on anyone’s freedoms. If anything, it would liberate us from the restrictive, rationality-undermining, mind-consuming effects of anger.

This intervention is better understood as a ‘nudge.’ In their book Nudge, the economist Richard Thaler and legal scholar Cass Sunstein argued that there are a variety of societal changes we can make that don’t reduce anyone’s freedom but promote better choices. For example, managers of school cafeterias can put healthier foods at eye level, while placing junk food in places that are harder for kids to reach. This doesn’t restrict choice – the kids can still access the junk food. But it influences choice in a positive direction. The authors use the term choice architecture for any system that affects our decisions. Choice architectures inevitably favor some choices over others. For instance, if the junk food is placed at eye level instead, that would encourage more unhealthy choices.

On social media, the choice architecture is the social media feed that presents a range of choices (posts) to interact with. No architecture is neutral, and right now, Facebook’s algorithm favors more anger-promoting choices. Modifying the architecture to favor positive emotions like love and empathy would not infringe on freedoms. It would only nudge our choices in a better direction by presenting more positivity-promoting content. It would even enhance our freedom by preventing our brains from being hijacked by anger.

Archipelagos of Echo Chambers


Online social networks are made up of countless small islands of thought. These are often called echo chambers. The more extreme the thoughts, the wider the gulf that separates them from the world, and the more insular the island becomes. Research shows that political party groupings which are further apart in ideological terms interact less, and that individuals at the extreme ends of the ideological scale are particularly likely to form echo chambers (Bright 2017). For example, on climate change, most people are segregated into “skeptic” or “activist” groups (Williams et al 2015). People within chambers tend to accept & spread clearly false information if it confirms the group’s beliefs (Bessi et al 2015). Social media has almost definitely contributed to today’s extreme ideological polarization.

The borders between chambers are often hard to see from within a chamber. But these borders are psychologically enforced. People engage less with cross-cutting content from outside their echo chamber (Garimella et al 2018). This same study found that people who create more bipartisan, cross-cutting content pay the price of less engagement. And people tend to interact positively with people within their group, while interactions with outsiders are more negative. The people who build rafts & attempt to sail over to neighboring islands are met with either silence or a flurry of arrows. Bridging the gaps between echo chambers is not easy.

Randomness to Disrupt Echoes

I have a very simple suggestion to break up echo chambers: the algorithm should introduce some randomness. This is content that is chosen without input from the normal newsfeed algorithm. It might be semi-randomly selected from people we follow, or even from beyond our limited social networks. This increases novelty and introduces us to content that we wouldn’t expect. It reduces echo chambers by exposing us to content outside our well-curated worlds. It encourages more open and critical thinking. Plus, Facebook’s machine learning algorithms may learn more from our reactions to this novel info and offer better content in the future.
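
To illustrate, here’s a minimal sketch of what this could look like, in the spirit of the epsilon-greedy exploration strategy from recommender systems. The function name and the 10% default are hypothetical choices of mine, not any platform’s real mechanism.

```python
import random

# A minimal sketch of injecting semi-random content into a ranked feed.
# `ranked_feed` and `candidate_pool` are hypothetical stand-ins for the
# algorithm's normal output and a wider pool of posts (e.g. from beyond
# the user's network).

def build_feed(ranked_feed, candidate_pool, epsilon=0.1, seed=None):
    """Replace roughly an `epsilon` fraction of feed slots with random posts."""
    rng = random.Random(seed)
    feed = []
    for post in ranked_feed:
        if candidate_pool and rng.random() < epsilon:
            # Bypass the ranking model entirely for this slot.
            feed.append(rng.choice(candidate_pool))
        else:
            feed.append(post)
    return feed

# Example: on average, about 1 in 10 slots comes from outside the curated feed.
print(build_feed(["post_a", "post_b", "post_c"], ["outside_1", "outside_2"], seed=42))
```

A small epsilon keeps the feed mostly relevant while guaranteeing regular exposure to the unexpected; it’s the same exploration-exploitation tradeoff that recommender systems already manage.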

Randomness would help bridge the divides between information islands on social media. Right now, the only things that interrupt the insularity of these islands are the people who dare to cross the seas between them. Sailing between islands is disincentivized by the fact that people who cross the divides between echo chambers are often spurned or ignored, while people who stay within their islands are rewarded with engagement and shares. Adding a random element to the newsfeed is like adding a spaceship to the social media archipelago, picking up info from one island and dropping it onto another. Like the San tribe in the South African comedy The Gods Must Be Crazy, who have to deal with a Coca-Cola bottle that falls from the sky, we’ll encounter unpredictable content that interferes with our comfortable and restrictive echo chambers.

Categories
Essays Philosophy Politics

Compensating for What? Dworkin, sociology, and mental illness

Introduction: Just Compensation?

“What we seek is some kind of compensation for what we put up with.”

― Haruki Murakami

Who should society compensate? Which differences in outcome does justice require that we rectify? Dworkin argues that a person with handicaps or poor endowments is entitled to compensation, while a person with negative behavioral traits like laziness or impulsivity is not entitled to compensation. To argue for this claim, he draws a distinction between option luck, or the luck involved in deliberate risky decisions made by the individual, and brute luck, “a matter of how risks fall out that are not in that sense deliberate gambles.”[1] Being handicapped by forces out of your control is an example of what Dworkin would call brute luck. As handicaps are due to brute luck and are out of the individual’s control, they deserve some form of compensation. On the other hand, behavioral traits are the result of option luck and therefore do not merit compensation. As he puts it in more colloquial terms, “people should pay the price of the life they have decided to lead.”[2] This is Dworkin’s just compensation principle.

I will argue that this principle does not account for sociological and biological factors that affect our behavioral traits and our decision-making, making it much more difficult to justify only giving compensation to those with handicaps and not to those who suffer due to bad decisions. Dworkin might respond with his caveat that if a person has a behavioral impairment like a severe craving, and the person judges this craving as bad for their life-projects, it ought to be considered a handicap deserving of compensation. However, I will conclude that this caveat fails in the case of mental illness. Ultimately, the just compensation principle is an inadequate way to think about egalitarianism and justice.

Not Just Gambles

Dworkin’s just compensation principle states that our disadvantages in resources due to circumstances outside of our control are worthy of compensation, whereas disadvantages due to our deliberate gambles or lifestyle choices should not be compensated. For instance, if someone is born in poverty and suffers from the long-term effects of malnutrition, they deserve compensation for this brute luck. But if someone decides to spend every waking hour surfing for their first forty years of life, and then ends up with very few marketable skills and is unable to find employment, they do not deserve compensation. Another example of a case undeserving of compensation might be someone who decides to gamble and subsequently loses all their earnings. In these cases, Dworkin argues, the individuals have made deliberative lifestyle choices that resulted in bad option luck and decreased their access to internal and external resources. They have intentionally rolled the dice. These situations are not the result of brute luck, but are consequences of deliberative choices, and therefore do not deserve compensation from society.

How is the distribution of gambling machines determined? It strongly influences your probability of gambling.

However, these lifestyle choices are not as deliberative as Dworkin suggests. Consider the case of the gambler. Imagine a young person on a Navajo reservation decides to start gambling because there is a casino nearby, because her friends gamble and encourage her to participate, and because the local economy depends on the casino. She is also misinformed about gambling, due to cultural norms, lack of education, pervasive advertising, and other situational factors. She loses all her savings in several gambling sprees. A simple generalization of Dworkin’s theory would dictate that she is suffering the consequences of option luck and is not entitled to compensation. But this view ignores the situational factors that drove the person to gambling.

Someone who is born in an area with no casinos and strong cultural norms against gambling, who receives a good education, and has friends who mostly go to colleges and not casinos, is not subject to negative situational factors of comparable strength or frequency. Gambling may not even come to mind as a serious option for this more privileged individual. Therefore, our choices are deeply influenced by the brute luck of being born in a harmful environment. Our brute luck impacts our options and our decisions. Even if gambling is an exercise of option luck, it is arguably still worthy of compensation when someone’s decision to gamble is strongly influenced by brute luck factors outside of their control. In this sense, the gambler’s poor choices which led to bad option luck are an indirect consequence of the brute luck of being born with certain situational factors.

This case is not imaginary. Due to cultural and sociodemographic factors, a person born on a Native American reservation is twice as likely as the average person to engage in pathological gambling.[3] The strong influence of surroundings on behavior is confirmed by studies which find that decision-making processes are profoundly influenced by sociocultural factors outside of our control.[4] And these “brute luck” factors do not just influence minor decisions, but shape our fundamental decisions about life projects, goals, and lifestyles. For example, a person is far more likely to decide to marry at a young age if they were raised Mormon in Utah Valley than if they had a secular childhood in New York City. This weakens Dworkin’s case that losses due to “deliberative gambles” or lifestyle choices should not be compensated, while losses due to brute luck should be compensated. Apparent choices are profoundly shaped by brute luck. It would be a superficial misrepresentation to call these choices intentional ‘gambles.’

Brute luck genetics & personality

The Big-Five model of personality, currently the best-supported and most accepted scientific model of personality.

Another aspect of brute luck is genetics. On a surface level, genetic factors seem to be separate from decision-making processes. But most of us will readily accept that our personality shapes our choices. And research confirms that personality affects our decisions in a wide variety of contexts.[5] For example, people with high openness to experience are far more likely to engage in high-risk behaviors.[6] If personality is largely or even partially a product of brute luck, and personality shapes our choices, that implies our decisions are partly determined by brute luck. Therefore, our gambles are not as deliberative as they seem and may deserve compensation.

It turns out that a significant proportion of personality is determined by brute luck in the form of genetic inheritance. A meta-analysis of behavioral genetic studies found that about 20-60% of the phenotypic variation in personality (also called temperament) is determined by genetics.[7] Pairs of twins reared apart share an average personality resemblance of .45, suggesting that almost half of their personality is rooted in genetics.[8] Another study found that genetics explain about 40-60% of the variance in Big 5 personality traits.[9] The empirical evidence concurs that our personality, which shapes our decision-making, is in large part determined by genetic factors. For example, someone who genetically inherits the personality trait of openness to experience is far more likely to seek gambling as a source of novelty.

Dworkin’s defenses

How would Dworkin respond to this objection? He notes that the distinction between brute luck and option luck is a spectrum rather than a complete dichotomy. He accepts that brute luck influences our decisions, making the distinction between option and brute luck far messier. Therefore, he might argue that we should just compensate losses to the extent that they are caused by brute luck. For example, if hypothetically 50% of a person’s personality is determined by genetics and their personality shapes 30% of their choices, then 15% of their choices will be genetically determined. If we add in another 10% due to sociological influences, Dworkin’s just compensation principle might dictate that we compensate only that 25% of the person’s losses due to behavior caused by brute luck. Quick justice maths. But it seems inordinately difficult or impossible to calculate the appropriate compensation by tracing decisions to their root causes. This suggests that Dworkin’s entire scheme of compensation is not practically implementable, as it requires calculating the incalculable to figure out if losses are caused by brute or option luck.
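
To spell out the toy arithmetic, here’s the ‘justice maths’ written explicitly. Every number is a hypothetical assumption from the paragraph above, not an empirical estimate.

```python
# A worked version of the hypothetical "justice maths" above. Every number
# is an illustrative assumption from the essay, not an empirical estimate.

genetic_share_of_personality = 0.50   # fraction of personality set by genes
personality_share_of_choices = 0.30   # fraction of choices shaped by personality
sociological_share_of_choices = 0.10  # fraction of choices shaped by environment

brute_luck_share = (genetic_share_of_personality * personality_share_of_choices
                    + sociological_share_of_choices)
print(brute_luck_share)  # 0.25: compensate 25% of losses from such choices

losses = 10_000  # hypothetical gambling losses, in dollars
print(losses * brute_luck_share)  # 2500.0 owed under this reading of Dworkin
```

Even this trivial calculation depends on parameters that no one knows how to measure, which is exactly the problem: the scheme requires calculating the incalculable.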

If just compensation relies on calculating some obscure combination of brute luck and option luck, this process is incalculable. There’s no way of knowing the parameters or how to use them to calculate a just result.

Furthermore, Dworkin might say that the examples of sociology and genetics do not count as brute luck, as there is still an element of personal choice in both cases. A person born into a gambling-promoting culture will be more likely to gamble, but they are not compelled to do so. Additionally, all people are subject to social influences on their behavior, and it is difficult to say that one environment is unequivocally worse than another. For example, a wealthy person not born on a reservation may not be influenced by as much pressure to gamble, but rather may be subject to more influences to take cocaine, embezzle funds, or engage in insider trading. Therefore, Dworkin could make a case that sociological and genetic influences on our behavior do not constitute true brute luck, because all people are subject to these influences, and they still allow a significant element of choice. Genuine brute luck does not allow for any choice: it is a situation completely out of our control, like a hurricane or a physical disability.

However, Dworkin’s counter-argument here contradicts his previous response. The claim that brute luck only exists in conditions that do not allow for any choice is mutually exclusive with the idea that there is a spectrum between brute luck and option luck. Dworkin cannot have his spectrum and his dichotomy too. Additionally, it is almost certainly the case that some situations involve more negative brute luck than others. While all situations involve brute luck that impacts our choices, this does not imply that we should completely ignore the differences between these situations. Some environments are simply worse than others.

Cravings as handicaps

Finally, Dworkin might respond by arguing that his theory has already addressed this problem of decision-making shaped by brute luck. He agrees that personality traits shape our decision-making. Some people, he mentions, might be cursed with a personality that includes insatiable cravings for sex. If someone has a severe craving that they view as an impediment to the success of their life-projects, it may be considered a handicap worthy of compensation:

They regret that they have these tastes, and believe they would be better off without them, but nevertheless find it painful to ignore them. These tastes are handicaps; though for other people they are rather an essential part of what gives value to their lives.

(Dworkin, 303).

Dworkin therefore makes an exception in this case and reevaluates the craving as a kind of handicap. Severe cravings can be added to the list of things that a person in the hypothetical insurance market could purchase insurance against. This seems to be Dworkin’s best response to the problem of the blurred lines between option luck and brute luck. After all, it allows him to classify negative behavioral traits as cravings that are worthy of compensation only if the person views the craving as harmful for their life-projects. However, with the rest of this paper I will argue that this response fails as well, because it fails to account for the case of mental illness.

The case of mental illness

The key problem with Dworkin’s treatment of cravings is his use of the glad-not-sad test to evaluate whether a craving is a genuine handicap or a personal failing: “if an individual is glad not sad to have a preference, that preference falls on the side of her choices and ambitions for which he bears responsibility rather than on the side of her unchosen circumstances.”[10] This rule does not account for the case of a mentally ill person who irrationally evaluates harmful cravings as beneficial for their life-projects.

For example, a person with severe schizophrenic paranoia may have an irrational craving to eliminate all communication devices from their home to escape the eyes of government spies. They may view this craving as beneficial for the life-project of protecting their family. Therefore, under Dworkin’s framework for compensation of cravings, this person would not receive compensation because they are irrationally glad that they have the irrational preference. Dworkin does not account for the possibility that the very process by which we decide whether a craving helps our life-projects will be subject to brute luck factors like mental illness. Mentally ill people who have negative cravings (e.g. drug addiction or paranoid behaviors) and judge those cravings as good would not receive compensation for the consequences of their cravings.

More and more, Dworkin’s view of option luck as ‘deliberative gambling’ seems fragile and indefensible.

Furthermore, it is problematic for Dworkin’s theory of justice that people who judge their own mental illness as good for their life projects will not be compensated. For example, someone like Van Gogh, who viewed his bipolar disorder as essential for his artistic life-projects, would never receive compensation for the harmful consequences of this disorder. After all, it is a disorder that he is generally “glad” rather than “sad” about. However, it seems deeply arbitrary that those who see their mental illness as positive should not be compensated simply because of their outlook.

This scheme of compensation even creates perverse incentives to treat one’s disorder as harmful for one’s life-projects, even when a different outlook could make it beneficial. Imagine that two people are subject to the same brute luck factor of mental illness, and one decides to view it as a positive factor that furthers their life projects while the other views it as an impediment. The one who reevaluates the disorder as beneficial for their life-projects is effectively punished for that decision by a scheme which withholds compensation when a person views a disorder as positive.

Dworkin might respond that mental illness is also something that could be insured against in the hypothetical insurance auction. In this auction, we would have knowledge about the likelihood of mental illness, as well as the differing levels and costs of coverage for mental illness. If one does not insure against mental illness, then they would not be compensated for the consequences of this mental illness.

Imagine an auction where you’re not buying items, but instead are buying insurance for potential brute luck factors like being born with a disability, a mental illness, into an oppressive or negative environment, and more.

However, given the rarity of mental illness it seems unlikely that anyone would purchase this insurance. And this hypothetical auction can hardly be seen as relevant to the practical implementation of just institutions. After all, how can we know what people would choose in the hypothetical auction? How can we simulate it? How can we measure and interpret the results in creating our institutions? Ultimately, the hypothetical insurance auction seems more like an idle thought experiment than a method that could salvage Dworkin’s theory of just compensation.

Conclusion

I have attempted to cast doubt on the distinction between option luck and brute luck, in order to show that variations in option luck (the results of our decisions) are largely explained by variations in brute luck (factors outside our control). If this claim is true, then Dworkin’s compensation principle cannot stand, because it relies on a distinction between brute and option luck. Furthermore, Dworkin’s view that bad option luck caused by bad behavioral traits should not be compensated rests on the rational choice model, which treats human behavior as mostly the product of logical deliberation on available information. This deliberative choice model allows Dworkin to draw a distinction between a resource paucity due to brute luck, and a resource paucity due to option luck.

But Dworkin’s view of human decision-making is incomplete at best and misguided at worst. This paper gives two strong counterexamples to the rational choice model: sociological factors and biological-genetic factors. These examples suggest that a large proportion of human decision-making is the direct or indirect result of brute luck. As such, it seems that even the bad consequences of our intentional choices might merit compensation. Dworkin gave two replies, both insufficient due to logical contradictions. Ultimately, he offers the caveat that if a person judges a craving to be harmful for their life-projects, it merits compensation. But this caveat fails as well when we apply it to mental illness. Therefore, Dworkin’s model needs serious reworking or replacement. Focusing on equality of resources, and distributing resources as compensation only for the consequences of brute luck and not for the consequences of option luck, fails to account for sociological, biological, and psychiatric influences on our behavior.

Works Cited

  1. Dworkin, Ronald. Sovereign Virtue. Cambridge, MA: Harvard University Press, 2000. p. 73.
  2. Dworkin, p. 74.
  3. Patterson-Silver Wolf (Adelv Unegv Waya), David A., et al. “Sociocultural Influences on Gambling and Alcohol Use Among Native Americans in the United States.” Journal of Gambling Studies 31.4 (2015): 1387–1404. doi:10.1007/s10899-014-9512-z.
  4. Bruch, Elizabeth, and Fred Feinberg. “Decision-Making Processes in Social Contexts.” Annual Review of Sociology 43 (2017): 207–227. doi:10.1146/annurev-soc-060116-053622.
  5. Vroom, Victor H. “Some Personality Determinants of the Effects of Participation.” The Journal of Abnormal and Social Psychology 59.3 (1959): 322–327.
  6. Lauriola, Marco, and Irwin P. Levin. “Personality Traits and Risky Decision-Making in a Controlled Experimental Task: An Exploratory Study.” Personality and Individual Differences 31.2 (2001): 215–226. doi:10.1016/S0191-8869(00)00130-6.
  7. Saudino, Kimberly J. “Behavioral Genetics and Child Temperament.” Journal of Developmental and Behavioral Pediatrics 26.3 (2005): 214–223.
  8. Bratko, Denis, Ana Butković, and Tena Vukasović Hlupić. “Heritability of Personality.” Psychological Topics 26.1 (2017): 1–24.
  9. Power, Robert A., and Michael Pluess. “Heritability Estimates of the Big Five Personality Traits Based on Common Genetic Variants.” Translational Psychiatry 5 (2015): e604. doi:10.1038/tp.2015.96.
  10. Olsaretti, Serena, and Richard J. Arneson. “Dworkin and Luck Egalitarianism: A Comparison.” The Oxford Handbook of Distributive Justice. Oxford: Oxford University Press, 2018. p. 19.
Categories
Essays Philosophy Politics

Against Toil

Work less. More specifically: toil less.

Toil is work that is without intrinsic joy, placed in opposition to leisure, and often aimed at improving performance on some metric defined by an external force. The key elements of toil are coercion and joylessness. We don’t choose it freely, and it isn’t fun. Maybe its rewards are ephemerally enjoyable, but the work itself is not rewarding, or at least not so rewarding that you would do it without any external incentive. Often, toil is work that has no purpose whatsoever: it is work for work’s sake. You, and the world, would be better off if you engaged in less toil. As Lewis Hyde wrote, “Your life is too short and too valuable to fritter away in work.”

Work for work’s sake

The eulogists of work. – Behind the glorification of “work” and the tireless talk of the “blessings of work” I find the same thought as behind the praise of impersonal activity for the public benefit: the fear of everything individual. At bottom, one now feels when confronted with work – and what is invariably meant is relentless industry from early till late – that such work is the best police, that it keeps everybody in harness and powerfully obstructs the development of reason, of covetousness, of the desire for independence. For it uses up a tremendous amount of nervous energy and takes it away from reflection, brooding, dreaming, worry, love, and hatred; it always sets a small goal before one’s eyes and permits easy and regular satisfactions. In that way a society in which the members continually work hard will have more security: and security is now adored as the supreme goddess.

Nietzsche, The Dawn (Kaufmann), #173.
assorted chains on abandoned room with graffiti

It’s possible that the majority of work in the modern era is unnecessary. One of many paradoxes of modernity is that we aren’t working less even as automation reduces the necessity for work. As the economic rationale for work goes away, work is vested with more and more psychological weight. As Paul Lafargue wrote in 1883, “the laborer, instead of prolonging his former rest times, redoubles his ardor, as if he wished to rival the machine” (The Right to be Lazy, ch. 3). Psychological research finds that most individuals see unproductive, unnecessary work as moral, and individuals who practice superfluous toil tend to be evaluated as ‘better people’ (The Moralization of Unproductive Effort). Under two centuries of industrial capitalism, our culture has reached a new peak: it treats pointless effort itself as a virtue.

In Bullshit Jobs, David Graeber argues that huge swathes of the working population are toiling away in pointless occupations. 37% of UK workers believe their job does not make a “meaningful contribution to the world.” To be fair, Graeber’s statistical evidence for this thesis is weak—there’s significant evidence that far more workers than he estimates believe their work is meaningful—and his theory is somewhat vague and non-rigorous (Bullshit about Jobs). To validate the theory, we’d have to somehow quantify how many jobs make a ‘meaningful contribution’ rather than just relying on opinion polls. That’s an incredibly difficult task.

But Graeber assembles a variety of evidence beyond these polls. For example, office workers in 2016 spent only 39% of their time on their actual jobs, devoting most of the rest to emails, wasteful meetings, and administrative tasks. He also chronicles five types of bullshit jobs: flunkies (who make their boss feel more important through managerial feudalism), goons (who mainly oppose goons in other companies), duct-tapers (who repeatedly create band-aid solutions instead of permanently fixing problems), box-tickers (who fill out paperwork as a proxy for action), and taskmasters (who manage people who don’t need managing). Each of these types is supported by a series of anecdotes. Are you involved in one of these bullshit jobs? Or even worse, are you unconsciously or consciously training to fulfill one of these meaningless roles?

In praise of play

Fundamental hominid psychology evolved in the hunter-gatherer populations of Africa over the last 2.8 million years. Our species emerged around 300,000 years ago, but ‘behaviorally modern’ humans, who used specialized tools, rituals, exploration, trade, art, and more, did not exist until the Upper Paleolithic around 60,000 years ago. Agriculture began about 10,000 years ago. Why this review of dates and human evolution?

Well, our psychology has been influenced by agriculture for only 3% of human existence. Humans have been living in industrial society for only .08% of our time on this planet. And we’ve been living in the current ‘postmodern condition,’ with the Internet and other innovations of the last 30 years, for only about .01% of human existence. We are not evolved for our current conditions. Our psychology was built for a radically different world. We should take advice from our ancestors’ conditions to understand what types of life are more ‘natural’ and perhaps better for human psychology.
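Those percentages are simple arithmetic, and worth sanity-checking. Here is a minimal sketch in Python, assuming the ~300,000-year species age used above; the 250-year industrial span is my own round figure, since the essay gives only the resulting percentage:

# Back-of-the-envelope check of the percentages above.
# SPECIES_AGE uses the ~300,000-year figure from this essay;
# the 250-year industrial span is an assumed round number.
SPECIES_AGE = 300_000

eras = {
    "agriculture": 10_000,      # farming began ~10,000 years ago
    "industrial society": 250,  # assumption: roughly since the Industrial Revolution
    "internet era": 30,         # the last ~30 years of the 'postmodern condition'
}

for name, years in eras.items():
    print(f"{name}: {years / SPECIES_AGE:.2%} of human existence")

# Output:
# agriculture: 3.33% of human existence
# industrial society: 0.08% of human existence
# internet era: 0.01% of human existence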

Hunter-gatherer societies are deeply playful. “Their own work is simply an extension of children’s play…as their play become increasingly skilled, the activities become productive” (Play Makes Us Human). Hunter-gatherers typically work only around 20-40 hours a week. Unlike our closest primate relatives – the bonobos, chimpanzees, and gorillas, which have strict social hierarchies in which high-ranking individuals dominate lower tiers – most human hunter-gatherer societies are, anthropologists have found, “fiercely egalitarian” (Lee, 1998). Hunting trips are seen as a form of skilled play, and if any individual decides not to participate, they are free to do so without conflict.


In anthropology, play is distinguished by these qualities: it is self-chosen, self-directed, intrinsically motivated, guided by mental rules, imaginative, and involves an alert but unstressed state of mind. Play is necessarily egalitarian, in that if one individual threatened to dominate entirely, the others would stop playing and flee the game. The Human Relations Area Files, a primary data source in anthropology, show that hunter-gatherer cultures uniquely lack competitive play (Marshall 1976). Hunter-gatherer adults told researchers that their children spend almost all of their time playing – children spend only 2 hours a day foraging, and even when foraging, they continue to play (Draper 1988). However, the more a society transitions to agriculture, the less time children have to play.

And in the post-agricultural era, children play even less. In The Decline of Play and the Rise of Psychopathology in Children and Adolescents, the researchers find that for six- to eight-year-olds between 1981 and 1997, there was a 25% decrease in time spent playing, a 55% decrease in time spent talking with others at home, and a 19% decrease in time spent watching TV. Meanwhile, there was an 18% increase in time spent in school, a 145% increase in time spent doing schoolwork at home, and a 168% increase in time spent shopping with parents. And as play disappeared, depression and anxiety increased by a full standard deviation: 85% of young people in the 1990s had anxiety and depression scores greater than the average scores for the same age group in the 1950s. Between 1950 and 2005, the suicide rate for children under 15 quadrupled. The average young person in 2002 was more prone to an external locus of control (more prone to claim a lack of personal control) than 80% of young people in the 1960s. And in 2007, 70% of college students scored higher in narcissism than the average college student in 1982.

England: Child Labor, 1871. Photograph by Granger.
Most developed countries have moved past forcing children to work in factories, but we have still eliminated play.

Eliminating play in our children’s lives teaches them that life is a chore to be endured. Play teaches us how to make choices, solve problems, cooperate with others as equals, follow the rules of the given game, and create new ways of playing. Most importantly, play teaches us how to experience joy. A study of happiness in public school students found that the children were by far the most miserable in school and the happiest when playing with friends (Gray 2013). While the conventional story is that schoolwork is a necessary evil that can’t be fun, what if the misery of this work comes from unnecessary forms of education that prevent joyful learning?

Play is not the same as entertainment. Most of our leisure time is filled with entertainment, used merely to numb us while we are not working, to encourage us to buy even more commodities, or to immerse us in the spectacle of lives that are not our own. Play, on the other hand, is an active, creative pursuit, in which the imagined world or narrative is built and constantly changed by the participants — rather than disseminated by some unknown figure like a media corporation.

“But for all these people art exists only so that they will become even more dispirited, even more numb and mindless, or even more hasty and desirous.”

Nietzsche, Unfashionable Observations, pg. 287.

Ultimately, if we all devote ourselves to toil, then we are doomed to live in a Disneyland with no children: a world with immense economic prosperity and no one who remembers how to play.

The Protestant & Mormon work ethics

Weber’s 1904 sociological masterpiece, The Protestant Ethic and the Spirit of Capitalism, has shaped almost all conversations about the work ethic since. In Catholicism, work could not earn one salvation, and only grace and repentance could redeem individuals. For Luther, work was saintly, and “an individual was religiously compelled to follow a secular vocation with as much zeal as possible. A person living according to this world view was more likely to accumulate money” (Weber, 42). For Calvinists, who thought the saved were predestined, work was used to achieve financial success and thus earn the ‘mark of God.’ Calvinists could relieve anxiety about being spiritually unworthy by achieving the (material) blessings of God. In a footnote, Weber explicitly mentions the new Mormon religion, citing a Mormon scripture that states: “But a lazy or indolent man cannot be a Christian and be saved. He is destined to be struck down and cast from the hive” (Weber, 235). I’m not sure where Weber read this — if anyone finds the origin, let me know!

Mormonism, the homegrown American religion, takes this Protestant work ethic and advances it even further. Utah’s state motto is “Industry,” and the symbol of the State of Deseret (the Mormon name for the provisional state that preceded Utah) was the beehive. At a Mormon conference called Especially for Youth when I was 14, I asked an advisor why he believed in the church, and he said a keystone of his belief was the material success of Mormon society. I of course don’t remember exactly what he said, but it was close to “Look at how successful Utah is and how much Mormonism has helped us prosper – you can judge the religion by its fruits.”

Are we more than just bees in a manufactured hive, working pointlessly to produce honey we will never taste?

Theological elements of Mormonism encourage this work ethic. Joseph Smith rejected the concept of immaterial substances: “There is no such thing as immaterial matter. All spirit is matter, but it is more fine or pure and can only be discerned by purer eyes” (D&C 131:7–8). God himself is a material being in Mormon ontology. Further, in Mormon theology, it is possible to become like God, create a world, and populate it. This heavenly culmination of a spirit’s life is achieved through a combination of faith and works. The idea of eternal progression is closely tied to the work ethic.

“Wherefore, because thou hast been faithful thy seed… shall dwell in prosperity long upon the face of this land; and nothing, save it shall be iniquity among them, shall harm or disturb their prosperity upon the face of the land forever.”

2 Nephi 1:31, The Book of Mormon

In an intensive analysis of the Book of Mormon, a group of scholars traced the development of the work ethic in the primary Mormon religious text (Material Values in the Book of Mormon). Anyone who has read the Book of Mormon is familiar with the story of the pride cycle: a group is righteous, which leads to material rewards, and these rewards corrupt them, leading to a collapse or loss of their success, which then leads to humility and righteousness once again. Though their wealth is destroyed again and again, when the people repent of their sins and turn back to the Lord, they start to prosper once more (Helaman 4:15-16; Ether 7:26). The Lamanites, the recurring antagonists of the Book of Mormon, are portrayed as a group who survive by robbing and plundering rather than laboring for goods with “their own hands” (Alma 17:14). In contrast, the Mormons are exhorted to work and avoid laziness: “Thou shalt not idle away thy time…neither shalt thou bury thy talent that it may not be known” (D&C 60:13). In Mormon theology, labor is tied to righteousness, which is in turn connected to prosperity.

“And now, because of the steadiness of the church they began to be exceedingly rich, having abundance of all things whatsoever they stood in need…And thus, in their prosperous circumstances, they did not send away any who were naked, or that were hungry, or that were athirst, or that were sick, or that had not been nourished; and they did not set their hearts upon riches; therefore they were liberal to all, both old and young, both bond and free, both male and female, whether out of the church or in the church, having no respect to persons as to those who stood in need. And thus they did prosper and become far more wealthy than those who did not belong to their church.”

Alma 1:27-31, The Book of Mormon

I don’t mean to frame Mormonism as a hyper-capitalist religion; there are many verses that condemn material wealth (e.g. the description of the ‘great and abominable’ Church in 1 Nephi 13:6-8). The Church also encourages charity and generosity. And the Book of Mormon also describes the problems with class structures and economic inequality: “And the people began to be distinguished by ranks, according to their riches and their chances for learning…And thus there became a great inequality in all the land” (3 Nephi 6:12-15). The same verse describes how the devil exercised his power in “puffing them up with pride, tempting them to seek for power, and authority, and riches, and the vain things of the world.”

There are also many parts of Mormon scripture that could be characterized as socialist-leaning. After the coming of Christ in the Americas, the Book of Mormon describes how “they had all things common among them; therefore there were not rich and poor” (4 Nephi 1:3). Over the next two centuries, this righteous, communal society fell back into class structures and status-signalling: they became “lifted up in pride, such as the wearing of costly apparel, and all manner of fine pearls…And from that time forth they did have their goods and their substance no more common among them” (4 Nephi 1:24–25). Early LDS societies also attempted to practice a form of theocratic communalism called the United Order, in which private property was eliminated and the land and goods of Church members were owned by the Church.

However, these more anti-capitalist and communal strands of Mormonism are hardly visible in modern Mormon society. Around 70% of Mormons are Republican, and around 75% of Mormon Republicans believe government aid to the poor does more harm than good (Pew Research). One sociological study found that Mormons perceive wealthier members as more spiritual or blessed, and are more likely to attribute flattering spiritual qualities to materially successful members — and poorer Mormons were even more likely to buy into this myth of wealth & righteousness (Rector 1999). Contemporary Mormonism represents a zenith of the toil addiction.

Popular culture’s recognition of the Mormon work ethic is part of the reason American press coverage of Mormonism has become more positive over the last half-century (Stathis 1981). For example, in 2008 The Economist published an article about the economic success of the state of Utah called The Mormon work ethic, and in 2012 The New Republic published a much longer article on The Mormon Ethic and the Spirit of Capitalism. The powerful synergy of Mormonism and American capitalism has created a uniquely compelling work ethic.

American fantasies

Social innovations like the Protestant work ethic generate a race to the bottom. One group adopts a hyper-industrious culture that mandates its population give up personal happiness & the pursuit of individual freedom for the sake of the herd’s productivity. Neighboring groups see that they will be unable to compete with the hyper-industrious culture unless they adopt a similar work-ethic. Soon, the addiction to work is exported globally. Now we live in a global culture in which difference is being rapidly erased as the gospel of toil spreads into every corner of the planet. Globalization creates a race to the bottom: corporations will primarily harness the labor of the countries with the most relentless, life-erasing, humanity-eating work ethics.

The economy is then detached from its purpose – to serve humanity – and we begin to serve the economy. Our limbs are harnessed to the dance of numbers. All incentives orchestrate together to favor work over individuality, joy, or the pursuit of projects that do not increase established metrics. GDP, comparative advantage, the constant pressure to rise into higher social tiers, to have certain products, to leap through the correct hoops at the right times – these control our behavior in innumerable seen and unseen ways. We have forgotten entirely what it would be like to be free of this obsession with toil, and we have forgotten the normative aim of our constructed human systems: to promote human flourishing.

Toil is especially an American addiction. Maybe the most toxic element of our culture is also the most-praised: the ‘work ethic.’ This is the constant, all-consuming, corporation-connected drive to labor that erases our individuality. America is the 9th-most overworked nation in the world, and Americans are working more and sleeping less today than in the 1970s (Covert 2018). Unlike most countries, the US does not have a limit on the maximum length of the work week. And according to the ILO, “Americans work 137 more hours per year than Japanese workers, 260 more hours per year than British workers, and 499 more hours per year than French workers.” Even as toil becomes less necessary, it is devouring more of our time. And as American workers strive to be more productive, they are rewarded less.

Chart from Difficult Run: “Is the EPI Correct About Wages and Productivity?”

The American dream: if you are capable enough, you can rise to the top; you can become anything if you have the talent. The if puts us in a psychological trap: to reject the American dream seems to be an admission of incapability. The desire to believe in the American dream is ingeniously tied to the desire to believe in oneself. If I negate the dream, am I just admitting that I am not enough to become? That I don’t have the capability to succeed, and thus don’t want to believe it’s possible? I, the American individual, want to be the psychological type that is able to fulfill the American dream – the ‘great’ person who comes up from the bottom, the Benjamin Franklin, the Jay Z. To lose faith in the American dream seems to be an admission that I am not a person of this type.

This fear, this personal insecurity, encourages a blind faith in the dream and a denial of the material conditions that define both what we are striving for and how it is achieved. Any rejection of the promise of the American dream seems to be a mere product of ressentiment. Thus the ideology undermines the ground on which its opponents stand.

College and indoctrination into toil culture

The addiction to the work-ethic, and the compelling stories of success that are connected to it, especially affects ambition-filled young people. This is why 36% of Harvard graduates go into finance and consulting – and going to Harvard makes students significantly more likely to work in these fields. At Dartmouth, where I go, even more are seduced – almost 50% of students end up working in finance and consulting, and many of the remainder go to work for high-status tech companies. These firms woo aspiring students with impossible-to-refuse offers. Their recruiting methods weaponize students’ vague fear of not being “successful” and their lack of any specific vision. They play into students’ desires to continue ‘upward momentum’ by fulfilling a conventional success story and succeeding in yet another selective admissions process. It’s tempting to talk oneself into these toil-filled careers.

The flood of students into consulting is less of a brain drain than a spirit drain – it sucks our most energetic, dream-filled, neuroplastic, capable youth into careers where they will do nothing but optimize the existing system. They slowly lose their independent mind, grind away their capacity for creativity, and are rewarded amply for it.

Elite students climb confidently until they reach a level of competition sufficiently intense to beat their dreams out of them. Higher education is the place where people who had big plans in high school get stuck in fierce rivalries with equally smart peers over conventional careers like management consulting and investment banking. For the privilege of being turned into conformists, students (or their families) pay hundreds of thousands of dollars in skyrocketing tuition that continues to outpace inflation. Why are we doing this to ourselves?

Peter Thiel, Zero to One, pg. 36

Thiel’s alternative – the Thiel Fellowship, which pays young people to skip college and work on an entrepreneurial project – is only somewhat better. It also encourages toil: just more independent toil, tied to metrics like money that are more directly relevant to life under capitalism than academic grades. It too is coercive and relatively joyless, as the student must relentlessly work on the project to become financially ‘successful’ when the fellowship ends. As a whole, these students are also far more incentivized by the status of the Fellowship, and by the vision of being an entrepreneur, than by any intrinsic enjoyment of the labor. The Fellowship has also changed over the last few years: it now almost exclusively funds projects with a high chance of profitability. The Thiel Fellowship is another Toil Fellowship.

But I agree with Thiel’s conviction that colleges indoctrinate youth into a pointless work ethic, encourage conventionality, and serve to erase dreams. Elite institutions serve the additional purpose of status-signalling for the upper class — for many, an Ivy League education is merely a form of conspicuous consumption. I can’t speak for all college cultures, but Dartmouth culture is dominated by the toil ethic. Students constantly chatter about how busy they are, list their work obligations, study in visible public spaces, and signal their high effort. What’s the point? Work for work’s sake?

Fluorescent lights & mass, batch-processing, toil-based education.

Human beings must be broken in to serve the purposes of the age, so that they can be put to work at the earliest possible moment; they are supposed to go to work in the factory of general utility before they are mature — indeed, so that they do not become mature — because allowing them to mature would be a luxury that would divert a great deal of energy away from “the labor market.” Some birds are blinded so that they will sing more beautifully.

Nietzsche, Unfashionable Observations (Stanford), pg. 134.

Who is Moloch?

Our economies have become unhinged from the original force that set them into motion: joy. The human desire for joy, for happiness, for conscious positive experience, is what motivated the creation of barter and currency. And yet now we have forgotten that our economies are our instruments, not our masters. These instruments have evolved into vast interconnected systems of steel and concrete, punishment and incentive, which exist not to further human flourishing but to maintain themselves.

“I saw the best minds of my generation destroyed…What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination? … Moloch whose love is endless oil and stone! … They broke their backs lifting Moloch to Heaven!”

Allen Ginsberg, Howl

In his howl against the unseen forces that maintain a soul-destructive system, Ginsberg names these forces Moloch after the Canaanite god of child sacrifice. Everyone hates this system, and yet the system remains. What could possibly keep it going? It seems almost like there is some malevolent being, assigning everyone life-draining toil, generating task after task, reinforcing processes that no human being with a semblance of spirit would choose to create. The reality is more complicated. In Meditations on Moloch, Scott Alexander lists a series of examples of toxic, self-maintaining systems, where certain features prevent participants from cooperating to fix the system. Moloch is a name for the features that define this type of system. A human tendency is to invent gods as explanatory forces when we do not understand a system. Moloch represents the complex set of forces that create and maintain the system we inhabit.

Moloch by Dominic McGill

Imagining out of toil

A simple way to avoid toil is through the imagination. Imagine that money were no object. Then, imagine there were no social incentives like the desire to signal high-status careers. Finally, imagine that work for work’s sake were unnecessary. This imaginative reduction enables us to get at the core of our authentic desires. Like Rawls’ veil of ignorance, it encourages us to imagine a world in which particular social motivations and contingencies did not govern our behavior. Like Husserl’s phenomenological reduction, this method tries to escape the abstractions and concepts that usually determine how we experience the world. If we were immune to the toil ethic, ignorant of which careers were tied to status and money, what would we choose?

“Let the young soul look back on its life with the question: What have you truly loved up to now, what attracted your soul, what dominated it while simultaneously making it happy? Place this series of revered objects before you, and perhaps their nature and their sequence will reveal to you a law, the fundamental law of your authentic self.”

Nietzsche, Unfashionable Observations (Stanford: 1995), pg. 174.

Most people will never do this. Even fewer will take this imagining seriously, and follow the guidance of their more authentic nature. Imaginations are too limited, “the world as it is” is too blinding, toxic incentive structures are too motivating. Many use toil precisely as a means of escape, a coping mechanism and a way to avoid themselves. Our lives themselves are becoming products we manufacture. As Debord wrote in The Society of the Spectacle, “The more his life is now his product, the more he is separated from his life.”

All of you to whom furious work is dear, and whatever is fast, new, and strange – you find it hard to bear yourselves; your industry is escape and the will to forget yourselves. If you believed more in life you would fling yourselves less to the moment. But you do not have contents enough in yourselves for waiting – and not even for idleness.

Nietzsche, Thus Spake Zarathustra (Kaufmann), pg. 158.

I hope we at least realize that the most common defense of conventionality – “this is the way the world is, and we have to work within it” – is what makes the world the way it is. It’s a self-fulfilling prophecy. Working within existing structures validates those structures. Toiling for toil’s sake encourages others to do the same, and maintains the system that instills the toil ethic in our minds. Change will only happen when we stop believing this prophecy. If you’re on a hamster wheel, the answer isn’t to run faster. It’s to get off.

And, of course, those who don’t believe they can make a change never will.

Categories
Book Reviews Essays Politics

To End All Wars?

About four years ago, I read All Quiet on the Western Front by Erich Remarque on a Sunday in November, a lot like this one. It was painful. Paul (the “protagonist,” if there is one) is a brutal narrator. Reading most of the book in a day made his story more real, rushed, and urgent. I remember reading certain parts and shutting the book out of horror. Crying wasn’t rare.

During most of high school, I would say All Quiet was my favorite book. I’m not sure why. Not because I ‘enjoyed’ it. Only a sadist could. Maybe because it immersed me, and Paul’s voice had been inscribed on my mind. His story was more concrete and rattling than any history I’d learned before. While it is a novel and Paul did not exist in a literal sense, millions of people experienced his story. As shameful as it is to say, these millions had just never been real to me. As Camus, who lived through WWII in France, wrote:

“But what are a hundred million deaths? … Since a dead man has no substance unless one has actually seen him dead, a hundred million corpses broadcast through history are no more than a puff of smoke in the imagination.” — Albert Camus, The Plague, pg. 4.

Reading the book made these deaths more than just a puff of smoke; or at least, it made a few of these deaths real. Remarque turned them into ink on paper, which became thoughts and memories ingrained in neurons in my brain. Once-empty phrases gained powerful meaning: “Bombardment, barrage, curtain-fire, mines, gas, tanks, machine-guns, hand-grenades – words, words, but they hold the horror of the world” (All Quiet on the Western Front, 46). If only the generals and political leaders of WWI had been able to read this book during the war. Then again, most of them experienced the nightmarish inspiration for All Quiet firsthand, and most were still able to dissociate it from their actions and continue the war.

I feel intense anger at the generals who tossed away countless lives mindlessly. They had an attitude similar to Napoleon’s:

“You cannot stop me. I can spend 30,000 men a month.” — Napoleon Bonaparte, Letter to Klemens von Metternich

Human life is the currency of war. The WWI generals were spending it. They poured hundreds of thousands of human bodies into Verdun, the Somme, Ypres, the Marne, like they were depositing piles of cash into the morbid bank of war. The supreme commander of the Allied forces in 1918, known for being reckless with human life during the Flanders, First Marne, and Artois campaigns, said something reminiscent of Napoleon’s quote:

“It takes 15,000 casualties to train a major general.”  — Ferdinand Foch (source: Nine Divisions in Champagne by Patrick Takle)

Doesn’t it sound like he’s quoting a price: we could train this general, but it will cost us 15,000 lives? Is that all the Great War was to these generals? A storm of prices, budget allocations, necessary costs, spending decisions? But behind each number was not a dollar but an individual, usually a man around my age, torn away from life and drafted into the process of destroying it en masse.

“I am young, I am twenty years old; yet I know nothing of life but despair, death, fear, and fatuous superficiality cast over an abyss of sorrow. I see how peoples are set against one another, and in silence, unknowingly, foolishly, obediently, innocently slay one another.” — Erich Maria Remarque, All Quiet on the Western Front, Ch. 10

I find it hard to imagine that these people were the same as we are. Were people just different back then? Their generation went through horrors we cannot imagine, and then went through them again in the Second World War. Could my generation survive the trenches? Could we slog through the mud of Passchendaele, our minds broken by the beating of artillery and the sight of death, and continue to fight? I think the answer is yes; but I hope we never get the opportunity to prove it.

No one would like to think they are capable of atrocity or extraordinary violence. But this belief disregards history. Many of the people who reported the Armenian genocide during WWI, who decried the Ottomans for their brutality and inhumanity, were German military officers operating in Turkey. They thought they were fundamentally different from the monsters they condemned. Twenty-one years later, some of the same people would be involved in committing the Holocaust. We all have a capacity for barbarity. Only by recognizing its existence and working against it can we prevent repeating history.

“If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?” — Aleksandr Solzhenitsyn, The Gulag Archipelago

We failed to make good on our ancestors’ promise that WWI would be the war to end all wars. It has been one hundred years, and this planet has been scarred by more atrocity, violence, and mass destruction than perhaps even veterans of the Great War could imagine.

Now, we live in the most peaceful time in history by most metrics. There hasn’t been a direct confrontation between great powers since 1945. But our peace is almost as fragile as the “concert of Europe” before World War I. Almost all of the world’s major powers could obliterate life on Earth with a nuclear war and subsequent nuclear winter. Global military spending (the combined defense spending of every country) is at an all-time high (source). Seemingly minor movements, like China’s expansion into the South China Sea and Russia’s invasion of Crimea, reveal the tension underlying the global geopolitical order.

[Image: a graph of conflict over time]

And nationalism is making a resurgence globally. About a week ago, Jair Bolsonaro came to power in Brazil. This is a self-proclaimed nationalist who has said things like “I’m in favor of the military regime,” “it’s all right if some innocent people die. Innocent people die in many wars,” and “The only mistake of the dictatorship was torturing and not killing” (source). Our president has said “I’m a nationalist. OK? I’m a nationalist. Nationalist. Use that word” (source). Far-right parties are gaining momentum in Europe. These trends should worry anyone who has read about the first half of the 20th century.

[Image: a graph of the global resurgence of nationalism]

I remember hearing, when I was 13, that the last WWI veteran had died. I didn’t understand it much, but I had listened to my grandpa’s stories about Vietnam. I was wistful and even heartbroken that I would never have the chance to hear about WWI from someone who was actually there. Assuming I survive for a while longer, I will probably also live through the death of the last person who fought in WWII, and the last person who experienced the Holocaust. I have a friend who is an international student from Rwanda. His parents lived through the genocide. He told me that they constantly remind him to tell his children their stories, for when the generation who remembers an atrocity disappears, the atrocity once again becomes possible. Hopefully I can be one of the minds that remembers these horrors and helps prevent them.

There are twenty-seven years until the centennial of the end of World War II. These years should be treated as a test for humanity, for everyone alive today, and for our global political system. Have we overcome global war and permanently ended it? Have we finally decided to prioritize peace, human well-being, and the survival of the human species over geopolitical power games, tribalism, and the relentless struggle for limited resources? Or over these decades, will we simply repeat what happened in the last century?

Note: Over the past month, I’ve listened to Dan Carlin’s Blueprint for Armageddon podcast about WWI. It’s amazing. It has a perfect balance between historical fact, primary sources, background info, and his personal analysis. And it is free! People (including me) pay thousands for college lectures that are far worse than this podcast. Yes, all parts combined it’s about 15 hours long. But it is important and worth it, and strung out over a few weeks of listening while driving, running, walking, etc, that isn’t that much time. 

Categories
Essays Politics Uncategorized

LDS Doctrine is Silent on Homosexuality

Who am I to write about LDS doctrine? I’m not a leader in the church. I’m not even a member of the church. But I’m interested in understanding the doctrine, and I’ve spent a large part of my life attempting to understand it. And I have a question: why is it an overwhelmingly common belief that LDS doctrine forbids homosexuality?

To be clear, I’m not a conspiracy theorist who denies that leaders in the LDS Church have declared that having “homosexual relations” is a sin. For example, Gordon B. Hinckley said exactly that in his statement Reverence and Morality:

Prophets of God have repeatedly taught through the ages that the practices of homosexual relations, fornication, and adultery are grievous sins.

But, as it is said very often in the church, there is a crucial distinction between doctrine and policy, and between doctrine and the words of well-intentioned and righteous men/women. Doctrine is fixed and unchanging. It is defined in canonized works of LDS doctrine, especially the Book of Mormon and the Pearl of Great Price. It’s my understanding that unless a principle is made permanent and unambiguous in these works of doctrine, it is subject to change. I’ll cite the old and tired examples, and some fresh ones: polygamy, black people receiving the priesthood, the length of church on Sundays, the age missionaries leave. All of these things were declared as revelation by Church leaders, but are not immortalized in doctrine. They are not absolute; they may be wrong, and they certainly may change.

Imagine that you were a member of the Church before 1978. Would you have objected to the racist policy of excluding black men from the priesthood? Almost every modern Mormon would say yes to this question. Maybe they would have issued an impassioned criticism of the practice. Maybe they would have protested against it. Maybe they would have practiced civil disobedience, ordaining black men despite the words of their church leaders.

But in reality, only a minuscule, extremely select group of people in the church did anything like this. An overwhelming majority followed the policy for the century of its practice. While we understandably want to believe that we would be part of the minority, that is just statistically unlikely. Most people accepted and followed the incorrect revelation. You would have to be a very rare person to disobey it, using a different thought or revelatory process than everyone else in the church. This leads me to ask a critical question: If you, as a member of the church today, want to minimize your chance of practicing incorrect church policy, what would be the best approach?

To me, the answer seems clear: rest your beliefs and actions on personal revelation and on a deep and thorough understanding of the doctrine. If you followed this process, you are far less likely to follow incorrect policy. You would have been a conscientious objector to the racist practices of pre-1978 Mormonism. Nothing in the doctrine says anything about denying black people the priesthood. And it seems unlikely to me that a benevolent God would reveal to you that this practice is okay or good.

My argument is simple: there is nothing in LDS Doctrine that condemns homosexuality or declares that homosexual relations are a sin. Therefore, members should determine for themselves, through a personal revelation process, whether they should follow the policy of the church.

The Book of Mormon does not mention homosexuality anywhere. Neither does the Pearl of Great Price. This is a negative claim, and so can be disproved by a single instance — if you find an example of homosexuality being mentioned in these works, feel free to let me know and I’ll change my belief. But the Topical Guide, the official index of topics in the scriptures, has a section on Homosexual Behavior, and it exclusively cites verses in the Bible. Many of these verses are vague and only very tenuously connected to homosexuality.

After all, the Bible is not clear on homosexuality. All of the most commonly-cited proofs that the Bible condemns it are not actually about homosexual relations. For example, there’s the case of Sodom and Gomorrah in Genesis 19, where the men of Sodom seek to rape two male visitors (who are in reality angels sent by God to see if the city contains any righteousness). God subsequently obliterates the city of Sodom. So God must hate the gays, right? Uh, no. It seems clear to me that the problem here is that it’s rape. Why would you draw the conclusion that God condemns homosexuality? The more sensible and humane conclusion, based on the text, would be that God condemns sexual violence and rape.

Another case in the Bible is Leviticus 18:22. It says that a man lying with another man instead of his wife is an ‘abomination.’ But this is a man committing adultery with another male. We already know that adultery is a sin and abomination according to the Bible. Why would we assume that this case is about homosexuality either? It seems more clear that it’s another condemnation of adultery. Also, I’d be cautious about attaching too much meaning to the word ‘abomination.’ The Bible uses it very loosely. Things that the Bible says are abominations: Egyptians eating with Hebrews, sacrificing your child to Molech, eating pork, wearing mixed-fabric clothing, interbreeding animals of different species, and trimming your beard. You’d have to believe and do some weird things if you treated everything the Bible declares an abomination as wrong.

Not to mention that if you’re a Mormon, the Old Testament is almost definitely not the highest-ranking thing on your “list of books that matter.” As far as I’ve seen, members think of The Book of Mormon, Pearl of Great Price, New Testament, and the words of modern prophets, in roughly that order, as more accurate (closer to the word of God) than the Old Testament with all its quirks.

The most important text on this topic is probably The Family: A Proclamation to the World. While I’m not sure if it is Doctrine, the Proclamation is a key document, signed by all the members of the First Presidency, cited constantly as doctrinal support for church policies on homosexuality and gender. And yet even this document is not clear about homosexual relations being a sin. Here are the relevant lines:

solemnly proclaim that marriage between a man and a woman is ordained of God

This is not an exclusive statement. It merely says that marriage between men & women is ordained; not that marriage between men & men or women & women is not ordained. If you interpret this statement as exclusive of all other types of marriage, polygamy is also wrong — after all, it’s marriage between a man and multiple women, not a marriage between “a man and woman,” as it seems the Proclamation requires. Does that mean it’s not ordained of God? But it clearly was ordained of God in the past. Therefore, the question is still open.

We further declare that God has commanded that the sacred powers of procreation are to be employed only between man and woman, lawfully wedded as husband and wife.

You might think this one is abundantly clear. After all, it contains the key word: “only.” But it’s actually a tautology: the “sacred powers of procreation” CAN only be employed between men & women. Gay sex is not reproductive or procreative. This is a fact of biology; human reproduction requires a sperm and an egg (each produced by meiosis), which can be naturally produced only in male and female reproductive organs respectively. So this statement doesn’t prohibit homosexual relations either, or declare them a sin.

Maybe you could conceive of this statement as prohibiting two Mormons in a gay marriage from having kids through surrogacy or in-vitro fertilization, as then they would be using the “sacred powers of procreation” with someone outside of their marriage (the surrogate or sperm donor). But then it would also prohibit an infertile Mormon man or woman in a straight marriage from using IVF or surrogacy.

Also, slight loophole: gay people can adopt. They don’t have to use the “sacred powers of procreation” to have kids.

Marriage between man and woman is essential to His eternal plan.

Same case as above; this is not exclusive. Marriage between a man and a woman is essential; that doesn’t mean other types of marriage aren’t also allowed or essential.

Children are entitled to birth within the bonds of matrimony, and to be reared by a father and a mother who honor marital vows with complete fidelity.

Gay people can do all of this. They can be fathers & mothers; they can honor marital vows with complete fidelity; they can have children within the bonds of matrimony (through adoption, IVF, surrogacy, etc).

My argument is not that LDS Church leaders are definitely wrong about their own doctrine. My argument is that it is a possibility. It has precedent. Members should not accept the statements of leaders by default. And I think it’s clear that nothing in the established, canonized LDS doctrine prohibits homosexuality. It is silent, or where it speaks, it is vague and open to multiple interpretations. This cannot be an accident — after all, for members of the church, the doctrine is revelation from God through His prophets. Is it likely that God would just forget about homosexuality, and fail to make a clear and unambiguous stand on this critical issue? Or is it more likely that it is not mentioned for a reason? What might that reason be?

I cannot answer these questions, only ask them. I cannot decide for members what their religion believes. I can only argue for caution and carefulness in following church policy without a thorough reading of the doctrine and an analysis of its interpretations. I hope that all church members undertake this reading. I would also hope that all church members use personal revelation in their decision-making process, including attempting to understand LGBT people through direct conversation, reading, and research.

If it turns out that after going through this entire process, LDS people find a doctrinal or personal-revelatory basis for treating homosexual relations as a sin, so be it. I’ll be surprised but interested. Please let me know what this basis is.

Categories
Politics

The Fetishization of Individuals: From Hitler to Ken Bone

Humans have a relentless tendency to treat individuals as microcosms for the world. If we can identify a certain individual who fits into a group, we generalize this individual and make him/her representative of the group or concept as a whole. When we speak about these concepts or groups, we are implicitly thinking of these fetishized individuals. Thus, the ‘philosopher’ becomes Plato; the ‘drug lord’ becomes Pablo Escobar; the ‘autocrat’ becomes Hitler. These people that stand as concrete symbols for entire ideas are what I call ‘fetishized individuals.’

There is a constant political battle for control over these fetishized individuals. If someone humanizes and normalizes Pablo Escobar, they successfully humanize and normalize the drug trade as a whole. They take control of the image of the drug trade – the vivid, personalized, and individual representation. Then, when someone thinks of the drug trade, they think of Pablo Escobar – the friend of the poor, the anti-corruption, anti-communist activist, the family man.

Pablo Escobar as a criminal – the negative fetishization of a drug lord.

Pablo Escobar with a child – the positive fetishization of a drug lord.

When another representation is introduced, it is considered in the context of the existing fetish. Thus, it is extremely difficult to convince someone that El Chapo is terrible when they have internalized a positive version of Pablo Escobar as their representation of drug lords. Any logical argument is subordinate to their personal ideology-based ‘experience’ of Escobar. Perhaps a poor man heard that Escobar gave out money in the streets and built schools for the impoverished; this gives him an emotional attachment – a fetish in a non-sexual sense – to the narrative of Pablo Escobar.

Modern political conflicts have begun using fetishized individuals in more obvious ways than ever before. The clearest example of this is Hitler, for he is the most completely fetishized person in the world. For almost everyone with an elementary education, mentions of autocracy, fascism, dictatorship, and genocide generate immediate images of Hitler with arm raised. One cannot win the ideological battle of making autocracy acceptable until one has made Hitler acceptable.

The first ideological step of neo-Nazis, therefore, is making the fetish of Hitler positive. This can be done in a variety of ways. For example, the extreme right-wing and anti-semitic site Rense.com published a series of images of the ‘hidden’ Adolf Hitler. Using these images of him – holding children, walking in gardens, smiling – makes it much harder to imagine him in other contexts. We find it conceptually difficult to unite the many disparate aspects of a person into a single unified identity. How could the same Hitler that ordered the Holocaust also kiss babies? Psychological research shows that cognitive dissonance like this causes tangible pain. The drive to eliminate the dissonance, then, leads some to fetishize Hitler in a wholly positive way.

A positive fetishization of Hitler

The Netflix original Narcos powerfully represents our difficulty in categorizing individuals. You see Escobar in a variety of contexts – at home with his family, in drug labs, on a farm working, and at war. It becomes difficult to remember his horrific crimes when he is watching clouds with his young children. We can’t really conceptualize a ‘whole’ person – only the person we are seeing at the time. Uniting all the different Escobars into one unified individual is almost impossible. Ideologies take advantage of this inability to unify, and summarize individuals by a single aspect. For some, the need to resolve cognitive dissonance means forgetting Escobar’s crimes to enable a positive fetishization of his figure.

The most recent presidential elections made fetishization a key aspect of political strategy. In 2008, Samuel Wurzelbacher asked Obama a simple question about small business tax policy – almost instantly becoming a key symbol of the presidential election. He mentioned something about wanting to buy a plumbing company, and the McCain campaign leaped at the chance to relate to an ‘ordinary American.’ They coined his new name – Joe the Plumber – and repeatedly used him as an example in campaign rhetoric. McCain used the symbol of Joe the Plumber to show that Obama was ‘out of touch with the average Joe.’ It didn’t matter that Samuel wasn’t really a plumber and his name wasn’t really Joe. Throughout the campaign, writes Amarnath Amarasingam, “A fictional plumber’s false scenario dominated media discourse” (source).

In the modern election, it seems that the myth of the ordinary Joe has taken hold even more firmly. America has a need to believe in the normal citizen, a 9-5er who wants only to find his dreams, stick to his moral standards, and support his family. And yes, this citizen is a he – we seem unable or unwilling to use a female figure as a symbol of American life.

Why do we feel a drive for the ordinary? After all, we are obsessing over the nonexistent. There is no ‘ordinary Joe.’ Every citizen has quirks, mistakes, sins, hidden lies, and extravagant dreams that prevent them from being ordinary. Joe can only exist as an idealized symbol, not a concrete individual. And yet the idea of the ordinary citizen is permanently entrenched in our minds. In some way, many people aspire to be average. This aspect of the psyche creates political battles over the ability to protect the ordinary individual, who stands as a metaphor for the whole American citizenry.

Thus, Ken Bone was created. He was a symbol of an ordinary person – appropriately but not excessively involved in politics, working the day job, dreaming small dreams, providing for the family. He was 2016’s version of 2008’s Joe the Plumber. He represented simple authenticity, the everyman – as his Twitter profile proclaims, he is merely an “average midwestern guy.”

He did not decide to become a meme. The media did not make him a meme; they merely capitalized on the attention once Ken Bone had already gone viral. He was not mass-produced by campaign offices and political propagandists. In an act of near-randomness, he was dubbed a meme by the distributed irrational network of sensation-seeking individuals we call the Internet.  The random series of viral creations in 2016 revealed that memes are fundamentally uncontrollable. After Harambe, damn Daniel, Ted Cruz the zodiac killer, how could we be surprised that Ken Bone was crowned a meme? 

Ken Bone could not even control what he himself symbolized. He attempted to control his own sign by consistently exhorting people to vote and make their voices heard. But all his efforts, for the most part, failed. Ken Bone does not symbolize democratic participation. After all, memes are inherently dehumanizing. To become a meme, an image must be dissociated from its reality and turned into something else. In linguistic terms, it’s a sign whose signified is malleable — the image’s meaning is created by those who share it. The meme itself has no power over its meaning.

This is the danger of living memes – they are tossed around by the whims of the Internet. And when these whims turn sour, the person suffers. Ken’s slightly quirky Reddit history was revealed, and he was painted as a monster.

I expect this process to continue endlessly: an individual becomes a sign that stands as a placeholder for a piece of political ideology. The individual becomes the object of immense attention, then is tossed out like trash. We should be careful that our memes do not make us think this is what people truly are. And we should not be surprised when the myth of the ‘ordinary citizen’ is shattered by the reality of the individual’s life and being.

 

Categories
Politics

The Gradual Causes and Long History of the ‘Fake News Crisis’

It seems undeniable that the specter of fake news has taken control of the media. It seems that we’ve entered a dark age of journalism, where the fake is indistinguishable from the real. It seems that we are living through an unprecedented era of hoaxing and counterfeiting.

But journalism has never been free of fake news. The Columbia Journalism Review published a detailed history of fake news in the United States. In short: fake news isn’t new, and it has real impacts. For example, in 1874, after the New York Herald published a fabricated report that dangerous animals had escaped the Central Park Zoo, New Yorkers fled the city in droves and marched into public parks with guns.

And fake news existed centuries before modern journalism. In 1475, an Italian preacher claimed that Jews had drunk the blood of an infant (source). This led the local bishop to order the arrest of all the local Jews, and fifteen of them were burned alive. The fake story spawned even more hysteria about vampiric Jews, which spread across Europe despite papal declarations attempting to end the panic.

Fake news has unbelievable power. In Journalism: A Critical History, Martin Conboy demonstrated its dramatic role in history. In 1898, the USS Maine exploded in Havana harbor, killing over 250 people. The cause was never conclusively determined. The Spanish government, which controlled Cuba, expressed sympathy for the disaster and denied any involvement. The captain of the Maine, who survived the explosion, urged Americans to withhold judgment to prevent conflict with the Spanish.

The headlines after the Maine explosion – based on fake news.

Regardless, Joseph Pulitzer, publisher of the New York World, quickly condemned Spain, claiming it had sabotaged the Maine. The World published a cable showing that the Maine was not an accident – even though this cable was completely fake. Newspapers published imaginary drawings of the explosion – even though no one had seen it. Sales of the World skyrocketed, and the public demanded revenge. Fake news helped start the Spanish-American War. Maybe we shouldn’t be surprised that the namesake of journalism’s highest award, the Pulitzer Prize, was a purveyor of fake news.

So is there anything new about the recent fake news? Yes – because Americans are far more dependent on news. News, both print and digital, takes far more forms than at any other point in history: videos, images, blogs, tweets, posts, articles. Almost all Americans can read basic English (source), 84% of Americans use the internet (source), and 79% of American internet users are on Facebook (source).

Never before the last few decades has the vast majority of the population been simultaneously connected to a source of instant news. A meme, story, or fake event can now spread across the public awareness in a few hours. The fundamental nature of fake news hasn’t changed. It has just become far more common and accessible – just as the modern transportation system allows viruses to spread far more quickly.

A map of global disease spread – not too far from the transmission of fake news.

Furthermore, perhaps the American public has become increasingly vulnerable to fake news. While this claim is difficult to verify, it’s possible that the average reading level has declined. In 1776, the relatively complex, sophisticated pamphlet Common Sense sold 500,000 copies, roughly 20% of the colonial population (source). Now, less than 13% of Americans are proficient in “reading lengthy, complex, abstract prose texts” like Common Sense (source). The percentage of Americans who can understand Common Sense today appears smaller than the proportion that owned it in 1776. Plus, the most recent studies show that American reading proficiency has declined over the last two decades (source). Even among college graduates, the proportion that can understand and reason about complex texts has decreased to less than 31% over the last decade (source).
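
To make that comparison concrete, here is the back-of-the-envelope arithmetic – a rough sketch, assuming a colonial population of about 2.5 million (the figure implied by the 20% claim above):

\[
\frac{500{,}000 \text{ copies}}{2{,}500{,}000 \text{ colonists}} = 20\% \;>\; 13\% \text{ of Americans proficient today}
\]

Of course, copies sold is only a rough proxy for readers, so this is an illustration rather than a precise measure.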

It’s a viable theory that these two trends – increasing access to news and decreasing reading ability – have created a perfect storm for fake news. Americans aren’t as likely, or as able, to make nuanced, reasoned analyses of complex texts. They’re more likely to have access to the oversimplified and sensationalized world of internet news (and news in general). More people can be infected by the virus (fake news), and fewer people have the vaccine (critical thought). As a result, a single tweet can spawn a flurry of fake news that quickly becomes an accepted part of the American psyche.

However, the concept of fake news is also dangerous in other ways. It has already been used as a political weapon to shut down opposing journalism. The left has used it to deride right-wing sources, and the right has co-opted it to attack left-wing news. Already, the LA Times and Washington Post have claimed that right-wing sites like the Ron Paul Institute and Breitbart.com are ‘fake news’ (source).

These sites could be derided as biased producers of dangerous propaganda, but this is not the type of fake news I’m interested in. Breitbart may be skewed, but it does base its news, however loosely, on actual events. Fake news is completely counterfeit – without a referent in the real world. To avoid ‘fake news’ becoming a tool to eliminate enemy voices, we need to delineate the concept clearly and craft solutions carefully.

This is an intro to some of my further research into fake news. This week, I’m going to write another article about the philosophy of fake news, and then one about solutions to the problem. I’ll try to relate the issue to Baudrillard’s theory of hyperreality, examine the differences between Kantian and utilitarian journalistic ethics, and consider what Plato’s critique of the sophists can tell us about postmodernism. Maybe I’ll even make up some ideas of my own.

Why am I so interested? I think that fake news is a microcosm of the larger issue of the ‘postmodern condition,’ which is what I’m focusing on for my three-week independent study. It relates to the need for classical education, which is what I’m studying in a directed readings class. And it’s a good area for philosophical research that hasn’t yet been fully explored.

Categories
Politics

What Matters in a President, and Why Electability Doesn’t

I’m not going to argue for any of the candidates in this post. That’ll come later. For now, I think there are three main factors that should be considered in a president. They are all interrelated and listed in order of importance. However, if a candidate fails any one of these criteria, the remaining criteria become practically meaningless.

  1. Character – This consists mostly of the moral standards and honesty of the candidate. If I do not trust a candidate, their competence becomes irrelevant, as it will not be used ethically. Their positions become meaningless because they will abandon policy and ethical standards at will. Character also includes temperament and personality, as an angry, irrational, and unstable candidate is a danger to the world and ineffective in diplomacy.
  2. Competence – The proven experience of the candidate, their intelligence, and their ability to implement policies effectively. If a candidate isn’t politically competent, their policies won’t matter because they will never be implemented. Intelligence is not measured by IQ, but by the candidate’s understanding of the world, their rationality, their education, and their working ability.
  3. Policy – The stated positions of the candidate. If every candidate could be trusted to follow their policy statements exactly and implement them effectively, this would be the only issue. Despite its importance, policy is by far the least-discussed issue in this election.

On Electability

Electability, for me, is mostly a non-issue. Of course, a candidate must have some chance of becoming president, or we will be divided into minuscule factions and candidates will only have to win a small portion of the vote to take the election. However, “some chance” is a low bar. For example, Zoltan Istvan, the transhumanist candidate, is not on the ballot in any state and is not polling at more than 5% in any state (source). He falls below the “some chance” bar: 25 days from the election, he has no path to the presidency. However, Evan McMullin, an independent candidate, is on the ballot in 11 states (source), has a significant chance of winning Utah (source), and has a growing campaign nationally. If a candidate passes this minimum threshold of electability, we should move on and consider the three most important factors.

Our democratic obligation is to vote for the candidate we support. Otherwise, our system degrades and no longer represents the population, as the economist John Maynard Keynes described in his famous ‘beauty contest’ passage:

We have reached the third degree where we devote our intelligence to anticipating what average opinion expects the average opinion to be.

If we do not vote our conscience, we as a population fail to represent ourselves. We do not ‘throw our vote away’ when we vote for an unlikely candidate we genuinely support; rather, we throw our vote away when we do not vote for what we believe. We are not voting for ourselves, but for someone else, for the polls, for the average. Popular opinion becomes the popular opinion of what the popular opinion is; democracy devolves into regressive guessing at the average. Furthermore, government is only legitimate when it represents the governed. When we do not represent ourselves, our government becomes illegitimate.

Finally, there are a ton of misconceptions about voting power in our democracy.

First, statistical analysis shows that, in general, your vote has the most power if you vote for a third-party candidate, not for a major party. I don’t really see the point of explaining this, as the linked post explains it very well. I’d definitely recommend reading it.

Second, the power of a single vote is extremely close to zero. This election, your vote will probably be about 1 of roughly 125 million cast. Therefore, the best reason to vote is not to control the election, but to represent ourselves. Don’t do it merely for the results; do it because you believe in your candidate.
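
To make ‘extremely close to zero’ concrete, here is the arithmetic behind that figure, assuming total turnout of roughly 125 million as estimated above:

\[
\frac{1}{125{,}000{,}000} = 8 \times 10^{-9} = 0.0000008\% \text{ of the total vote}
\]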

Third, much of the time, your publicly expressed opinions matter more than your vote, because these opinions influence a significant number of votes. Who you support actively matters more than who you vote for quietly.

Fourth, whether or not your candidate is elected is not the only measure of voting power. You could say all the Bernie Sanders votes this year were wasted because he didn’t win, but he still radically influenced the election and changed American politics permanently. Winning ≠ success.

Fifth, when you vote for a third-party candidate, you break out of the mold. This draws attention far more than obediently voting for established candidates that adhere to the two-party system. Therefore, votes for a third-party candidate carry more influence than other votes.

That’s why I don’t think electability matters, and why I don’t think it should matter. Vote your conscience this election.