What is Morality? Exploring the Definition of Right and Wrong

Morality is a concept that underpins much of human interaction and societal structure. Yet defining morality is far from simple. This article explores the multifaceted definition of morality, moving beyond simple descriptions to delve into its philosophical depths. We will not focus on moral theory directly, but rather on the very definition of morality itself – the target at which moral theorizing is aimed. Understanding this target is crucial, as it allows us to recognize different moral theories as attempts to grapple with the same fundamental concept.

Defining morality also provides a framework for empirical researchers in psychology, anthropology, and evolutionary biology. It allows these scientists to design experiments and formulate hypotheses about moral behavior without being overly constrained by specific cultural or theoretical biases about what constitutes morality. Different academic disciplines may emphasize different aspects of morality based on their research objectives, and we will explore these varying criteria for an adequate definition.

However, it’s important to acknowledge from the outset that there might not be a single, universally applicable definition of morality, even within the realm of philosophy. The term “morality” itself appears to be used in two distinct broad senses: a descriptive sense and a normative sense. Specifically, “morality” can be understood:

  1. Descriptively: Referring to particular codes of conduct that are accepted by a society, a group (like a religious community), or even an individual to guide their own actions.
  2. Normatively: Referring to a code of conduct that should be endorsed by all rational individuals under specific conditions. This is often seen as a standard against which actual codes of conduct can be evaluated.

When defining “morality” in a descriptive sense, we must determine which codes of conduct within a society or group genuinely qualify as moral codes. Even in societies without written laws, distinctions often exist between morality, legal rules, and religious dictates. In more complex societies, these distinctions become even more pronounced. Therefore, “morality” cannot simply encompass every code of conduct a society endorses. As Dahl (2023) points out, a descriptive definition must be distinctive, setting apart moral judgments, principles, or codes from other types of normative judgments.

In the normative sense, “morality” points to a code of conduct that would be embraced by anyone meeting certain intellectual and volitional criteria, fundamentally including rationality. Such an individual is typically referred to as a moral agent. However, demonstrating that a code would be endorsed by all moral agents is not sufficient to establish it as the moral code. Rational moral agents might also endorse codes of prudence or rationality itself, but prudence is not synonymous with morality. Therefore, a normative definition requires additional elements, such as impartiality or the idea that morality serves to enable harmonious coexistence within groups.

As highlighted, not all codes endorsed by societies are moral in the descriptive sense, and not all codes endorsed by moral agents are moral in the normative sense. Any comprehensive definition of morality, in either sense, requires further specification. Yet, these initial descriptions provide key features that any adequate definition should incorporate. They offer definitional features of morality in both its descriptive and normative forms. A robust definition should identify enough of these features to categorize various theories – normative moral theories and descriptions of societal moralities – as belonging to a common subject. This is the understanding of “definition” that guides this exploration.

1. Is Morality Unified Enough for a Single Definition?

The very existence of this discussion suggests an underlying assumption: that there is a unifying set of features that allows us to categorize diverse moral systems as “moral.” However, philosopher Walter Sinnott-Armstrong (2016) challenges this assumption, arguing against a unified basis even for moral judgments. He suggests that this lack of unity extends to morality itself, questioning whether it is a truly unified domain. Sinnott-Armstrong contends that moral judgments cannot be unified solely by the concept of harm to others. He points to moral ideals, as well as to behaviors widely considered morally wrong even though they cause no harm, such as cannibalism or flag-burning. Whether these judgments are correct is beside the point; the question is whether they are genuinely moral judgments to begin with.

Sinnott-Armstrong’s argument that moral judgments cannot be delimited by content alone appears valid. It is conceivable that someone could be raised to believe that it is morally wrong for men to wear shorts. Similarly, his argument against identifying moral judgments through unique neurological features seems plausible. Another approach might be to define moral judgments as those arising from social practices with specific functions. However, defining this function simply as facilitating social interactions for societal flourishing is too broad, as many non-moral judgments also serve this purpose.

Furthermore, attempts to define descriptive moral codes based on their function often seem to project the function theorists believe morality should serve (normative sense) onto actual moralities (descriptive sense). For instance, Joshua Greene proposes that:

morality is a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation (2013: 23).

Jonathan Haidt similarly suggests:

moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible (2012: 270).

These claims must account for the existence of dysfunctional moralities that demonstrably fail to serve these functions. One possible response is to acknowledge that, like a malfunctioning heart, instances of something with a function can still fail to fulfill that function.

Even if Sinnott-Armstrong is correct about the potentially disunified nature of descriptive morality, normative morality might still be unified. While descriptive morality might be a loosely defined, open-textured concept, normative morality – a code endorsed by all rational agents – could be more cohesive. Consider the analogy of “food.” Descriptively, food is simply what people consider food, a vastly diverse category, even including indigestible and non-nutritious items. However, this doesn’t preclude us from theorizing about what it would be rational to consider food – a more normatively defined category.

2. Descriptive Definitions of “Morality”

An initial, simplistic attempt at a descriptive definition of “morality” might define it as the most important code of conduct endorsed and accepted by a society. However, in large, diverse societies, this definition becomes problematic as there may not be a single, universally recognized “most important” code. A more nuanced descriptive definition might refer to the most important code endorsed by any group within a society, or even by an individual. Such descriptively defined moralities, beyond generally prohibiting harm to certain individuals, can vary significantly in content.

Law is distinct from morality due to its explicit written rules, defined penalties, and designated officials for interpretation and enforcement. While morality and law often overlap in regulating conduct, laws are frequently evaluated and reformed based on moral considerations. Some legal theorists, like Ronald Dworkin (1986), even argue that moral principles are essential for legal interpretation.

While a group or society’s morality may stem from its religion, morality and religion are not identical. Morality is primarily a guide to conduct, whereas religion encompasses much more. Religion typically includes narratives about past events, often involving supernatural beings, which serve to explain or justify prescribed behaviors. Although religious and moral codes often share common ground in prohibited and required actions, religions may extend beyond explicitly moral guidelines, recommending behaviors that morality might even prohibit. Even when morality is not directly tied to formal religion, it’s often perceived as needing religious explanation or justification. However, similar to law, religious practices and precepts can be morally critiqued, for example, for promoting discrimination based on race, gender, or sexual orientation.

When “morality” denotes a code of conduct endorsed by a group (including a society), as distinct from its law and religion, it is being used in the descriptive sense. This descriptive sense also applies when referring to individual attitudes. Just as we can discuss “Greek morality,” we can speak of “a person’s morality.” This descriptive use is increasingly prominent, particularly in the work of psychologists like Jonathan Haidt (2006), influenced by David Hume’s (1751) naturalistic perspective on moral judgments.

Guides to behavior considered moral generally involve avoiding and preventing harm to others (Frankena 1980) and often include a principle of honesty (Strawson 1961). However, moralities encompass broader concerns. R.M. Hare’s (1952, 1963) view of morality as whatever is regarded as most important allows concerns rooted in religious practices, customs, and traditions, such as purity and sanctity, to outweigh harm prevention in certain moral codes.

Descriptive morality can vary significantly in content and claimed foundations. Societies might prioritize purity and sanctity, grounding their morality in divine commands. This descriptive understanding, allowing for religion as the basis of morality, can lead to codes that conflict sharply with normative accounts of morality.

A society might have a morality that prioritizes tradition, custom, and loyalty to the group and its authorities over harm prevention. Such a morality might condone harmful actions against outsiders in the name of in-group loyalty. This familiar type of morality, where in-group loyalty is almost synonymous with morality itself, allows comparative and evolutionary psychologists, such as Frans De Waal (1996), to see parallels between human morality and the behavior of non-human animals.

While all societies include more than just harm minimization in their moralities, this element, unlike purity, sanctity, authority, or loyalty, appears to be a universal component of all recognized moral systems. Because harm minimization can conflict with authority and loyalty, fundamental disagreements can arise within a society about morally right conduct. Philosophers like Jeremy Bentham (1789) and John Stuart Mill (1861), advocating a normative view prioritizing harm prevention, critique actual moralities (descriptive sense) that prioritize purity or loyalty when they clash with minimizing harm.

Some psychologists, like Haidt, argue that morality descriptively includes concerns for harm, purity, and loyalty, with varying emphasis across individuals and societies. However, beyond harm avoidance within certain groups, there may be no universal content shared by all descriptive moralities. Justifications also vary, ranging from religion to tradition to rational human nature. Beyond harm, the common thread in descriptive moralities is their endorsement by an individual or group, typically a society, serving as a behavioral guide. Descriptive morality may lack impartiality towards all moral agents and may not be universally applicable (compare MacIntyre 1957).

While most philosophers do not use “morality” descriptively, some do. Ethical relativists like Gilbert Harman (1975), Edward Westermarck (1960), and Jesse Prinz (2007) deny a universal normative morality, asserting that societal or individual moralities are the only real moralities. They argue that “morality” only has a valid referent when used descriptively and that normative morality is a mistaken concept. While acknowledging that many English speakers use “morality” normatively, relativists believe they are mistaken in assuming a universal moral code exists. Ethical relativists are essentially moral skeptics regarding normative morality.

Descriptive “morality” can encompass codes with vastly different content while remaining unambiguous, similar to the unambiguous use of “law” despite diverse legal systems. However, descriptive “morality” can refer to societal, group, or individual codes. This can lead to ambiguity when a religious group’s code conflicts with societal norms: are these conflicting moralities, internal moral conflicts, or a clash between a group code and morality itself?

In small, homogenous societies, a single behavioral guide endorsed and accepted by nearly all members may exist. In such cases, the referent of “morality” is relatively unambiguous. However, in larger societies, individuals often belong to groups with conflicting behavioral codes, and societal codes are not always universally endorsed. If individuals prioritize a conflicting group code (often religious) over the societal one, they may view those following societal norms as immoral in cases of conflict.

Descriptively, an individual’s morality cannot be a code they would prefer others not to follow. However, adopting a personal moral code does not necessitate demanding universal adoption. Individuals may adopt demanding personal codes they deem too challenging for most. They might judge others who don’t adopt their code as less morally good, without necessarily considering them immoral. However, a behavioral guide is plausibly labeled “morality” only if the individual would permit others to follow it, understanding “follow” as “successfully follow.” They might not want others to try to follow it out of concern about the negative consequences of predictable failures stemming from bias, limited foresight, or limited intelligence.

3. Descriptive Definitions in Related Fields

Psychologists and anthropologists, needing to probe attitudes through questionnaires and other methods, might be expected to be particularly attentive to distinguishing moral judgments from other judgments. Yet, Abraham Edel (1962: 56) observed a lack of explicit concern for defining morality in anthropology, noting that “morality…is taken for granted, in the sense that one can invoke it or refer to it at will; but it is not explained, depicted, or analysed.” He warned of the danger of “merging the morality concept with social control concepts.” This danger was amplified by the influence of sociologist Émile Durkheim (1906/2009), who equated morality with how a society enforces its social rules.

This lack of operational definition for morality or moral judgment may contribute to the widespread but questionable assumption in anthropology, observed by James Laidlaw (2016: 456), that altruism is the core of ethics. However, Laidlaw also notes that many features of what Bernard Williams (1985) termed “the morality system”—features Williams critiqued as secularized Christian values—are prevalent beyond the West. This leads Laidlaw to ask:

Which features, formal or substantive, are shared by the “morality system” of the modern West and those of the other major agrarian civilizations and literate religions?

This question closely resembles a request for a descriptive definition of morality.

Michael Klenk (2019) notes a recent “ethical turn” in anthropology, with moral systems and ethics becoming distinct objects of study, moving away from the Durkheimian paradigm. This includes examining self-development, virtues, habits, and deliberation in moral breakdowns. However, Klenk’s survey of anthropological attempts to study morality as an independent domain concludes that their efforts so far:

do not readily allow a distinction between moral considerations and other normative considerations such as prudential, epistemic, or aesthetic ones (2019: 342).

In light of Edel’s concern about conflating moral systems with social control, Oliver Scott Curry’s (2016) hypothesis is relevant. Curry argues that:

morality turns out to be a collection of biological and cultural solutions to the problems of cooperation and conflict recurrent in human social life (2016: 29).

Curry points out that rules related to kinship, mutualism, exchange, and conflict resolution are found across societies. He argues that many of these rules have precursors in animal behavior and can be explained by his hypothesis of morality as a solution to problems of cooperation and conflict. He also notes that philosophers from Aristotle to Rawls have emphasized cooperation and conflict resolution in understanding morality. However, it is unclear whether Curry’s view adequately distinguishes morality from law or from other systems that aim to reduce conflict through coordination.

In evolutionary biology, morality is sometimes equated with fairness (Baumard et al. 2013: 60, 77) or reciprocal altruism (Alexander 1987: 77). It has also been identified with an evolved capacity for certain kinds of judgment and signaling (Hauser 2006). These approaches treat morality as a natural kind, identifiable through causal and historical processes, which lessens the need for content-based definitions. Instead, identifying a few central features suffices to pick out psychologically and biologically distinct mechanisms, and the study of morality becomes a detailed inquiry into the nature and evolution of those mechanisms.

Psychology also reflects this “natural kind” view of moral judgment (Mikhail 2007). If moral judgment is a natural kind, an individual’s moral code could simply be their disposition to make moral judgments. Evidence for this hypothesis includes the relative universality of moral concepts like obligation, permission, and prohibition, and arguments similar to Chomsky’s “poverty of the stimulus” for universal grammar (Dwyer et al. 2010; Roedder and Harman 2010).

A key area in psychology is the distinction between the moral and the conventional (Machery and Stich 2022). This distinction differentiates between (a) acts wrong only due to convention or authority and (b) acts wrong independently, possessing seriousness, justified by harm, rights, or justice. Elliot Turiel emphasized this, highlighting the danger of conflating moral rules with non-moral “conventions that further the coordination of social interactions within social systems” (1983: 109–11). Those accepting this distinction implicitly offer a descriptive definition of morality. However, not all psychologists agree on this distinction (see Machery and Mallon 2010 and Kelly et al. 2007).

Psychologist Kurt Gray’s account of moral judgment offers a way to determine individual or group morality. He and colleagues propose:

morality is essentially represented by a cognitive template that combines a perceived intentional agent with a perceived suffering patient (Gray et al. 2012: 102).

While this is a strong claim, it concerns the template we use in moral thought, not morality’s inherent nature. Just as a cognitive template for “dog” might include four legs, a tail, and fur without requiring that every dog fit it perfectly, the moral template does not require that every instance of morality fit it.

Even if Gray et al.’s hypothesis is correct, our psychology doesn’t necessarily require us to always think of morality in terms of intentional agents and suffering patients. Despite some suggestions that “moral acts can be defined in terms of intention and suffering” (2012: 109), their considered view is that the dyadic template fits the majority of moral situations as we perceive them. The connection between immoral behavior and suffering they cite is sometimes indirect. For instance, they fit authority violations into their suffering template by noting that “authority structures provide a way of peacefully resolving conflict” and “violence results when social structures are threatened.”

In a recent discussion of morality as a psychological object, Audun Dahl (2023) defines “morality” in terms of concerns that agents deem obligatory and that relate to others’ welfare, rights, fairness, and justice, together with their psychologically relevant effects. Dahl’s definition has merit, but he doesn’t present it as uniquely best. He emphasizes that empirical science doesn’t require a “correct” definition of “moral,” but clarity about the sense used in research. Dahl argues that definitions for psychologists should be technical (well-defined, applicable, somewhat aligning with everyday use), psychological (picking out common psychological characteristics), descriptive (independent of normative stances), and distinctive (differentiating moral attitudes from other normative attitudes).

The increasing prevalence of natural language processing and AI technologies makes defining morality practically relevant. It’s not just about AI agents avoiding immoral behavior; a definition of morality is crucial for training AI and benchmarking its ethical acceptability. Research on moral phenomena in AI and NLP is expanding rapidly (see Hagendorff and Danks 2023). This offers new perspectives on the determinants of human moral judgment (e.g., Pauketat and Anthis 2022), as humans now interact with non-human agents resembling humans in action and communication. Vida et al. (2023) suggest this research may lead to broader definitions of morality in psychology and beyond.

Standardized benchmarks for moral value alignment in AI would be beneficial. However, no such benchmarks exist due to the lack of a shared morality definition. Vida et al.’s survey of nearly 100 AI ethics papers reveals:

there is a lack of clarity and consistency as to whether morality in [natural language processing and artificial intelligence] is addressed purely empirically or also normatively. This lack of clarity persists also in regards to the further usage of ethical terminology (2023: 5538).

4. Normative Definitions of “Morality”

Explicit philosophical attempts to define normative morality are scarce, particularly since the early 20th century. This might stem from early positivist worries about the metaphysics of normative properties, compounded by Wittgensteinian doubts about the value of defining philosophically significant terms. Whatever the reason, when definitions are offered, they often target moral judgment (Hare 1952, 1981) rather than morality itself.

Even moral realists with developed theories rarely offer explicit morality definitions. Instead, they typically justify a set of norms they assume their audience is already familiar with, implicitly defining morality through uncontroversial content like prohibitions against killing, stealing, lying, cheating, etc. This “reference-fixing” or “substantive definition” (Prinz and Nichols 2010: 122) is useful for theory-neutral starting points.

However, a reference-fixing definition is not a definition in the sense explored here. It merely specifies content, leaving implicit what makes that content moral. A better definitional schema is: morality is the behavioral code that all rational persons, under specified conditions, would endorse. Some who use this schema argue that no such code exists – these are moral skeptics regarding normative morality.

Moral skeptics are valuable when seeking definitions of normative morality, because their arguments often specify definitional features and then argue that nothing possesses them. For example, some skeptics reject the “code” aspect at morality’s core, advocating normative theorizing around the good life or virtues instead. They view morality as definitionally requiring a “code.” Elizabeth Anscombe (1958) expressed this view, echoed by Bernard Williams (1985). J.L. Mackie (1977) was skeptical because he believed no code could infallibly motivate those who understood it; he defined morality as requiring endorsement by all rational beings in a strong, motivation-including sense.

4.1 Morality and Rationality

Understanding normative morality heavily depends on one’s concept of rationality. Normative morality is sometimes seen as prohibiting certain forms of consensual sexual activity or recreational drug use. Including such prohibitions in a universal guide for rational persons requires a specific view of rationality. Many would dispute that harmless consensual sex or recreational drug use is inherently irrational.

One concept of rationality that excludes sexual matters from morality, at least at the basic level, holds that actions are irrational when they increase the agent’s risk of harm to themselves without a compensating benefit for someone (themselves or others). This “hybrid” rationality concept blends self-interest and altruism. Morality based on hybrid rationality could align with Hobbes (1660), focusing on peaceful coexistence and harm prevention. Moral prohibitions against harmful actions are not absolute but require justification for violation. Kant (1797) seemingly believes some prohibitions, like lying, are never justified. This stems from Kant’s (1785) purely formal rationality concept, contrasting with the hybrid concept.

Consequentialist views may seem to deviate from the normative “morality” definition schema, since they do not explicitly reference endorsement or rationality. However, this appearance is misleading. Mill defines morality as:

the rules and precepts for human conduct, by the observance of which [a happy existence] might be, to the greatest extent possible, secured (1861 [2002: 12]).

He also holds that a mind in a “right state,” one “most conducive to the general happiness,” favors this morality. The act-consequentialist J.J.C. Smart (1956) explicitly treats ethics as the study of rational behavior, embracing utilitarianism because he holds that maximizing utility is always rational. Many moral theorists implicitly assume their codes would be endorsed by rational people under certain conditions. Without this, moral requirements could be rationally dismissed with a shrug. Most moral realists don’t believe this option is open, especially once conditions beyond rationality are added, such as restrictions on beliefs (Rawls’s veil of ignorance) or impartiality.

Normative “morality” often implies inherent overridingness: moral prohibitions and requirements should never be violated for non-moral reasons. This is trivial if “should” means “morally should.” The overridingness claim is typically understood with “should” meaning “rationally should,” asserting that moral requirements are rational requirements. While common, this is not always treated as definitional. Sidgwick (1874) doubted that rationality requires choosing morality over egoism, though he did not think it requires egoism either. Gert (2005) argued that moral behavior is always rationally permissible but not always rationally required. Foot (1972) suggested that moral reasons stem from contingent commitments or objective interests, which are sometimes absent, so that moral behavior is not always rationally required. Moral realists who hold desire-based theories of reasons, as well as some theorists of formal rationality, sometimes deny that acting morally is always even rationally permissible (Goldman 2009), a consequence seemingly implied by Foot’s view, though she does not emphasize it.

Although theorists like Sidgwick, Gert, Foot, and Goldman do not regard moral behavior as always rationally required, they can still use “morality” in the normative sense. Normative “morality,” and its existence, requires only that rational people would endorse such a system, not that they would always be motivated to follow it. However, a theorist who denies even this endorsement is either not using “morality” normatively or is denying that normative morality exists. Such theorists might use “morality” descriptively or without a specific sense in mind.

4.2 Morality as a Public System

Let’s define a public system as a system of norms that (1) is knowable to all those to whom it applies, and (2) is not irrational for any of them to follow (Gert 2005: 10). Legal systems ideally should be public, but in large societies this is impossible. Games come closer to being public systems: all players either know the rules or know that there are judges whose interpretation of the rules is authoritative. A game’s rules apply only to its players, and a player who finds the rules undesirable can quit. Normative morality is plausibly defined as the one public system no rational person can quit. This inescapability means one cannot avoid legitimate sanction for moral violations except by ceasing to be a moral agent. Morality applies to anyone who is a rational person, aware of what morality prohibits and requires, and capable of guiding their behavior accordingly.

Public systems can be formal or informal. Informal systems lack authoritative judges and decision procedures that provide unique guides to action or resolve disagreements; formal systems have one or both (Gert 2005: 9). Professional basketball is formal: referees’ foul calls are definitive. Pickup basketball is informal. Persistent moral disagreements suggest that morality is an informal public system. This holds even for Divine Command Theory and act utilitarianism, since there are no authoritative judges of God’s will or of which act maximizes utility, and no decision procedures for settling such questions (Scanlon 2011: 261–2). Recognizing persistent moral disagreement and morality’s status as an informal public system implies that some moral issues are unresolvable. Political or legal systems can resolve them formally, but these systems do not thereby provide uniquely correct moral guides.

Despite significant moral controversies, morality, like all informal public systems, presupposes agreement on most situations. Everyone agrees that killing or seriously harming moral agents requires strong justification. Trivial daily decisions are rarely discussed, masking the vast agreement on moral rules and justifications for their violation. This agreement enables morality to function as an informal public system.

Using the “informal public system” notion improves the normative “morality” definition schema. The old schema was: morality is the code of conduct all rational persons would endorse. The improved schema is: morality is the informal public system all rational persons would endorse. Some theorists might not see informality as definitional, believing morality could provide precise answers to every moral question. This would imply that conscientious moral agents often cannot know what morality permits, requires, or allows, a possibility some philosophers deny.

4.3 The Content of Morality

For moral realists seeing normative morality as an informal public system endorsed by all rational persons to govern moral agent behavior, morality has fairly definite content. Philosophers like Hobbes (1660), Mill (1861), and most non-religious Anglo-American philosophers limit morality to behavior directly or indirectly affecting others.

The claim that normative morality governs only behavior affecting others is somewhat controversial and perhaps shouldn’t be definitional, even if entailed by the correct moral theory. Some argue morality also governs self-regarding behavior like recreational drug use, masturbation, or talent neglect. Kant (1785) might represent this wider morality concept. Interpreted this way, Kant’s theory fits the basic schema but includes self-regarding moral requirements due to his specific rationality account. However, pace Kant, it’s doubtful all moral agents would endorse a universal guide governing entirely self-affecting behavior. When morality is fully separated from religion, moral rules seem to limit content to behavior directly or indirectly causing or risking harm to others. Some seemingly self-affecting behaviors, like recreational drug use, may indirectly harm others by supporting harmful illegal activities.

Confusion about moral content sometimes arises from insufficient distinction between morality and religion. Governing self-affecting behavior is supported by the idea of divine creation and obedience, possibly a holdover from when morality and religion were less distinct. This religious influence may also affect claims about the immorality of homosexuality. Those clearly distinguishing morality from religion typically don’t consider sexual orientation a moral matter.

One can argue that a specific social goal is definitional of morality (Frankena 1963). Stephen Toulmin (1950) proposed societal harmony. Kurt Baier (1958) suggested “the good of everyone alike.” Utilitarians often cite maximizing overall good. Gert (2005) proposed lessening evil or harm. This might seem a narrow utilitarian view, but utilitarians always include harm reduction as essential to maximizing good, and harm avoidance/prevention dominates their examples. Paradigm moral rules prohibit direct or indirect harm, like rules against killing, pain infliction, deception, and promise breaking.

Among moral realists, content similarities outweigh differences. All prohibit actions like killing, pain infliction, deception, and promise breaking. Some include charitable duties, but failing to be charitable isn’t justified in the same way as harm-causing acts. Kant (1785) and Mill (1861) distinguish perfect (not harming) and imperfect (helping) duties. Gert (2005) sees charity as morally good but not morally required; being charitable is always good, but not being charitable is not immoral.

5. Relations Between Normative and Descriptive Morality

Normative “morality” need not possess the two formal features essential to descriptive moralities: societal, group, or individual endorsement and acceptance as a behavioral guide. Normative morality might never have been endorsed by any society, group, or individual. This stems partly from its definition in terms of a likely counterfactual conditional: the code rational people would endorse under certain conditions.

Moral realists might expect descriptive moralities to approximate normative morality to some extent. They might argue that some societal codes lack so many essential normative morality features that they shouldn’t even be considered descriptive moralities (Luco 2014: 385). Even with such criteria, it remains plausible that all societies have something classifiable as “morality.” One could simply argue that many, perhaps all, are somewhat flawed. Moral realists might believe these guides have enough normative features to be descriptive moralities, but would not be fully endorsed by all moral agents.

While most moral realists don’t claim any society’s actual guide is normative morality, “natural law” theories argue that any rational person in any society, even with a flawed morality, can grasp general moral prohibitions, requirements, discouragements, encouragements, and allowances. Theological natural law theory (Aquinas) attributes this to God-implanted reason. Secular natural law theory (Hobbes 1660) credits natural reason alone. Natural law theorists also claim morality applies universally to all rational persons, past and present. Such views can blur normative and descriptive morality by suggesting a sense in which all societal members are already aware of and accept the same code.

In contrast to natural law, other moral theories hold less strong views on universal moral knowledge. Yet, many believe morality is knowable to all it legitimately judges. Baier (1958), Rawls (1971), and contractarians deny “esoteric” morality: judging people by unknowable rules. For these theorists, morality is a public system. Moral blame differs from legal or religious blame in that it cannot be applied to those legitimately ignorant of their moral obligations.

6. Variations

As one elaborates on “endorsement,” “rationality,” and “conditions” in normative morality definitions, one moves from definition towards actual moral theory. The same applies to descriptive morality definitions as one specifies “endorsement” by persons or groups. The following are four broad ways to refine normative morality definitions, focusing on “endorsement.” They are schematic enough to be definitions but relate to specific theories, showing the general schema’s applicability. Similar examples could be offered for descriptive morality.

The expressivist Allan Gibbard (1990) argued that moral assertions express the acceptance of norms governing the emotions of guilt and anger. A moral realist could refine the normative morality schema using this:

V1: Morality is the informal public system identified by the set of norms for guilt and anger that all rational people, under specified conditions, would accept.

Similar refinements could use norms for praise and blame (Sprigge 1964: 317) or reward and punishment (Skorupski 1993). These “expressivist” adaptations specify “endorsement” more concretely.

Another “endorsement” interpretation is advocacy. Advocacy is second- or third-personal, directed at others. One can advocate a code without personally intending to follow it. Hypocritical advocacy is still advocacy. “Endorsement” as advocacy is applicable to descriptive morality (group/societal) and normative morality. Gert (2005) offers such a view. The corresponding definition is:

V2: Morality is the informal public system that, under specified conditions, would be advocated by all rational people.

“Advocacy” is less apt for an individual’s descriptive morality, since the possibility of hypocrisy means that what a person advocates may not reflect their genuine moral views. “Endorsement” as acceptance applies to individuals and groups, yielding:

V3: Morality is the informal public system that, under specified conditions, would be accepted by all rational people.

Contractarians like Gauthier (1986), prioritizing a first-person perspective, might see morality this way, as acceptance is first-personal.

T.M. Scanlon (1982, 1998) suggests morality’s subject matter is rules for behavior regulation that are not reasonably rejectable based on a desire for informed, unforced general agreement. This can also be seen as an instance of the schema:

V4: Morality is the informal public system that, given reasonableness and a desire for unforced general agreement, would not be rejected by any rational people.

V4 is a limiting case, interpreting “endorsement” simply as non-rejection.

Bibliography

  • Alexander, Richard, 1987, The Biology of Moral Systems, New York: Routledge.
  • Anscombe, G. E. M., 1958, “Modern Moral Philosophy”, Philosophy, 33(124): 1–19. doi:10.1017/S0031819100037943
  • Aquinas, Thomas, c.1270, Summa Theologiae, Paris.
  • Baier, Kurt, 1958, The Moral Point of View, Ithaca, NY: Cornell University Press.
  • Baumard, Nicolas, Jean-Baptiste André, and Dan Sperber, 2013, “A Mutualistic Approach to Morality: The Evolution of Fairness by Partner Choice”, Behavioral and Brain Sciences, 36(1): 59–78. doi:10.1017/S0140525X11002202
  • Bentham, Jeremy, 1789, An Introduction to the Principles of Morals and Legislation, New York: Prometheus Books, 1988.
  • Brink, David, 1997, “Kantian Rationalism: Inescapability, Authority, and Supremacy”, in Ethics and Practical Reason, Garrett Cullity and Berys Gaut (eds.), Oxford: Oxford University Press, pp. 255–291.
  • Curry, Oliver Scott, 2016, “Morality as Cooperation: A Problem-Centred Approach”, in The Evolution of Morality, Todd K. Shackelford and Ranald Hansen (eds.), Cham: Springer, pp. 27–51. doi:10.1007/978-3-319-19671-8_2
  • Dahl, Audun, 2023, “What We Do When We Define Morality (and Why We Need to Do It),” Psychological Inquiry, 34(2): 53–79.
  • Darwall, Stephen, 2006, The Second-person Standpoint: Morality, Respect, and Accountability, Cambridge, MA: Harvard University Press.
  • De Waal, Frans, 1996, Good Natured: The Origins of Right and Wrong in Humans and Other Animals, Cambridge, MA: Harvard University Press.
  • Doris, John M. and The Moral Psychology Research Group (eds.), 2010, The Moral Psychology Handbook, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199582143.001.0001
  • Durkheim, Émile, 1906 [2009], “La Détermination du fait moral”, in Sociologie et Philosophie, Paris: Félix Alcan, 1924; translated as “The Determination of Moral Facts”, in Sociology and Philosophy, David Pocock (ed. and trans.), 1953; reprinted London: Routledge, 2009, pp. 16–31.
  • Dworkin, Ronald, 1986, Law’s Empire, Cambridge, MA: Belknap Press.
  • Dwyer, Susan, Bryce Huebner, and Marc D. Hauser, 2010, “The Linguistic Analogy: Motivations, Results, and Speculations”, Topics in Cognitive Science, 2(3): 486–510. doi:10.1111/j.1756-8765.2009.01064.x
  • Edel, Abraham, 1962, “Anthropology and Ethics in Common Focus”, The Journal of the Royal Anthropological Institute of Great Britain and Ireland, 92(1): 55–72. doi:10.2307/2844321
  • Foot, Philippa, 1972, “Morality as a System of Hypothetical Imperatives”, The Philosophical Review, 81(3): 305–316. doi:10.2307/2184328
  • Frankena, William, 1963, “Recent Conceptions of Morality”, in G. Nakhnikian and H. Castañeda (eds.), Morality and the Language of Conduct, Detroit, MI: Wayne State University Press, pp. 1–24.
  • –––, 1973, Ethics, Englewood Cliffs, N.J.: Prentice-Hall.
  • –––, 1980, Thinking about Morality, Ann Arbor, MI: University of Michigan Press.
  • Gauthier, David, 1986, Morals by Agreement, Oxford: Oxford University Press.
  • Gert, Bernard, 2005, Morality: Its Nature and Justification, Revised Edition, New York: Oxford University Press.
  • Gibbard, Allan, 1990, Wise Choices, Apt Feelings, Cambridge, MA: Harvard University Press.
  • Goldman, Alan H., 2009, Reasons from Within: Desires and Values, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199576906.001.0001
  • Gray, Kurt, Liane Young, and Adam Waytz, 2012, “Mind Perception Is the Essence of Morality”, Psychological Inquiry, 23(2): 101–124. doi:10.1080/1047840X.2012.651387
  • Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and The Gap between Us and Them, New York: Penguin.
  • Hagendorff, Thilo, and David Danks, 2023, “Ethical and Methodological Challenges in Building Morally Informed AI Systems,” AI and Ethics, 3(2): 553–566.
  • Haidt, Jonathan, 2006, The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom, New York: Basic Books.
  • –––, 2012, The Righteous Mind: Why Good People Are Divided by Politics and Religion, New York: Pantheon.
  • Hare, R.M., 1952, The Language of Morals, New York: Oxford University Press.
  • –––, 1963, Freedom and Reason, New York: Oxford University Press.
  • –––, 1981, Moral Thinking, New York: Oxford University Press.
  • Harman, Gilbert, 1975, “Moral Relativism Defended”, The Philosophical Review, 84(1): 3–22. doi:10.2307/2184078
  • Hauser, Marc, 2006, Moral Minds: How Nature Designed our Universal Sense of Right and Wrong, New York: Harper Collins.
  • Hobbes, Thomas, 1660 [1994], Leviathan, edited by Edwin Curley, Indianapolis: Hackett Publishing Company, 1994.
  • Hume, David, 1751 [1975], Enquiries concerning Human Understanding and concerning the Principles of Morals, edited by L.A. Selby-Bigge, 3rd edition revised by P.H. Nidditch, Oxford: Clarendon Press, 1975.
  • Kant, Immanuel, 1785 and 1797 [1993], Groundwork of the Metaphysics of Morals: with On a Supposed Right to Lie because of Philanthropic Concerns, 3rd edition, translated by J. Ellington, Indianapolis: Hackett, 1993.
  • Kelly, Daniel, Stephen Stich, Kevin J. Haley, Serena J. Eng, and Daniel M. T. Fessler, 2007, “Harm, Affect, and the Moral/Conventional Distinction”, Mind & Language, 22(2): 117–131. doi:10.1111/j.1468-0017.2007.00302.x
  • Klenk, Michael, 2019, “Moral Philosophy and the ‘Ethical Turn’ in Anthropology”, Zeitschrift Für Ethik Und Moralphilosophie, 2(2): 331–353. doi:10.1007/s42048-019-00040-9
  • Laidlaw, James, 2016, “The Interactional Foundations of Ethics and the Formation and Limits of Morality Systems”, HAU: Journal of Ethnographic Theory, 6(1): 455–461. doi:10.14318/hau6.1.024
  • Liao, S. Matthew (ed.), 2016, Moral Brains. The Neuroscience of Morality, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199357666.001.0001
  • Luco, Andrés, 2014, “The Definition of Morality: Threading the Needle,” Social Theory and Practice 40(3): 361–387.
  • Machery, Edouard and Ron Mallon, 2010, “The Evolution of Morality”, in Doris and The Moral Psychology Research Group 2010: 3–46.
  • Machery, Edouard and Stephen Stich, “The Moral/Conventional Distinction”, The Stanford Encyclopedia of Philosophy (Summer 2022 Edition), Edward N. Zalta and Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/sum2022/entries/moral-conventional/>.
  • Mackie, J. L., 1977, Ethics: Inventing Right and Wrong, Harmondsworth: Penguin.
  • MacIntyre, Alasdair, 1957, “What Morality Is Not”, Philosophy, 32(123): 325–335. doi:10.1017/S0031819100051950
  • –––, 1999, Dependent Rational Animals, Chicago: Open Court.
  • Mikhail, John, 2007, “Universal Moral Grammar: Theory, Evidence and the Future”, Trends in Cognitive Sciences, 11(4): 143–152. doi:10.1016/j.tics.2006.12.007
  • Mill, John Stuart, 1861 [2002], Utilitarianism, edited by G. Sher, Indianapolis: Hackett, 2002.
  • Moore, G.E., 1903, Principia Ethica, New York: Cambridge University Press, 1993.
  • –––, 1912, Ethics, New York: H. Holt.
  • Pauketat, Janet, and Jacy Anthis, 2022, “Predicting the Moral Consideration of Artificial Intelligences,” Computers in Human Behavior, 136: 107372.
  • Prinz, Jesse, 2007, The Emotional Construction of Morals, Oxford: Clarendon Press.
  • Prinz, Jesse and Shaun Nichols, 2010, “Moral Emotions”, in Doris and The Moral Psychology Research Group 2010: 111–146.
  • Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Harvard University Press.
  • Roedder, Erica and Gilbert Harman, 2010, “Linguistics and Moral Theory”, in Doris and The Moral Psychology Research Group 2010: 273–296.
  • Scanlon, T. M., 1982, “Contractualism and Utilitarianism”, in Utilitarianism and Beyond, Amartya Sen and Bernard Williams (eds.), Cambridge: Cambridge University Press, 103–128. doi:10.1017/CBO9780511611964.007
  • –––, 1998, What We Owe to Each Other, Cambridge, MA: Harvard University Press.
  • –––, 2011, “What Is Morality?” in J. Shephard, S. Kosslyn, and E. Hammonds (eds.), The Harvard Sampler: Liberal Education for the Twenty-First Century, Cambridge, MA: Harvard University Press, pp. 243–66.
  • Sidgwick, Henry, 1874, The Methods of Ethics, Indianapolis: Hackett, 1981.
  • Sinnott-Armstrong, Walter (ed.), 2008, Moral Psychology Volume 1, The Evolution of Morality: Adaptations and Innateness, Cambridge, MA: MIT Press.
  • –––, 2016, “The Disunity of Morality”, in Liao 2016: 331–354.
  • Skorupski, John, 1993, “The Definition of Morality”, Royal Institute of Philosophy Supplement, 35: 121–144. doi:10.1017/S1358246100006299
  • Smart, J. J. C., 1956, “Extreme and Restricted Utilitarianism”, The Philosophical Quarterly, 6(25): 344–354. doi:10.2307/2216786
  • Smith, Michael, 1994, The Moral Problem, Oxford: Blackwell.
  • Sprigge, Timothy L. S., 1964, “Definition of a Moral Judgment”, Philosophy, 39(150): 301–322. doi:10.1017/S0031819100055777
  • Strawson, P. F., 1961, “Social Morality and Individual Ideal”, Philosophy, 36(136): 1–17. doi:10.1017/S003181910005779X
  • Thomson, J.J. and G. Dworkin (eds.), 1968, Ethics, New York: Harper & Row.
  • Toulmin, Stephen, 1950, An Examination of the Place of Reason in Ethics, Cambridge: Cambridge University Press.
  • Turiel, Elliot, 1983, The Development of Social Knowledge: Morality and Convention, Cambridge: Cambridge University Press.
  • Vida, Karina, Judith Simon, and Anne Lauscher, 2023, “Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore: Association for Computational Linguistics, pp. 5534–5554.
  • Warnock, Geoffrey, 1971, The Object of Morality, London: Methuen.
  • Westermarck, Edward, 1960, Ethical Relativity, Paterson, N.J.: Littlefield, Adams.
  • Williams, Bernard, 1985, Ethics and the Limits of Philosophy, London: Fontana.
  • Wren, T.E. (ed.), 1990, The Moral Domain: Essays in the Ongoing Discussion Between Philosophy and the Social Sciences, Cambridge, MA: MIT Press.
