Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Some writers define sentience exclusively as the capacity for valenced (positive or negative) mental experiences, such as pain or pleasure.
Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.
In Asian religions, the word "sentience" has been used to translate a variety of concepts. In science fiction, "sentience" is sometimes used interchangeably with "sapience", "self-awareness", or "consciousness".
Regarding animal consciousness, the Cambridge Declaration on Consciousness, publicly proclaimed on 7 July 2012 at the University of Cambridge, states that many non-human animals possess the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states, and can exhibit intentional behaviors. The declaration notes that all vertebrates (including fish and reptiles) have this neurological substrate for consciousness, and that there is strong evidence that many invertebrates also have it.
In Jainism, many things are endowed with a soul, jīva, which is sometimes translated as 'sentience'. Some things are without a soul, ajīva, such as a chair or spoon. There are different rankings of jīva based on the number of senses it has. Water, for example, is a sentient being of the first order, as it is considered to possess only one sense, that of touch.
Sentience in Buddhism is the state of having senses. In Buddhism, there are six senses, the sixth being the subjective experience of the mind. Sentience is simply awareness prior to the arising of the skandhas. Thus, an animal qualifies as a sentient being. According to Buddhism, sentient beings made of pure consciousness are possible. In Mahayana Buddhism, which includes Zen and Tibetan Buddhism, the concept is related to the Bodhisattva, an enlightened being devoted to the liberation of others. The first of the Bodhisattva vows states, "Sentient beings are numberless; I vow to free them." In traditional Tibetan Buddhism, plants, stones and other inanimate objects are described as possessing spiritual vitality or a form of 'sentience'. (Nishitani, Keiji (ed.) (1976). The Eastern Buddhist 9.2: p. 72. Kyoto: Eastern Buddhist Society; cited in Dumoulin, Heinrich; Heisig, James (trans.); Knitter, Paul (trans.) (2005). Zen Buddhism: A History, Volume 2: Japan. With an introduction by Victor Sogen Hori. Bloomington, Indiana: World Wisdom, Inc.)
Richard D. Ryder defines sentientism broadly as the position according to which an entity has moral status if and only if it is sentient. In David Chalmers's more specific terminology, Bentham is a narrow sentientist, since his criterion for moral status is not the ability to experience any phenomenal consciousness at all, but specifically the ability to experience conscious states with negative affective valence (i.e. suffering). Animal welfare and rights advocates often invoke similar capacities. For example, the documentary Earthlings argues that while animals do not have all the desires or powers of comprehension that humans do, they share the desires for food and water, shelter and companionship, freedom of movement, and avoidance of pain. (Monson, S. (2005), Earthlings.)
Animal welfare advocates typically argue that sentient beings should be protected from unnecessary suffering, whereas animal rights advocates propose a set of basic rights for animals, such as the right to life, liberty, and freedom from suffering.
Gary Francione also bases his abolitionist theory of animal rights, which differs significantly from Singer's, on sentience. He asserts that "all sentient beings, humans or nonhuman, have one right: the basic right not to be treated as the property of others." (Francione, Gary, official blog.)
Andrew Linzey, a British theologian, considers that Christianity should regard sentient animals according to their intrinsic worth, rather than their utility to humans.
In 1997, the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognises that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".
The presence of nociception indicates an organism's ability to detect harmful stimuli. A further question is whether the way these noxious stimuli are processed within the brain leads to a subjective experience of pain. To address that, researchers often look for behavioral cues. For example, "if a dog with an injured paw whimpers, licks the wound, limps, lowers pressure on the paw while walking, learns to avoid the place where the injury happened and seeks out analgesics when offered, we have reasonable grounds to assume that the dog is indeed experiencing something unpleasant." Avoiding painful stimuli unless the reward is significant can also provide evidence that pain avoidance is not merely an unconscious reflex (similarly to how humans "can choose to press a hot door handle to escape a burning building").
Historically, fish were not considered sentient, and their behaviors were often viewed as "reflexes or complex, unconscious species-typical responses" to their environment. Their dissimilarity to humans, including the absence of a direct equivalent of the neocortex in their brain, was used as an argument against sentience. Jennifer Jacquet suggests that the belief that fish do not feel pain originated in response to a 1980s policy aimed at banning catch and release. The range of animals regarded by scientists as sentient or conscious has progressively widened, and now includes animals such as fish, lobsters, and octopuses.
The AI research community does not consider sentience (that is, the "ability to feel sensations") an important research goal, unless it can be shown that consciously "feeling" a sensation can make a machine more intelligent than merely receiving input from sensors and processing it as information. Stuart Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."
Indeed, leading AI textbooks do not mention "sentience" at all.
Digital sentience is of considerable interest to the philosophy of mind. Functionalist philosophers consider that sentience is about "causal roles" played by mental states, which involve information processing. In this view, the physical substrate of this information processing does not need to be biological, so there is no theoretical barrier to the possibility of sentient machines. According to type physicalism however, the physical constitution is important; and depending on the types of physical systems required for sentience, it may or may not be possible for certain types of machines (such as electronic computing devices) to be sentient.
The discussion of the alleged sentience of artificial intelligence was reignited in 2022 by claims that Google's LaMDA (Language Model for Dialogue Applications) artificial intelligence system was "sentient" and had a "soul". LaMDA is an artificial intelligence system that creates chatbots (AI programs designed to communicate with humans) by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible. Transcripts of conversations between scientists and LaMDA reveal that the AI system excels at this, providing answers on challenging topics such as the nature of emotion, generating Aesop-style fables on cue, and even describing its alleged fears.
Nick Bostrom considers that while LaMDA is probably not sentient, being very sure of it would require understanding how consciousness works, having access to unpublished information about LaMDA's architecture, and finding how to apply the philosophical theory to the machine. He also said about LLMs that "it's not doing them justice to say they're simply regurgitating text", noting that they "exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning". He thinks that "sentience is a matter of degree".
In 2022, philosopher David Chalmers gave a talk on whether large language models (LLMs) can be conscious, encouraging more research on the subject. He suggested that current LLMs were probably not conscious, but that their limitations are temporary and that future systems could be serious candidates for consciousness.
According to Jonathan Birch, "measures to regulate the development of sentient AI should run ahead of what would be proportionate to the risks posed by current technology, considering also the risks posed by credible future trajectories." He is concerned that AI sentience would be particularly easy to deny, and that even if it were achieved, humans might continue to treat AI systems as mere tools. He notes that the linguistic behaviour of LLMs is not a reliable way to assess whether they are sentient. He suggests applying theories of consciousness, such as the global workspace theory, to the algorithms implicitly learned by LLMs, but notes that this technique requires advances in AI interpretability to understand what happens inside them. He also mentions other pathways that may lead to AI sentience, such as brain emulation of sentient animals.
b. Quote: "Granted, these animals do not have all the desires we humans have; granted, they do not comprehend everything we humans comprehend; nevertheless, we and they do have some of the same desires and do comprehend some of the same things. The desires for food and water, shelter and companionship, freedom of movement and avoidance of pain."