Addition (usually signified by the plus symbol, +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication, and division. The addition of two natural numbers results in the total or sum of those values combined. For example, the adjacent image shows two columns of apples, one with three apples and the other with two apples, totaling five apples. This observation is expressed as 3 + 2 = 5, which is read as "three plus two equals five".
Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers, and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as Euclidean vectors, matrices, and elements of abstract algebraic structures.
Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, so 3 + 2 = 2 + 3, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see Successor function). Addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication.
Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even by some members of other animal species. In primary education, students are taught to add numbers in the decimal system, beginning with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.
The numbers or the objects to be added in general addition are collectively referred to as the terms, the addends or the summands. This terminology carries over to the summation of multiple terms. It is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends.
All of the above terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root deh₃- "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". "Addend" is not a Latin word; in Latin it must be further conjugated, as in numerus addendus "the number to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latin noun summa "the highest" or "the top", used in Medieval Latin phrase summa linea ("top line") meaning the sum of a column of numerical quantities, following the Ancient Greece and Ancient Rome practice of putting the sum at the top of a column. Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Geoffrey Chaucer.
Addition is used to model many physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.
This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see below). However, it is not obvious how one should extend this interpretation to include fractional or negative numbers; adding with sets of "fractional cardinality" involves considerable sophistication.
One possibility is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.
The sum a + b can be interpreted as a binary operation that combines a and b algebraically, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of the sum a + b play asymmetric roles, and the operation is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the "augend" in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.
When addition is used together with other operations, the order of operations becomes important. In the standard order of operations, addition has lower priority than exponentiation, roots, multiplication, and division, but equal priority to subtraction.
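As a quick illustration, Python's built-in operators follow the same precedence conventions described here, so a few expressions make the rules concrete:

print(2 + 3 * 4)    # 14: multiplication is evaluated before addition
print((2 + 3) * 4)  # 20: parentheses override the default order
print(10 - 4 + 2)   # 8: addition and subtraction share priority and apply left to right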
Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using other objects instead of dolls), rhesus macaque and cottontop tamarin monkeys performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training. More recently, other species have also demonstrated an ability to perform basic arithmetic.
Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school (Beckmann, S. (2014). "The twenty-third ICMI study: primary mathematics study on whole numbers". International Journal of STEM Education, 1(1), 1–8). However, throughout the world, addition is taught by the end of the first year of elementary school (Schmidt, W., Houang, R., & Cogan, L. (2002). "A coherent curriculum". American Educator, 26(2), 1–18).
Learning to fluently and accurately compute single-digit additions is a major focus of early schooling in arithmetic. Sometimes students are encouraged to memorize the full addition table by rote learning, but pattern-based strategies are typically more enlightening and, for most people, more efficient.
  4 5 . 1 0
+ 0 4 . 3 4
———————————
  4 9 . 4 4
For example, in 27 + 59, adding the ones column gives 7 + 9 = 16; the 6 is written in the ones place and the 1 is carried into the tens column, where 1 + 2 + 5 = 8, so 27 + 59 = 86. This is known as carrying (P.E. Bates Bothman (1837), The common school arithmetic, Henry Benton, p. 31). When the result of adding one column exceeds the value of a single digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
  1 1 1 1 1      (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
—————————————
  1 0 0 1 0 0 = 36
In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀).
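The carrying procedure is mechanical enough to express directly in code. Below is a minimal Python sketch (the function name add_with_carry is ours, purely illustrative) that adds two numbers given as digit lists in an arbitrary base, working from the least significant digit and propagating carries exactly as in the worked examples above:

def add_with_carry(a_digits, b_digits, base=10):
    """Add two numbers given as lists of digits, most significant digit first."""
    a = list(reversed(a_digits))   # work from the least significant digit
    b = list(reversed(b_digits))
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(total % base)   # digit written in this column
        carry = total // base         # excess carried to the next column
    if carry:
        result.append(carry)
    return list(reversed(result))

# Binary example from the text: 01101 (13) + 10111 (23) = 100100 (36)
print(add_with_carry([0, 1, 1, 0, 1], [1, 0, 1, 1, 1], base=2))  # [1, 0, 0, 1, 0, 0]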
Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance.
The abacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used in Sumer.
Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. Pascal's calculator was limited by its gravity-assisted carry mechanism, which forced its wheels to turn only one way, so it could only add. To subtract, the operator had to use the machine's method of complements, which required as many steps as an addition. Gottfried Leibniz built the stepped reckoner, another mechanical calculator, finished in 1694, and Giovanni Poleni improved on the design in 1709 with a calculating clock made of wood that could perform all four arithmetical operations. These early attempts were not commercially successful but inspired later mechanical calculators of the 19th century.
Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but bypasses the group of 9s and skips to the answer.
In practice, computational addition may be achieved via exclusive-or (XOR) and AND bitwise logical operations in conjunction with bitshift operations. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and instruction fetching during control flow. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs.
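The XOR/AND/shift scheme just mentioned can be sketched in a few lines of Python. This is a didactic sketch for non-negative integers (not how hardware or the interpreter actually adds); the function name add_bitwise is ours:

def add_bitwise(x, y):
    """Add two non-negative integers using only XOR, AND and shifts."""
    while y != 0:
        carry = x & y        # positions where both bits are 1 generate a carry
        x = x ^ y            # sum of the bits, ignoring carries
        y = carry << 1       # carries move one position to the left
    return x

print(add_bitwise(13, 23))   # 36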
Some decimal computers in the late 1950s and early 1960s used add tables instead of adders, e.g., RCA 301, IBM 1620.
Arithmetic implemented on a computer can deviate from the mathematical ideal in various ways. For example, if the result of an addition is too large for a computer to store, an arithmetic overflow occurs, resulting in an error message and/or an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause of software bugs. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests (Joshua Bloch, "Extra, Extra – Read All About It: Nearly All Binary Searches and Mergesorts are Broken", Official Google Research Blog, June 2, 2006). The Year 2000 problem was a series of bugs where overflow errors occurred due to the use of a 2-digit format for years.
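Python's built-in integers do not overflow, but the effect of fixed-width integer overflow can be simulated by masking to a given width; the helper name add_int32 below is ours, a sketch assuming 32-bit two's-complement arithmetic:

def add_int32(a, b):
    """Add two numbers as if they were signed 32-bit integers (wrapping on overflow)."""
    result = (a + b) & 0xFFFFFFFF          # keep only the low 32 bits
    if result >= 0x80000000:               # reinterpret the top bit as the sign
        result -= 0x100000000
    return result

print(add_int32(2_147_483_647, 1))  # -2147483648: the overflowed, "incorrect" answer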
Computers have another way of representing numbers, called floating-point arithmetic, which is similar to the scientific notation described above and which reduces the overflow problem. Each floating point number has two parts, an exponent and a mantissa. To add two floating-point numbers, the exponents must match, which typically means shifting the mantissa of the smaller number. If the disparity between the larger and smaller numbers is too great, a loss of precision may result. If many smaller numbers are to be added to a large number, it is best to add the smaller numbers together first and then add the total to the larger number, rather than adding small numbers to the large number one at a time. This makes floating-point addition non-associative in general.
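A small Python experiment makes the loss of precision and the resulting non-associativity concrete (the exact printed values assume the usual IEEE 754 double-precision floats):

large = 1e16
small = 1.0
print((large + small) + small)   # 1e+16: each small addend is lost to rounding
print(large + (small + small))   # 1.0000000000000002e+16: adding the small terms first preserves them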
Here A ∪ B means the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.
The other popular definition is recursive: a + 0 = a and a + S(b) = S(a + b), where S denotes the successor operation.
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set N². On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation; as one author observes, "But we want one binary operation +, not all these little one-place functions."
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction.
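The recursive definition can be transcribed almost verbatim into code. The following is a didactic Python sketch (function names ours) in which ordinary integers stand in for Peano naturals and the successor operation is simulated by adding one:

def successor(n):
    return n + 1

def add(a, b):
    """Recursive addition of naturals: a + 0 = a, a + S(b) = S(a + b)."""
    if b == 0:
        return a
    return successor(add(a, b - 1))

print(add(3, 2))  # 5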
As an example, (−6) + 4 = −2; because −6 and 4 have different signs, their absolute values are subtracted, and since the absolute value of the negative term is larger, the answer is negative.
Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining the integers. It is based on the remark that every integer is the difference of two natural numbers and that two such differences, a − b and c − d, are equal if and only if a + d = b + c. So, one can define the integers formally as the equivalence classes of ordered pairs of natural numbers under the equivalence relation (a, b) ~ (c, d) if and only if a + d = b + c. The equivalence class of (a, b) contains either (a − b, 0) if a ≥ b, or (0, b − a) otherwise. If n is a natural number, one can denote +n the equivalence class of (n, 0), and −n the equivalence class of (0, n). This allows identifying the natural number n with the equivalence class +n.
The addition of ordered pairs is done component-wise: (a, b) + (c, d) = (a + c, b + d). A straightforward computation shows that the equivalence class of the result depends only on the equivalence classes of the summands, and thus that this defines an addition of equivalence classes, that is, of integers. Another straightforward computation shows that this addition is the same as the above case-by-case definition.
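A short Python sketch of this construction (the names normalize and add_pairs are ours): integers are pairs (a, b) of naturals standing for a − b, added component-wise and reduced to a canonical representative of their equivalence class:

def normalize(p):
    """Canonical representative of the class of (a, b): (a - b, 0) or (0, b - a)."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add_pairs(p, q):
    """Component-wise addition of representatives, then normalization."""
    return normalize((p[0] + q[0], p[1] + q[1]))

# (2, 5) represents -3 and (7, 0) represents +7; their sum represents +4.
print(add_pairs((2, 5), (7, 0)))  # (4, 0)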
Addition of fractions is much simpler when the denominators are the same; in this case, one can simply add the numerators while leaving the denominator the same: a/c + b/c = (a + b)/c, so 1/4 + 2/4 = 3/4.
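For unlike denominators the general rule a/b + c/d = (ad + bc)/(bd) applies. Python's standard fractions module performs exact rational addition and reduces the result to lowest terms, which makes a quick check easy:

from fractions import Fraction

print(Fraction(1, 4) + Fraction(2, 4))  # 3/4   (same denominator: numerators add)
print(Fraction(3, 4) + Fraction(5, 6))  # 19/12 (unlike denominators: (3*6 + 5*4)/24, reduced)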
The commutativity and associativity of rational addition are easy consequences of the laws of integer arithmetic; the verifications are routine and carry over to a general field of fractions over a commutative ring.
Unfortunately, dealing with the multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers (Schubert, E. Thomas, Phillip J. Windley, and James Alves-Foss, Higher Order Logic Theorem Proving and Its Applications: Proceedings of the 8th International Workshop, volume 971 of Lecture Notes in Computer Science, 1995). Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term: lim aₙ + lim bₙ = lim (aₙ + bₙ). (Textbook constructions are usually not so cavalier with the "lim" symbol; they give a more careful, drawn-out development of addition with Cauchy sequences.) This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.
In the special case where the order does not matter, the composition operator is sometimes called addition. Such groups are referred to as Abelian or commutative; the composition operator is often written as "+".
Matrix addition is defined for two matrices of the same dimensions. The sum of two m × n (pronounced "m by n") matrices A and B, denoted by A + B, is again an m × n matrix computed by adding corresponding elements (Lipschutz, S., & Lipson, M. (2001). Schaum's Outline of Theory and Problems of Linear Algebra):
For example:
\begin{align}
\begin{bmatrix} 1 & 3 \\ 1 & 0 \\ 1 & 2 \end{bmatrix} +
\begin{bmatrix} 0 & 0 \\ 7 & 5 \\ 2 & 1 \end{bmatrix} &=
\begin{bmatrix} 1+0 & 3+0 \\ 1+7 & 0+5 \\ 1+2 & 2+1 \end{bmatrix} \\[8mu]
&= \begin{bmatrix} 1 & 3 \\ 8 & 5 \\ 3 & 3 \end{bmatrix}
\end{align}
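The same element-wise computation in a brief Python sketch (plain nested lists; a library such as NumPy would give the same result with its + operator):

A = [[1, 3], [1, 0], [1, 2]]
B = [[0, 0], [7, 5], [2, 1]]

# Element-wise sum of two matrices with identical dimensions.
C = [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]
print(C)  # [[1, 3], [8, 5], [3, 3]]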
In modular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. A similar "wrap around" operation arises in geometry, where the sum of two angles is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on higher-dimensional tori.
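A minimal Python illustration of the two wrap-around examples mentioned here (addition modulo 12 as used for pitch classes, and addition modulo 2 coinciding with exclusive-or):

# Addition modulo 12, as used for pitch classes in musical set theory.
print((7 + 9) % 12)          # 4

# Addition modulo 2 coincides with the Boolean "exclusive or".
print((1 + 1) % 2, 1 ^ 1)    # 0 0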
The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.
Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixed strategies in game theory or superpositions of quantum states in quantum mechanics.
In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as the direct sum, are named to evoke their connection with addition.
Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. Nonetheless, this works only for natural numbers. In general, multiplication is defined as the operation between two numbers, called the multiplier and the multiplicand, that combines them into a single number called the product.
In the real and complex numbers, addition and multiplication can be interchanged by the exponential function: e^(a + b) = e^a · e^b. This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.
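The identity e^(a + b) = e^a · e^b, and the slide-rule trick of multiplying by adding logarithms, can be checked numerically in Python (equality holds only up to floating-point rounding; the values chosen are arbitrary):

import math

a, b = 1.5, 2.25
print(math.exp(a + b), math.exp(a) * math.exp(b))   # equal up to rounding

# Multiplying two positive numbers by adding their logarithms, slide-rule style.
x, y = 37.0, 2.4
print(math.exp(math.log(x) + math.log(y)), x * y)   # both approximately 88.8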
There are even more generalizations of multiplication than of addition; as one author observes, "By multiplication, properly speaking, a mathematician may mean practically anything. By addition he may mean a great variety of things, but not so great a variety as he will mean by 'multiplication'." In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity are enough to determine the multiplication operation uniquely. The distributive property also provides information about the addition operation; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general. (For this argument to work, one must assume that addition is a group operation and that multiplication has an identity.)
Division is an arithmetic operation remotely related to addition. Since a/b = a(b⁻¹), division is right distributive over addition: (a + b)/c = a/c + b/c. However, division is not left distributive over addition; for example, 1/(2 + 2) is not the same as 1/2 + 1/2.
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals.
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition: a + max(b, c) = max(a + b, a + c). For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity. Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.
Tying these observations together, tropical addition is approximately related to regular addition through the logarithm: log(a + b) ≈ max(log a, log b), an approximation that becomes more accurate as the base of the logarithm increases. The approximation can be made exact by extracting a constant h, named by analogy with the Planck constant from quantum mechanics, and taking the "classical limit" as h tends to zero: max(a, b) = lim_{h→0} h log(e^(a/h) + e^(b/h)). In this sense, the maximum operation is a dequantized version of addition.
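The dequantization limit can be observed numerically: h·log(e^(a/h) + e^(b/h)) approaches max(a, b) as h shrinks. A small Python check (the helper name smooth_max is ours, with values chosen to avoid overflow):

import math

def smooth_max(a, b, h):
    """h * log(exp(a/h) + exp(b/h)) -- approaches max(a, b) as h -> 0."""
    return h * math.log(math.exp(a / h) + math.exp(b / h))

a, b = 2.0, 3.0
for h in (1.0, 0.1, 0.01):
    print(h, smooth_max(a, b, h))
# The printed values approach max(2.0, 3.0) = 3.0 as h decreases.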