Pythagorean Addition

In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. Like the more familiar addition and multiplication operations of arithmetic, it is both associative and commutative.

This operation can be used in the conversion of Cartesian coordinates to polar coordinates, and in the calculation of Euclidean distance. It also provides a simple notation and terminology for the diameter of a rectangle, the energy-momentum relation in physics, and the overall noise from independent sources of noise. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature. A scaled version of this operation gives the quadratic mean or root mean square.

It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. Donald Knuth has written that "Most of the square root operations in computer programs could probably be avoided if Pythagorean addition were more widely available, because people seem to want square roots primarily when they are computing distances."


Definition

According to the Pythagorean theorem, for a right triangle with side lengths a and b, the length of the hypotenuse can be calculated as \sqrt{a^2+b^2}. This formula defines the Pythagorean addition operation, denoted here as \oplus: for any two real numbers a and b, the result of this operation is defined to be a\oplus b = \sqrt{a^2+b^2}. For instance, the special right triangle based on the Pythagorean triple (3,4,5) gives 3\oplus 4=5; other integer Pythagorean triples, such as (119,120,169) and (19,180,181), provide similar examples. However, results like these are unusual: for other integer arguments, Pythagorean addition can produce a quadratic irrational number as its result.
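
As a concrete check, the hypot function found in many languages (here Python's math.hypot, used purely as an illustration) computes exactly this operation:

    import math

    # Pythagorean addition of the legs of the (3, 4, 5) right triangle
    print(math.hypot(3.0, 4.0))   # 5.0

    # Most integer arguments give an irrational result, e.g. 1 "plus" 1 = sqrt(2)
    print(math.hypot(1.0, 1.0))   # 1.4142135623730951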


Properties
The operation \oplus is associative and commutative. Therefore, if three or more numbers are to be combined with this operation, the order of combination makes no difference to the result: x_1 \oplus x_2 \oplus \cdots \oplus x_n=\sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}. Additionally, on the non-negative real numbers, zero is an identity element for Pythagorean addition. On numbers that can be negative, the Pythagorean sum with zero gives the absolute value: x\oplus 0=|x|. The three properties of associativity, commutativity, and having an identity element (on the non-negative numbers) are the defining properties of a commutative monoid.
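
These properties can be checked numerically with Python's math.hypot standing in for \oplus; because of floating-point rounding, the comparisons below use a tolerance rather than exact equality:

    import math

    x, y, z = 2.0, 3.0, 6.0

    # commutativity: x + y equals y + x under Pythagorean addition
    print(math.isclose(math.hypot(x, y), math.hypot(y, x)))        # True

    # associativity: both groupings equal sqrt(x^2 + y^2 + z^2) = 7
    print(math.isclose(math.hypot(math.hypot(x, y), z),
                       math.hypot(x, math.hypot(y, z))))           # True

    # zero is the identity on non-negative numbers; with negatives it gives |x|
    print(math.hypot(-5.0, 0.0))                                   # 5.0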


Applications

Distance and diameter
The Euclidean distance between two points in the Euclidean plane, given by their Cartesian coordinates (x_1,y_1) and (x_2,y_2), is (x_1-x_2)\oplus (y_1-y_2). In the same way, the distance between three-dimensional points (x_1,y_1,z_1) and (x_2,y_2,z_2) can be found by repeated Pythagorean addition as (x_1-x_2)\oplus (y_1-y_2)\oplus (z_1-z_2).

Pythagorean addition can also find the length of an interior diagonal of a rectangle or rectangular cuboid. For a rectangle with sides a and b, the diagonal length is a\oplus b. For a cuboid with side lengths a, b, and c, the length of a body diagonal is a\oplus b\oplus c.
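
For example, the three-dimensional distance formula and the body diagonal of a cuboid are both repeated Pythagorean additions. The sketch below uses Python's math.hypot, which accepts more than two arguments in Python 3.8 and later:

    import math

    # Euclidean distance between (1, 2, 3) and (4, 6, 15)
    p = (1.0, 2.0, 3.0)
    q = (4.0, 6.0, 15.0)
    distance = math.hypot(q[0] - p[0], q[1] - p[1], q[2] - p[2])
    print(distance)                     # 13.0, since sqrt(3^2 + 4^2 + 12^2) = 13

    # Body diagonal of a 1 x 2 x 2 cuboid: sqrt(1 + 4 + 4) = 3
    print(math.hypot(1.0, 2.0, 2.0))    # 3.0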


Coordinate conversion
Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function (a two-parameter form of the arctangent) to convert from Cartesian coordinates (x,y) to polar coordinates (r,\theta): \begin{align} r&=x\oplus y=\mathsf{hypot}(x,y)\\ \theta&=\mathsf{atan2}(y,x).\\ \end{align}
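
A minimal sketch of this conversion in Python, using math.hypot and math.atan2:

    import math

    def cartesian_to_polar(x, y):
        """Convert Cartesian (x, y) to polar (r, theta), with theta in radians."""
        r = math.hypot(x, y)        # r is the Pythagorean sum of x and y
        theta = math.atan2(y, x)    # angle measured from the positive x-axis
        return r, theta

    print(cartesian_to_polar(1.0, 1.0))   # (1.4142135623730951, 0.7853981633974483)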


Quadratic mean and spread of deviation
The root mean square or quadratic mean of a finite collection of n numbers is \tfrac{1}{\sqrt n} times their Pythagorean sum. This is a generalized mean of the numbers.
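
A small illustration of this relationship, assuming a plain Python list of numbers (math.hypot accepts an arbitrary number of arguments in Python 3.8 and later):

    import math

    values = [1.0, 2.0, 2.0, 4.0]

    # root mean square as (1/sqrt(n)) times the Pythagorean sum of the values
    rms = math.hypot(*values) / math.sqrt(len(values))
    print(rms)    # 2.5, since sqrt((1 + 4 + 4 + 16) / 4) = sqrt(6.25)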

The standard deviation of a collection of observations is the quadratic mean of their individual deviations from the mean. When two or more independent random variables are added, the standard deviation of their sum is the Pythagorean sum of their standard deviations. Thus, the Pythagorean sum itself can be interpreted as giving the amount of overall noise when combining independent sources of noise.

If the engineering tolerances of different parts of an assembly are treated as independent noise, they can be combined using a Pythagorean sum. In experimental sciences such as physics, addition in quadrature is often used to combine different sources of measurement uncertainty. However, this method of propagation of uncertainty applies only when there is no correlation between sources of uncertainty, and it has been criticized for conflating experimental noise with systematic error.
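
For instance, combining two independent measurement uncertainties in quadrature is a single Pythagorean addition; in this sketch the variable names and values are purely illustrative:

    import math

    sigma_a = 0.3   # uncertainty from one independent source
    sigma_b = 0.4   # uncertainty from another, uncorrelated source

    # combined uncertainty by addition in quadrature
    sigma_total = math.hypot(sigma_a, sigma_b)
    print(sigma_total)   # approximately 0.5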


Other
The energy-momentum relation in physics, describing the energy of a moving particle, can be expressed as the Pythagorean sum E = mc^2 \oplus pc, where m is the rest mass of a particle, p is its momentum, c is the speed of light, and E is the particle's resulting relativistic energy.
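
As a worked example with illustrative numbers (the electron rest mass and the speed of light are standard constants; the momentum value is an arbitrary choice for this sketch):

    import math

    m = 9.109e-31     # electron rest mass, kg
    c = 2.998e8       # speed of light, m/s
    p = 1.0e-22       # arbitrary example momentum, kg*m/s

    # relativistic energy as a Pythagorean sum: E = (m c^2) "plus" (p c)
    E = math.hypot(m * c**2, p * c)
    print(E)          # total relativistic energy, in joules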

When combining signals, it can be a useful design technique to arrange for the combined signals to be orthogonal in polarization or phase, so that they add in quadrature. In early radio engineering, this idea was used to design directional antennas, allowing signals to be received while nullifying the interference from signals coming from other directions. When the same technique is applied in software to obtain a directional signal from a radio or similar receiver array, Pythagorean addition may be used to combine the signals. Other recent applications of this idea include improved efficiency in frequency conversion.

In the study of haptic perception, Pythagorean addition has been proposed as a model for the perceived intensity of vibration when two kinds of vibration are combined.

In image processing, the Sobel operator for edge detection consists of a convolution step to determine the gradient of an image, followed by a Pythagorean sum at each pixel to determine the magnitude of the gradient.
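
A brief sketch of this pipeline using NumPy and SciPy (assuming those libraries are available; the random image is a placeholder for real data):

    import numpy as np
    from scipy import ndimage

    image = np.random.rand(64, 64)        # placeholder grayscale image

    gx = ndimage.sobel(image, axis=1)     # horizontal derivative estimate
    gy = ndimage.sobel(image, axis=0)     # vertical derivative estimate

    # gradient magnitude: a Pythagorean sum at each pixel
    magnitude = np.hypot(gx, gy)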


Implementation
In a 1983 paper, Cleve Moler and Donald Morrison described an iterative algorithm for computing Pythagorean sums without taking square roots. This was soon recognized to be an instance of Halley's method, and extended to analogous operations on matrices.
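
The following is a sketch of a square-root-free iteration in the style of that method; details such as the stopping test and tolerance are choices made here for illustration, not part of the original paper:

    def pythag(a, b, eps=1e-15):
        """Square-root-free Pythagorean sum (illustrative sketch)."""
        p, q = max(abs(a), abs(b)), min(abs(a), abs(b))
        if p == 0.0:
            return 0.0
        while q > eps * p:          # stop once q is negligible relative to p
            r = (q / p) ** 2
            s = r / (4.0 + r)
            p += 2.0 * s * p        # p grows toward the Pythagorean sum
            q *= s                  # q shrinks rapidly toward zero
        return p

    print(pythag(3.0, 4.0))   # approximately 5.0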

Although many modern implementations of this operation instead compute Pythagorean sums by reducing the problem to the square root function, they do so in a way that has been designed to avoid errors arising from the limited-precision calculations performed on computers. If calculated using the natural formula, r = \sqrt{x^2 + y^2}, the squares of very large or small values of x and y may exceed the range of machine precision when calculated on a computer. This may lead to an inaccurate result caused by arithmetic underflow and overflow, although when overflow and underflow do not occur the output is within two ulps of the exact result. Common implementations of the hypot function rearrange this calculation in a way that avoids the problem of overflow and underflow, and are even more precise.
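
The difference is easy to see with arguments whose squares exceed the double-precision range (a Python illustration):

    import math

    x, y = 3e200, 4e200

    naive = math.sqrt(x * x + y * y)   # x*x overflows to infinity, so the result is inf
    robust = math.hypot(x, y)          # rearranged internally; returns 5e200

    print(naive, robust)               # inf 5e+200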

If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).
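
Python's math.hypot, for example, behaves this way:

    import math

    inf = float("inf")
    nan = float("nan")

    print(math.hypot(inf, 1e300))   # inf
    print(math.hypot(inf, nan))     # inf, even though the other input is NaN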


Calculation order
The difficulty with the naive implementation is that x^2+y^2 may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that |x|\ge|y|, and then to use the equivalent form r = |x| \sqrt{1 + \left(\frac{y}{x}\right)^2}.

The computation of y/x cannot overflow unless both x and y are zero. If y/x underflows, the final result is equal to |x|, which is correct within the precision of the calculation. The square root is then taken of a value between 1 and 2. Finally, the multiplication by |x| cannot underflow, and overflows only when the result is too large to represent.
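
A minimal sketch of this rearrangement in Python (not a library-quality implementation; special values such as infinities and NaNs are not handled):

    import math

    def hypot_scaled(x, y):
        x, y = abs(x), abs(y)
        if x < y:
            x, y = y, x            # ensure |x| >= |y|
        if x == 0.0:
            return 0.0             # both inputs are zero
        t = y / x                  # lies in [0, 1], so t*t cannot overflow
        return x * math.sqrt(1.0 + t * t)

    print(hypot_scaled(3e200, 4e200))   # 5e+200, with no intermediate overflow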

One drawback of this rearrangement is the additional division by x, which increases both the time and inaccuracy of the computation. More complex implementations avoid these costs by dividing the inputs into more cases:

  • When x is much larger than y, x\oplus y\approx|x|, to within machine precision.
  • When x^2 overflows, multiply both x and y by a small scaling factor (e.g. 2^{-64} for IEEE single precision), use the naive algorithm, which will now not overflow, and multiply the result by the (large) inverse factor (e.g. 2^{64}).
  • When y^2 underflows, scale as above but reverse the scaling factors to scale up the intermediate values.
  • Otherwise, the naive algorithm is safe to use.
Additional techniques allow the result to be computed more accurately than the naive algorithm, e.g. to less than one ulp. Researchers have also developed analogous algorithms for computing Pythagorean sums of more than two values.
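
One way to extend the same scaling idea to more than two values is to divide every input by the largest magnitude before squaring. The sketch below only illustrates the idea and is not one of the published algorithms referred to above:

    import math

    def hypot_n(*values):
        """Pythagorean sum of any number of values, rescaled to avoid overflow."""
        m = max(abs(v) for v in values)
        if m == 0.0:
            return 0.0
        # each ratio has magnitude at most 1, so the squares cannot overflow
        return m * math.sqrt(sum((v / m) ** 2 for v in values))

    print(hypot_n(3e200, 4e200, 12e200))   # 1.3e+201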


Fast approximation
The alpha max plus beta min algorithm is a high-speed approximation of Pythagorean addition using only comparison, multiplication, and addition, producing a value whose error is less than 4% of the correct result. It is computed as a\oplus b\approx \alpha\cdot\max(a,b)+\beta\cdot\min(a,b) for a careful choice of parameters \alpha and \beta.
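
A sketch using one commonly quoted parameter pair (\alpha \approx 0.96043387, \beta \approx 0.39782473, which keeps the maximum relative error near 4%; other choices trade accuracy for simpler constants):

    ALPHA = 0.96043387   # commonly quoted values giving roughly 4% worst-case error
    BETA = 0.39782473

    def approx_hypot(a, b):
        a, b = abs(a), abs(b)
        return ALPHA * max(a, b) + BETA * min(a, b)

    print(approx_hypot(3.0, 4.0))   # about 5.035, within a few percent of the exact 5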


Programming language support
Pythagorean addition is present as the hypot function in many programming languages and their libraries, including C, D, Go, JavaScript (since ES2015), Julia, and Python. C++11 includes a two-argument version of hypot, and a three-argument version for x\oplus y\oplus z has been included since C++17. The Java implementation of hypot can also be used from interoperable JVM-based languages such as Kotlin and Scala. Similarly, the version of hypot included with Ruby extends to Ruby-based domain-specific languages. In Rust, hypot is implemented as a method of floating-point objects rather than as a two-argument function.

Metafont has Pythagorean addition and Pythagorean subtraction as built-in operations, under the symbols ++ and +-+ respectively.


History
The Pythagorean theorem on which this operation is based was studied in ancient Greek mathematics, and may have been known earlier in Egyptian mathematics and Babylonian mathematics. However, its use for computing distances in Cartesian coordinates could not come until after René Descartes invented these coordinates in 1637; the formula for distance in these coordinates was published by Alexis Clairaut in 1731.

The terms "Pythagorean addition" and "Pythagorean sum" for this operation have been used at least since the 1950s, and its use in signal processing as "addition in quadrature" goes back at least to 1919.

From the 1920s to the 1940s, before the widespread use of computers, multiple designers of slide rules included square-root scales in their devices, allowing Pythagorean sums to be calculated mechanically. Researchers have also investigated methods for approximating the value of Pythagorean sums.
