The number π (spelled out as "pi") is a mathematical constant, approximately equal to 3.14159, that is the ratio of a circle's circumference to its diameter. It appears in many formulae across mathematics and physics, and some of these formulae are commonly used for defining π, to avoid relying on the definition of arc length.
The number π is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an algebraic equation involving only finite sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.
For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. The invention of calculus soon led to the calculation of hundreds of digits of π, enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test supercomputers as well as to stress-test consumer computer hardware.
Because it relates to a circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, thermodynamics, mechanics, and electromagnetism. It also appears in areas having little to do with geometry, such as number theory and statistics, and in modern mathematical analysis can be defined without any reference to geometry. The ubiquity of π makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to π have been published, and record-setting calculations of the digits of π often result in news headlines.
The choice of the symbol π is discussed in the section Adoption of the symbol π.
π is commonly defined as the [[ratio]] of a [[circle]]'s [[circumference]] C to its [[diameter]] d: π = C/d.
The ratio C/d is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio C/d. This definition of π implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curved (non-Euclidean) geometry, these new circles will no longer satisfy the formula π = C/d.
Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be formally defined independently of geometry using limits—a concept in calculus. For example, one may directly compute the arc length of the top half of the unit circle, given in Cartesian coordinates by the equation x² + y² = 1, as the integral: π = ∫ from −1 to 1 of dx/√(1 − x²).
An integral such as this was proposed as a definition of π by Karl Weierstrass, who defined it directly as an integral in 1841.
Integration is no longer commonly used in a first analytical definition because differential calculus typically precedes integral calculus in the university curriculum, so it is desirable to have a definition of π that does not rely on the latter. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: π is twice the smallest positive number at which the cosine function equals 0. π is also the smallest positive number at which the sine function equals zero, and the difference between consecutive zeroes of the sine function. The cosine and sine can be defined independently of geometry as a power series, or as the solution of a differential equation.
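The Baltzer–Landau definition lends itself to a direct numerical sketch: locate the smallest positive zero of cosine by bisection and double it. This is only an illustration of the definition, not an efficient way to compute π.

```python
import math

# Bisection for the smallest positive zero of cosine, which lies in [1, 2]
# (cos(1) > 0 and cos(2) < 0); twice that zero is pi.
def smallest_cosine_zero(tolerance=1e-15):
    lo, hi = 1.0, 2.0
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if math.cos(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

pi_estimate = 2 * smallest_cosine_zero()
print(pi_estimate)  # agrees with math.pi to about 15 significant digits
```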
In a similar spirit, π can be defined using properties of the complex exponential, exp z, of a complex variable z. Like the cosine, the complex exponential can be defined in one of several ways. The set of complex numbers at which exp z is equal to one is then an (imaginary) arithmetic progression of the form: {…, −2πi, 0, 2πi, 4πi, …} = {2πki | k an integer},
and there is a unique positive real number π with this property.
A variation on the same idea, making use of sophisticated mathematical concepts of topology and algebra, is the following theorem: there is a unique (up to automorphism) continuous isomorphism from the quotient group R/Z of the real numbers under addition modulo the integers (the circle group), onto the multiplicative group of complex numbers of absolute value one. The number π is then defined as half the magnitude of the derivative of this homomorphism.
π is an irrational number, meaning that it cannot be written as the ratio of two integers. Fractions such as 22/7 and 355/113 are commonly used to approximate π, but no [[common fraction]] (ratio of whole numbers) can be its exact value. Because π is irrational, it has an infinite number of digits in its decimal representation, and does not settle into an infinitely repeating pattern of digits. There are several proofs that π is irrational; they are generally proofs by contradiction and require calculus. The degree to which π can be approximated by [[rational number]]s (called the irrationality measure) is not precisely known; estimates have established that the irrationality measure is larger than or at least equal to the measure of e but smaller than the measure of [[Liouville number]]s.
The digits of π have no apparent pattern and have passed tests for statistical randomness, including tests for normality; a number of infinite length is called normal when all possible sequences of digits (of any given length) appear equally often. The conjecture that π is a normal number has not been proven or disproven.
Since the advent of computers, a large number of digits of π have been available on which to perform statistical analysis. Yasumasa Kanada has performed detailed statistical analyses on the decimal digits of π, and found them consistent with normality; for example, the frequencies of the ten digits 0 to 9 were subjected to statistical significance tests, and no evidence of a pattern was found. Any random sequence of digits contains arbitrarily long subsequences that appear non-random, by the infinite monkey theorem. Thus, because the sequence of π's digits passes statistical tests for randomness, it contains some sequences of digits that may appear non-random, such as a sequence of six consecutive 9s that begins at the 762nd decimal place of the decimal representation of π. This is also called the "Feynman point" in mathematical folklore, after Richard Feynman, although no connection to Feynman is known.
The transcendence of π has two important consequences: First, π cannot be expressed using any finite combination of rational numbers and square roots or n-th roots. Second, since no transcendental number can be constructed with compass and straightedge, it is not possible to "square the circle". In other words, it is impossible to construct, using compass and straightedge alone, a square whose area is exactly equal to the area of a given circle. Squaring a circle was one of the important geometry problems of classical antiquity. Amateur mathematicians in modern times have sometimes attempted to square the circle and claim success—despite the fact that it is mathematically impossible.
An unsolved problem thus far is the question of whether or not the numbers π and e are algebraically independent ("relatively transcendental"). This would be resolved by Schanuel's conjecture – a currently unproven generalization of the Lindemann–Weierstrass theorem.
Truncating the continued fraction at any point yields a rational approximation for π; the first four of these are 3, 22/7, 333/106, and 355/113. These numbers are among the best-known and most widely used historical approximations of the constant. Each approximation generated in this way is a best rational approximation; that is, each is closer to π than any other fraction with the same or a smaller denominator. Because π is transcendental, it is by definition not an algebraic number and so cannot be a quadratic irrational. Therefore, π cannot have a periodic continued fraction. Although the simple continued fraction for π (with numerators all 1, shown above) also does not exhibit any other obvious pattern, several non-simple continued fractions do, such as:
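The convergents above can be recovered mechanically. A small sketch, taking the first continued-fraction coefficients of π as given (a full implementation would derive them from a high-precision value of π):

```python
from fractions import Fraction

# First coefficients of the simple continued fraction of pi: [3; 7, 15, 1, 292, ...]
coefficients = [3, 7, 15, 1, 292]

def convergent(coeffs):
    """Evaluate a truncated continued fraction from the bottom up."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value
    return value

for k in range(1, 5):
    print(convergent(coefficients[:k]))
# prints 3, 22/7, 333/106, 355/113
```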
where i is the imaginary unit satisfying i² = −1. The frequent appearance of π in complex analysis can be related to the behaviour of the exponential function of a complex variable, described by Euler's formula:
where the constant e is the base of the natural logarithm. This formula establishes a correspondence between imaginary powers of e and points on the unit circle centred at the origin of the complex plane. Setting φ = π in Euler's formula results in Euler's identity, e^(iπ) + 1 = 0, celebrated in mathematics due to it containing five important mathematical constants.
There are n different complex numbers z satisfying z^n = 1, and these are called the "n-th roots of unity" and are given by the formula: e^(2πik/n) for k = 0, 1, …, n − 1.
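The roots-of-unity formula is easy to verify numerically; a small check that each of the n points e^(2πik/n) really satisfies z^n = 1:

```python
import cmath

# The n sixth roots of unity, from the formula e^(2*pi*i*k/n).
n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for z in roots:
    # Each root satisfies z**n = 1 up to floating-point error.
    assert abs(z ** n - 1) < 1e-12

print(len(roots))  # 6 distinct roots, equally spaced on the unit circle
```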
In ancient China, values for π included 3.1547 (around 1 AD), √10 (100 AD, approximately 3.1623), and 142/45 (3rd century, approximately 3.1556). Around 265 AD, the Cao Wei mathematician Liu Hui created a polygon-based iterative algorithm, with which he constructed a 3,072-sided polygon to approximate π as 3.1416. Liu later invented a faster method of calculating π and obtained a value of 3.14 with a 96-sided polygon, by taking advantage of the fact that the differences in area of successive polygons form a geometric series with a factor of 4. Around 480 AD, Zu Chongzhi calculated that 3.1415926 < π < 3.1415927 and suggested the approximations 355/113 and 22/7, which he termed the milü ('close ratio') and yuelü ('approximate ratio') respectively, iterating with Liu Hui's algorithm up to a 12,288-sided polygon. With a correct value for its seven first decimal digits, Zu's result remained the most accurate approximation of π for the next 800 years.
The Indian astronomer Aryabhata used a value of 3.1416 in his Āryabhaṭīya (499 AD). Around 1220, Fibonacci computed 3.1418 using a polygonal method devised independently of Archimedes. Italian author Dante apparently employed the value 3 + √2/10 ≈ 3.14142.
The Persian astronomer Jamshīd al-Kāshī produced nine sexagesimal digits, roughly the equivalent of 16 decimal digits, in 1424, using a polygon with 3×2^28 sides, which stood as the world record for about 180 years. French mathematician François Viète in 1579 achieved nine digits with a polygon of 3×2^17 sides. Flemish mathematician Adriaan van Roomen arrived at 15 decimal places in 1593. In 1596, Dutch mathematician Ludolph van Ceulen reached 20 digits, a record he later increased to 35 digits (as a result, π was called the "Ludolphian number" in Germany until the early 20th century). Dutch scientist Willebrord Snellius reached 34 digits in 1621, and Austrian astronomer Christoph Grienberger arrived at 38 digits in 1630 using 10^40 sides. His evaluation was 3.14159 26535 89793 23846 26433 83279 50288 4196 < π < 3.14159 26535 89793 23846 26433 83279 50288 4199. Christiaan Huygens was able to arrive at 10 decimal places in 1654 using a slightly different method equivalent to Richardson extrapolation.
In 1593, François Viète published what is now known as Viète's formula, an infinite product (rather than an infinite sum, which is more typically used in π calculations): 2/π = (√2 / 2) · (√(2 + √2) / 2) · (√(2 + √(2 + √2)) / 2) ⋯
In 1655, John Wallis published what is now known as the Wallis product, also an infinite product:
In the 1660s, the English scientist Isaac Newton and German mathematician Gottfried Wilhelm Leibniz discovered calculus, which led to the development of many infinite series for approximating π. Newton himself used an arcsine series to compute a 15-digit approximation of π in 1665 or 1666, writing, "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time."
In 1671, James Gregory, and independently, Leibniz in 1673, discovered the Taylor series expansion for arctangent: arctan z = z − z³/3 + z⁵/5 − z⁷/7 + ⋯
This series, sometimes called the Gregory–Leibniz series, equals π/4 when evaluated with z = 1. But for z = 1, it converges impractically slowly (that is, approaches the answer very gradually), taking about ten times as many terms to calculate each additional digit.
In 1699, English mathematician Abraham Sharp used the Gregory–Leibniz series for z = 1/√3 to compute π to 71 digits, breaking the previous record of 39 digits, which was set with a polygonal algorithm.
In 1706, John Machin used the Gregory–Leibniz series to produce an algorithm that converged much faster: π/4 = 4 arctan(1/5) − arctan(1/239). Jones wrote of it: "This Series (among others for the same purpose, and drawn from the same Principle) I receiv'd from the Excellent Analyst, and my much Esteem'd Friend Mr. John Machin; and by means thereof, Van Ceulens Number, or that in Art. 64.38. may be Examin'd with all desireable Ease and Dispatch."
Machin reached 100 digits of π with this formula. Other mathematicians created variants, now known as Machin-like formulae, that were used to set several successive records for calculating digits of π.
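Machin's formula is still a pleasant exercise in arbitrary-precision arithmetic. A sketch using Python's `decimal` module, evaluating arctan(1/x) by its Taylor series:

```python
from decimal import Decimal, getcontext

def arctan_recip(x, digits):
    """arctan(1/x) for an integer x > 1, by the Taylor series, to ~digits digits."""
    getcontext().prec = digits + 10
    threshold = Decimal(10) ** -(digits + 5)
    power = Decimal(1) / x          # holds 1 / x**(2n+1)
    total = power
    x_squared = x * x
    n = 1
    while power > threshold:        # stop once terms fall below the target precision
        power /= x_squared
        term = power / (2 * n + 1)
        total += -term if n % 2 else term   # alternating signs
        n += 1
    return total

def machin_pi(digits):
    """pi via Machin's 1706 formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    getcontext().prec = digits + 10
    pi = 4 * (4 * arctan_recip(5, digits) - arctan_recip(239, digits))
    return +pi   # unary plus rounds to the current precision

print(machin_pi(50))
```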
Isaac Newton accelerated the convergence of the Gregory–Leibniz series in 1684 (in an unpublished work; others independently discovered the result):
Leonhard Euler popularized this series in his 1755 differential calculus textbook, and later used it with Machin-like formulae, including one with which he computed 20 digits of π in one hour.
Machin-like formulae remained the best-known method for calculating π well into the age of computers, and were used to set records for 250 years, culminating in a 620-digit approximation in 1946 by Daniel Ferguson – the best approximation achieved without the aid of a calculating device.
In 1844, a record was set by Zacharias Dase, who employed a Machin-like formula to calculate 200 decimals of π in his head at the behest of German mathematician Carl Friedrich Gauss.
In 1853, British mathematician William Shanks calculated π to 607 digits, but made a mistake in the 528th digit, rendering all subsequent digits incorrect. Though he calculated an additional 100 digits in 1873, bringing the total up to 707, his previous mistake rendered all the new digits incorrect as well.
As individual terms of this infinite series are added to the sum, the total gradually gets closer to π, and – with a sufficient number of terms – can get as close to π as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π.
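The slow convergence is easy to see numerically; a sketch that sums the first 500,000 terms of the Gregory–Leibniz series:

```python
import math

# Sum 500,000 terms of 4*(1 - 1/3 + 1/5 - 1/7 + ...).
total = 0.0
sign = 1.0
for k in range(500_000):
    total += sign * 4.0 / (2 * k + 1)
    sign = -sign

print(total)                  # about 3.14159065..., only five correct decimals
print(abs(total - math.pi))   # error on the order of 2e-6
```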
An infinite series for π (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is: π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ⋯
The following table compares the convergence rates of these two series:
  Terms summed:             1        2          3          4          5          Converges to
  Gregory–Leibniz series:   4.0000   2.6666...  3.4666...  2.8952...  3.3396...  π = 3.1415...
  Nilakantha's series:      3.0000   3.1666...  3.1333...  3.1452...  3.1396...  π = 3.1415...
After five terms, the sum of the Gregory–Leibniz series is within 0.2 of the correct value of π, whereas the sum of Nilakantha's series is within 0.002 of the correct value. Nilakantha's series converges faster and is more useful for computing digits of π. Series that converge even faster include Machin's series and Chudnovsky's series, the latter producing 14 correct decimal digits per term.
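The comparison above can be reproduced directly by computing partial sums of both series:

```python
def gregory_leibniz(terms):
    """Partial sum of 4 - 4/3 + 4/5 - 4/7 + ..."""
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(terms))

def nilakantha(terms):
    """Partial sum of 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..."""
    total = 3.0
    for k in range(1, terms):
        total += (-1) ** (k + 1) * 4 / ((2 * k) * (2 * k + 1) * (2 * k + 2))
    return total

for n in range(1, 6):
    print(n, round(gregory_leibniz(n), 4), round(nilakantha(n), 4))
```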
Swiss scientist Johann Heinrich Lambert in 1768 proved that π is irrational, meaning it is not equal to the quotient of any two integers. Lambert's proof exploited a continued-fraction representation of the tangent function (Lambert, "Mémoire sur quelques propriétés remarquables des quantités transcendantes circulaires et logarithmiques"). French mathematician Adrien-Marie Legendre proved in 1794 that π² is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that π is transcendental (Math. Ann. 20 (1882), 213–225), confirming a conjecture made by both Legendre and Euler; the proofs were afterwards modified and simplified by Hilbert, Hurwitz, and other writers.
The earliest known use of the Greek letter π alone to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in his 1706 work Synopsis Palmariorum Matheseos; or, a New Introduction to the Mathematics. The Greek letter appears on p. 243 in the phrase "1/2 Periphery (π)", calculated for a circle with radius one. However, Jones writes that his equations for π are from the "ready pen of the truly ingenious Mr. John Machin", leading to speculation that Machin may have employed the Greek letter before Jones. Jones's notation was not immediately adopted by other mathematicians, with the fraction notation still being used as late as 1767.
Euler started using the single-letter form beginning with his 1727 Essay Explaining the Properties of Air, though he used π = 6.28..., the ratio of periphery to radius, in this and some later writing. Euler first used π = 3.14... in his 1736 work Mechanica, and continued in his widely read 1748 work Introductio in analysin infinitorum (he wrote: "for the sake of brevity we will write this number as π; thus π is equal to half the circumference of a circle of radius 1"). Because Euler corresponded heavily with other mathematicians in Europe, the use of the Greek letter spread rapidly, and the practice was universally adopted thereafter in the Western world, though the definition still varied between 3.14... and 6.28... as late as 1761.
Two additional developments around 1980 once again accelerated the ability to compute π. First, the discovery of new iterative algorithms for computing π, which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern computations because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods.
The iterative algorithms were independently published in 1975–1976 by physicist Eugene Salamin and scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm.
The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent–Salamin algorithm doubles the number of digits in each iteration. In 1984, brothers Jonathan Borwein and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step.
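The doubling behaviour of the Gauss–Legendre (Brent–Salamin) iteration can be sketched in a few lines of `decimal` arithmetic; only about log₂(digits) iterations are needed:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    """pi via the quadratically convergent Gauss-Legendre (Brent-Salamin) iteration."""
    getcontext().prec = digits + 10
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(digits.bit_length() + 2):   # each pass roughly doubles the digits
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))
```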
This series converges much more rapidly than most arctan series, including Machin's formula. Bill Gosper was the first to use it for advances in the calculation of π, setting a record of 17 million digits in 1985. Ramanujan's formulae anticipated the modern algorithms developed by the Borwein brothers (Jonathan Borwein and Peter Borwein) and the Chudnovsky brothers. The Chudnovsky formula developed in 1987 is
It produces about 14 digits of π per term and has been used for several record-setting calculations, including the first to surpass 1 billion (10^9) digits in 1989 by the Chudnovsky brothers, and 10 trillion (10^13) digits in 2011 by Alexander Yee and Shigeru Kondo.
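A naive sketch of the Chudnovsky series, using the published constants; record-setting implementations instead use binary splitting to avoid recomputing factorials:

```python
from decimal import Decimal, getcontext
from math import factorial

# pi = 426880*sqrt(10005) / sum_k (6k)!*(13591409 + 545140134k) /
#                                 ((3k)!*(k!)^3 * (-262537412640768000)^k)
def chudnovsky_pi(digits):
    getcontext().prec = digits + 10
    total = Decimal(0)
    for k in range(digits // 14 + 2):    # each term contributes ~14 digits
        numerator = factorial(6 * k) * (13591409 + 545140134 * k)
        denominator = (factorial(3 * k) * factorial(k) ** 3
                       * (-262537412640768000) ** k)
        total += Decimal(numerator) / Decimal(denominator)
    return 426880 * Decimal(10005).sqrt() / total

print(chudnovsky_pi(70))
```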
In 2006, mathematician Simon Plouffe used the PSLQ integer relation algorithm (PSLQ stands for "Partial Sum of Least Squares") to generate several new formulae for π, conforming to the following template:
where q is e^π (Gelfond's constant), k is an odd number, and a, b, c are certain rational numbers that Plouffe computed.
Another Monte Carlo method for computing π is to draw a circle inscribed in a square, and randomly place dots in the square. The ratio of dots inside the circle to the total number of dots will approximately equal π/4.
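A sketch of this dart-throwing estimate, working in the unit quarter-circle so that the hit fraction approaches π/4:

```python
import math
import random

random.seed(0)          # fixed seed for reproducibility
samples = 100_000
inside = 0
for _ in range(samples):
    x, y = random.random(), random.random()
    if x * x + y * y < 1:       # point lands inside the quarter circle
        inside += 1

estimate = 4 * inside / samples
print(estimate)   # near 3.14, but accurate to only a couple of digits
```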
Another way to calculate π using probability is to start with a random walk, generated by a sequence of (fair) coin tosses: independent random variables X_k such that X_k ∈ {−1, 1} with equal probabilities. The associated random walk is W_n = X_1 + X_2 + ⋯ + X_n,
so that, for each n, W_n is drawn from a shifted and scaled binomial distribution. As n varies, W_n defines a (discrete) stochastic process. Then π can be calculated by π = lim 2n / E[|W_n|]² as n → ∞.
This Monte Carlo method is independent of any relation to circles, and is a consequence of the central limit theorem, discussed below.
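A sketch of the random-walk estimator: for a fair ±1 walk of n steps, E|W_n| approaches √(2n/π), so averaging |W_n| over many simulated walks gives a (rough) estimate of π:

```python
import math
import random

random.seed(0)
n_steps, n_walks = 1_000, 5_000
total_abs = 0
for _ in range(n_walks):
    # One walk: sum of n_steps fair coin tosses valued -1 or +1.
    position = sum(random.choice((-1, 1)) for _ in range(n_steps))
    total_abs += abs(position)

mean_abs = total_abs / n_walks
estimate = 2 * n_steps / mean_abs ** 2   # from E|W_n| ~ sqrt(2n/pi)
print(estimate)   # a rough estimate of pi, typically within a few percent
```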
These Monte Carlo methods for approximating π are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate π when speed or accuracy is desired.
Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms.
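A streaming spigot in the spirit of Rabinowitz and Wagon can be sketched compactly (this version follows Gibbons' well-known variant): decimal digits of π emerge one at a time, using only integer arithmetic.

```python
def pi_digit_stream():
    """Yield decimal digits of pi one at a time (Gibbons' spigot variant)."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

stream = pi_digit_stream()
digits = [next(stream) for _ in range(10)]
print(digits)  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```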
Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe:
This formula, unlike others before it, can produce any individual hexadecimal digit of without calculating all the preceding digits. Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. An important application of digit extraction algorithms is to validate new claims of record computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several randomly selected hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct.
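The digit-extraction trick can be sketched directly from the BBP series: modular exponentiation handles the terms before the target position, so no earlier digits are needed.

```python
def bbp_series(j, n):
    """Fractional part of sum over k of 16**(n-k) / (8k + j)."""
    s = 0.0
    # Head of the series: use modular exponentiation to keep numbers small.
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # Tail: ordinary floating point, terms shrink geometrically.
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digit(n):
    """The (n+1)-th hexadecimal digit of pi after the point (n = 0 gives the first)."""
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    return int(16 * x)

print([pi_hex_digit(n) for n in range(6)])  # [2, 4, 3, 15, 6, 10] -> 0x243F6A...
```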
Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (10^15th) bit of π, which turned out to be 0. In September 2010, a Yahoo! employee used the company's Apache Hadoop application on one thousand computers over a 23-day period to compute 256 bits of π at the two-quadrillionth (2×10^15th) bit, which also happens to be zero.
In 2022, Plouffe found a base-10 algorithm for calculating digits of π.
π appears in formulae for areas and volumes of geometrical shapes based on circles, such as [[ellipse]]s, [[sphere]]s, cones, and [[torus|tori]]. Below are some of the more common formulae that involve π.
Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter π times its width. The Reuleaux triangle (formed by the intersection of three circles with the sides of an equilateral triangle as their radii) has the smallest possible area for its width and the circle the largest. There also exist non-circular smooth and even algebraic curves of constant width.
Definite integrals that describe circumference, area, or volume of shapes generated by circles typically have values that involve π. For example, an integral that specifies half the area of a circle of radius one is given by: ∫ from −1 to 1 of √(1 − x²) dx = π/2.
In that integral, the function √(1 − x²) represents the height over the x-axis of a semicircle (the square root is a consequence of the Pythagorean theorem), and the integral computes the area below the semicircle. The existence of such integrals makes π an algebraic period.
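The semicircle integral is easy to check numerically; a sketch with the midpoint rule:

```python
import math

def semicircle_area(intervals=200_000):
    """Midpoint-rule approximation of the area under sqrt(1 - x^2) on [-1, 1]."""
    width = 2.0 / intervals
    total = 0.0
    for k in range(intervals):
        x = -1.0 + (k + 0.5) * width
        total += math.sqrt(1.0 - x * x) * width
    return total

print(2 * semicircle_area())  # twice the half-area: close to 3.14159...
```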
Common trigonometric functions have periods that are multiples of π; for example, sine and cosine have period 2π, so for any angle θ and any integer k, sin θ = sin(θ + 2πk) and cos θ = cos(θ + 2πk).
In many applications, π plays a distinguished role as an eigenvalue. For example, an idealized vibrating string can be modelled as the graph of a function f on the unit interval [0, 1], with fixed ends f(0) = f(1) = 0. The modes of vibration of the string are solutions of the differential equation f″(x) + λ f(x) = 0, or f″(x) = −λ f(x). Thus λ is an eigenvalue of the second derivative operator f ↦ f″, and is constrained by Sturm–Liouville theory to take on only certain specific values. It must be positive, since the operator is negative definite, so it is convenient to write λ = ν², where ν > 0 is called the wavenumber. Then f(x) = sin(νx) satisfies the boundary conditions and the differential equation with ν = π.
The value ν = π is, in fact, the least such value of the wavenumber, and is associated with the fundamental mode of vibration of the string. One way to show this is by estimating the energy, which satisfies Wirtinger's inequality: for a function f with f(0) = f(1) = 0, and with f and f′ both square integrable, we have: π² ∫ from 0 to 1 of |f(x)|² dx ≤ ∫ from 0 to 1 of |f′(x)|² dx,
with equality precisely when f is a multiple of sin(πx). Here π appears as an optimal constant in Wirtinger's inequality, and it follows that it is the smallest wavenumber, using the variational characterization of the eigenvalue. As a consequence, π is the smallest singular value of the derivative operator on the space of functions on [0, 1] vanishing at both endpoints (the Sobolev space H¹₀(0, 1)).
and equality is clearly achieved for the circle, since in that case A = πr² and P = 2πr.
Ultimately, as a consequence of the isoperimetric inequality, π appears in the optimal constant for the critical Sobolev inequality in n dimensions, which thus characterizes the role of π in many physical phenomena as well, for example those of classical potential theory. In two dimensions, the critical Sobolev inequality is
for f a smooth function with compact support in R², where ∇f is the gradient of f, and ‖f‖₂ and ‖∇f‖₁ refer respectively to the L² and L¹ norms. The Sobolev inequality is equivalent to the isoperimetric inequality (in any dimension), with the same best constants.
Wirtinger's inequality also generalizes to higher-dimensional Poincaré inequalities that provide best constants for the Dirichlet energy of an n-dimensional membrane. Specifically, π is the greatest constant such that
for all convex subsets G of Rⁿ of diameter 1, and square-integrable functions u on G of mean zero. Just as Wirtinger's inequality is the variational form of the Dirichlet eigenvalue problem in one dimension, the Poincaré inequality is the variational form of the Neumann eigenvalue problem, in any dimension.
Although there are several different conventions for the Fourier transform and its inverse, any such convention must involve π somewhere. The above is the most canonical definition, however, giving the unique unitary operator on L² that is also an algebra homomorphism of L¹ to L∞.
The Heisenberg uncertainty principle also contains the number π. The uncertainty principle gives a sharp lower bound on the extent to which it is possible to localize a function both in space and in frequency: with our conventions for the Fourier transform,
The physical consequence, about the uncertainty in simultaneous position and momentum observations of a quantum mechanical system, is discussed below. The appearance of π in the formulae of Fourier analysis is ultimately a consequence of the Stone–von Neumann theorem, asserting the uniqueness of the Schrödinger representation of the Heisenberg group.
The factor of 1/√(2π) makes the area under the graph of f equal to one, as is required for a probability distribution. This follows from a change of variables in the Gaussian integral: ∫ from −∞ to ∞ of e^(−u²) du = √π,
which says that the area under the basic bell curve in the figure is equal to the square root of π.
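A quick numerical check of the Gaussian integral: integrating e^(−x²) over a wide interval (the tails beyond |x| = 10 are negligible) gives √π, so the square of the result recovers π.

```python
import math

def gaussian_integral(half_width=10.0, intervals=100_000):
    """Midpoint-rule approximation of the integral of exp(-x^2) over [-w, w]."""
    width = 2 * half_width / intervals
    total = 0.0
    for k in range(intervals):
        x = -half_width + (k + 0.5) * width
        total += math.exp(-x * x) * width
    return total

print(gaussian_integral() ** 2)  # close to pi
```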
The central limit theorem explains the central role of normal distributions, and thus of π, in probability and statistics. This theorem is ultimately connected with the spectral characterization of π as the eigenvalue associated with the Heisenberg uncertainty principle, and the fact that equality holds in the uncertainty principle only for the Gaussian function. Equivalently, π is the unique constant making the Gaussian normal distribution e^(−πx²) equal to its own Fourier transform. Indeed, the "whole business" of establishing the fundamental theorems of Fourier analysis reduces to the Gaussian integral.
where χ(S) is the Euler characteristic, which is an integer. An example is the surface area of a sphere S of curvature 1 (so that its radius of curvature, which coincides with its radius, is also 1). The Euler characteristic of a sphere can be computed from its homology groups and is found to be equal to two. Thus we have A(S) = ∫ over S of 1 dA = 2π · 2 = 4π,
reproducing the formula for the surface area of a sphere of radius 1.
The constant π appears in many other integral formulae in topology, in particular, those involving characteristic classes via the Chern–Weil homomorphism.
Although the curve γ is not a circle, and hence does not have any obvious connection to the constant π, a standard proof of this result uses Morera's theorem, which implies that the integral is invariant under homotopy of the curve, so that it can be deformed to a circle and then integrated explicitly in polar coordinates. More generally, it is true that if a rectifiable closed curve γ does not contain z₀, then the above integral is 2πi times the winding number of the curve.
The general form of Cauchy's integral formula establishes the relationship between the values of a complex analytic function f(z) on the Jordan curve γ and the value of f(z) at any interior point z₀ of γ:
provided f(z) is analytic in the region enclosed by γ and extends continuously to γ. Cauchy's integral formula is a special case of the residue theorem: if g(z) is a meromorphic function in the region enclosed by γ and is continuous in a neighbourhood of γ, then
where the sum is of the residues at the poles of g(z).
The factor of 1/(2π) is necessary to ensure that Φ is the fundamental solution of the Poisson equation in R²:
where δ is the Dirac delta function.
In higher dimensions, factors of π are present because of a normalization by the n-dimensional volume of the unit n-sphere. For example, in three dimensions, the Newtonian potential is:
which has the 2-dimensional volume (i.e., the area) of the unit 2-sphere in the denominator.
For a closed curve, this quantity is equal to 2πN for an integer N called the turning number or index of the curve. N is the winding number about the origin of the hodograph of the curve parametrized by arclength, a new curve lying on the unit circle, described by the normalized tangent vector at each point on the original curve. Equivalently, N is the degree of the map taking each point on the curve to the corresponding point on the hodograph, analogous to the Gauss map for surfaces.
The gamma function is defined by its Weierstrass product development:

Γ(z) = (e^(−γz)/z) ∏_{n=1}^∞ e^(z/n)/(1 + z/n),
where γ is the Euler–Mascheroni constant. Evaluated at z = 1/2 and squared, the equation Γ(1/2)² = π reduces to the Wallis product formula. The gamma function is also connected to the Riemann zeta function and identities for the functional determinant, in which the constant π plays an important role.
The gamma function is used to calculate the volume V_n(r) of the n-ball of radius r in Euclidean n-dimensional space, and the surface area S_{n−1}(r) of its boundary, the (n − 1)-sphere:

V_n(r) = π^(n/2) r^n / Γ(n/2 + 1),   S_{n−1}(r) = n π^(n/2) r^(n−1) / Γ(n/2 + 1).
Further, it follows from the functional equation that

2πr = S_{n+1}(r) / V_n(r).
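The n-ball volume V_n(r) = π^(n/2) r^n / Γ(n/2 + 1) is easy to evaluate with the standard library's gamma function. A minimal sketch (the helper names are ours, not from the source):

```python
import math

def ball_volume(n, r=1.0):
    """Volume of the n-dimensional ball of radius r:
    V_n(r) = pi^(n/2) * r^n / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

def sphere_area(n, r=1.0):
    """Surface area of the (n-1)-sphere bounding the n-ball:
    S_{n-1}(r) = n * V_n(r) / r."""
    return n * ball_volume(n, r) / r

print(ball_volume(2))  # pi (area of the unit disk)
print(ball_volume(3))  # 4*pi/3 (volume of the unit ball)
print(sphere_area(3))  # 4*pi (area of the unit 2-sphere)
```

Low-dimensional cases reproduce the familiar circle and sphere formulas, which is a useful sanity check on the general expression.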
The gamma function can be used to create a simple approximation to the factorial function n! for large n: n! ∼ √(2πn) (n/e)^n, which is known as Stirling's approximation. Equivalently,

π = lim_{n→∞} e^(2n) (n!)² / (2 n^(2n+1)).
As a geometrical application of Stirling's approximation, let Δ_n denote the simplex in n-dimensional Euclidean space, and (n + 1)Δ_n denote the simplex having all of its sides scaled up by a factor of n + 1. Then

Vol((n + 1)Δ_n) = (n + 1)^n / n!.
Ehrhart's volume conjecture is that this is the (optimal) upper bound on the volume of a convex body containing only one lattice point.
Finding a simple solution for the infinite series ∑_{n=1}^∞ 1/n² was a famous problem in mathematics called the Basel problem. Leonhard Euler solved it in 1735 when he showed it was equal to π²/6. Euler's result leads to the number theory result that the probability of two random numbers being relatively prime (that is, having no shared factors) is equal to 6/π². This theorem was proved by Ernesto Cesàro in 1881. This probability is based on the observation that the probability that any number is divisible by a prime p is 1/p (for example, every 7th integer is divisible by 7). Hence the probability that two numbers are both divisible by this prime is 1/p², and the probability that at least one of them is not is 1 − 1/p². For distinct primes, these divisibility events are mutually independent; so the probability that two numbers are relatively prime is given by a product over all primes:

∏_p (1 − 1/p²) = (∑_{n=1}^∞ 1/n²)^(−1) = 6/π² ≈ 61%.
This probability can be used in conjunction with a random number generator to approximate π using a Monte Carlo approach.
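As a sketch of that Monte Carlo approach: since the probability that two random integers are coprime is 6/π², an empirical estimate p of that probability yields π ≈ √(6/p). The code below samples from a large finite range as a stand-in for "random integers" (a simplifying assumption; the function name and range are ours):

```python
import math
import random

def estimate_pi(trials=200000, seed=0):
    """Estimate pi from the probability that two random integers are
    coprime, which the Basel problem shows equals 6/pi^2."""
    rng = random.Random(seed)
    coprime = 0
    for _ in range(trials):
        a = rng.randrange(1, 10**9)
        b = rng.randrange(1, 10**9)
        if math.gcd(a, b) == 1:
            coprime += 1
    p = coprime / trials  # empirical probability, roughly 6/pi^2
    return math.sqrt(6 / p)

print(estimate_pi())  # close to 3.14159...
```

The estimate converges slowly (the error shrinks like 1/√trials), so this is a demonstration of the theorem rather than a practical way to compute digits of π.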
The solution to the Basel problem implies that the geometrically derived quantity π is connected in a deep way to the distribution of prime numbers. This is a special case of Weil's conjecture on Tamagawa numbers, which asserts the equality of similar such infinite products of arithmetic quantities, localized at each prime p, and a geometrical quantity: the reciprocal of the volume of a certain locally symmetric space. In the case of the Basel problem, it is the hyperbolic 3-manifold SL₂(R)/SL₂(Z).
The zeta function also satisfies Riemann's functional equation, which involves π as well as the gamma function:

ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s).
Furthermore, the derivative of the zeta function satisfies

exp(−ζ′(0)) = √(2π).
A consequence is that π can be obtained from the functional determinant of the harmonic oscillator. This functional determinant can be computed via a product expansion, and is equivalent to the Wallis product formula. The calculation can be recast in quantum mechanics, specifically the variational approach to the Bohr model.
There is a unique character on T = R/Z, up to complex conjugation, that is a group isomorphism. Using the Haar measure on the circle group, the constant π is half the magnitude of the Radon–Nikodym derivative of this character. The other characters have derivatives whose magnitudes are positive integral multiples of 2π. As a result, the constant π is the unique number such that the group T, equipped with its Haar measure, is Pontrjagin dual to the lattice of integral multiples of 2π. This is a version of the one-dimensional Poisson summation formula.
Modular forms are holomorphic functions in the upper half plane characterized by their transformation properties under the modular group SL₂(Z) (or its various subgroups), a lattice in the group SL₂(R). An example is the Jacobi theta function

θ(z, τ) = ∑_{n=−∞}^∞ e^(2πinz + πin²τ),
which is a kind of modular form called a Jacobi form. This is sometimes written in terms of the nome q = e^(πiτ).
The constant π is the unique constant making the Jacobi theta function an automorphic form, which means that it transforms in a specific way. Certain identities hold for all automorphic forms. An example is

θ(z + τ, τ) = e^(−πiτ − 2πiz) θ(z, τ),
which implies that θ transforms as a representation under the discrete Heisenberg group. General modular forms and other theta functions also involve π, once again because of the Stone–von Neumann theorem.
The Cauchy distribution g(x) = 1/(π(x² + 1)) is a probability density function. The total probability is equal to one, owing to the integral:

∫_{−∞}^∞ 1/(x² + 1) dx = π.
The Shannon entropy of the Cauchy distribution is equal to ln(4π), which also involves π.
The Cauchy distribution plays an important role in potential theory because it is the simplest Furstenberg measure, the classical Poisson kernel associated with a Brownian motion in a half-plane. Conjugate harmonic functions, and so also the Hilbert transform, are associated with the asymptotics of the Poisson kernel. The Hilbert transform H is the integral transform given by the Cauchy principal value of the singular integral

Hf(t) = (1/π) p.v. ∫_{−∞}^∞ f(x)/(x − t) dx.
The constant π is the unique (positive) normalizing factor such that H defines a linear complex structure on the Hilbert space of square-integrable real-valued functions on the real line. The Hilbert transform, like the Fourier transform, can be characterized purely in terms of its transformation properties on the Hilbert space L²(R): up to a normalization factor, it is the unique bounded linear operator that commutes with positive dilations and anti-commutes with all reflections of the real line. The constant π is the unique normalizing factor that makes this transformation unitary.
One of the key formulae of quantum mechanics is Heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (Δx) and momentum (Δp) cannot both be arbitrarily small at the same time (where h is the Planck constant):

Δx Δp ≥ h/(4π).
The fact that π is approximately equal to 3 plays a role in the relatively long lifetime of orthopositronium. The inverse lifetime to lowest order in the fine-structure constant α is

1/τ = (2(π² − 9)/(9π)) m_e α⁶,
where m_e is the mass of the electron.
π is present in some structural engineering formulae, such as the buckling formula derived by Euler, which gives the maximum axial load F that a long, slender column of length L, modulus of elasticity E, and area moment of inertia I can carry without buckling:

F = π²EI / L².
The field of fluid dynamics contains π in Stokes' law, which approximates the frictional force F exerted on small, spherical objects of radius R, moving with velocity v in a fluid with dynamic viscosity η:

F = 6πηRv.
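Both engineering formulas are direct to evaluate. A short sketch with hypothetical input values in SI units (the function names and the example column and sphere are illustrative, not from the source):

```python
import math

def euler_buckling_load(E, I, L):
    """Euler's critical load for a pinned-pinned slender column:
    F = pi^2 * E * I / L^2 (SI units)."""
    return math.pi ** 2 * E * I / L ** 2

def stokes_drag(eta, R, v):
    """Stokes' law: drag on a small sphere, F = 6*pi*eta*R*v (SI units)."""
    return 6 * math.pi * eta * R * v

# Hypothetical steel column: E = 200 GPa, I = 1e-6 m^4, L = 3 m.
print(euler_buckling_load(200e9, 1e-6, 3.0))  # critical load in newtons
# Hypothetical droplet in air: eta = 1.8e-5 Pa*s, R = 1e-5 m, v = 0.01 m/s.
print(stokes_drag(1.8e-5, 1e-5, 0.01))  # drag force in newtons
```

The buckling expression assumes the simplest (pinned-pinned) end conditions, matching the "long, slender column of length L" described above.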
In electromagnetics, the vacuum permeability constant μ₀ appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation. Before 20 May 2019, it was defined as exactly

μ₀ = 4π × 10⁻⁷ H/m.
One common technique is to memorize a story or poem in which the word lengths represent the digits of π: the first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. Such memorization aids are called mnemonics. An early example of a mnemonic for pi, originally devised by English scientist James Jeans, is "How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." When a poem is used, it is sometimes referred to as a piem. Poems for memorizing π have been composed in several languages in addition to English. Record-setting π memorizers typically do not rely on poems, but instead use methods such as remembering number patterns and the method of loci.
A few authors have used the digits of π to establish a new form of constrained writing, where the word lengths are required to represent the digits of π. The Cadaeic Cadenza contains the first 3835 digits of π in this manner, and the full-length book Not a Wake contains 10,000 words, each representing one digit of π.
In the Palais de la Découverte (a science museum in Paris) there is a circular room known as the pi room. On its wall are inscribed 707 digits of π. The digits are large wooden characters attached to the dome-like ceiling. The digits were based on an 1873 calculation by English mathematician William Shanks, which included an error beginning at the 528th digit. The error was detected in 1946 and corrected in 1949.
In Carl Sagan's 1985 novel Contact it is suggested that the creator of the universe buried a message deep within the digits of π. This part of the story was omitted from the film adaptation of the novel.
In the United States, Pi Day falls on 14 March (written 3/14 in the US style), and is popular among students. π and its digital representation are often used by self-described "math geeks" for inside jokes among mathematically and technologically minded groups. A college cheer variously attributed to the Massachusetts Institute of Technology or the Rensselaer Polytechnic Institute includes "3.14159". Pi Day in 2015 was particularly significant because the date and time 3/14/15 9:26:53 reflected many more digits of pi. In parts of the world where dates are commonly noted in day/month/year format, 22 July represents "Pi Approximation Day", as 22/7 ≈ 3.142857.
Some have proposed replacing π by τ = 2π, arguing that τ, as the number of radians in one turn or the ratio of a circle's circumference to its radius, is more natural than π and simplifies many formulae.
In 1897, an amateur mathematician attempted to persuade the Indiana legislature to pass the Indiana Pi Bill, which described a method to square the circle and contained text that implied various incorrect values for π, including 3.2. The bill is notorious as an attempt to establish a value of a mathematical constant by legislative fiat. The bill was passed by the Indiana House of Representatives, but rejected by the Senate, and thus it did not become law.
In contemporary internet culture, individuals and organizations frequently pay homage to the number π. For instance, the computer scientist Donald Knuth let the version numbers of his program TeX approach π: the versions are 3, 3.1, 3.14, and so forth.