In mathematics and digital electronics, a binary number is a number expressed in the base-2 numeral system or binary numeral system, which uses only two symbols: typically "0" (zero) and "1" (one).
The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices.
The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.
It is based on the Taoist duality of yin and yang.
Ba gua and a set of 64 hexagrams ("sixty-four" gua), analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty. The Song Dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically. Viewing the least significant bit on top of single hexagrams in Shao Yong's square, and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.
In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature". (See Bacon's cipher.)
John Napier in 1617 described a system he called location arithmetic for doing binary calculations using a nonpositional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers. Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.
In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.
In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "kitchen", where he had assembled it), which calculated using binary addition. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.
The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating point numbers.
The numeric value represented in each case depends on the value assigned to each symbol. In a computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.
In keeping with customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed or suffixed in order to indicate their base, or radix. The following notations are equivalent:
When spoken, binary numerals are usually read digit by digit, in order to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit, and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit.
Decimal   Binary
0         0
1         1
2         10
3         11
4         100
5         101
6         110
7         111
8         1000
9         1001
10        1010
11        1011
12        1100
13        1101
14        1110
15        1111
In the binary system, each digit represents an increasing power of 2, with the rightmost digit representing 2^{0}, the next representing 2^{1}, then 2^{2}, and so on. The equivalent decimal representation of a binary number is the sum of the powers of 2 which each digit represents. For example, the binary number 100101 is converted to decimal form as follows:
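The positional evaluation just described can be sketched in Python (the helper `to_decimal` is illustrative, not part of the original text):

```python
def to_decimal(bits: str) -> int:
    # Sum the power of 2 that each '1' digit represents, counting
    # positions from the rightmost (least significant) digit.
    total = 0
    for position, digit in enumerate(reversed(bits)):
        if digit == "1":
            total += 2 ** position
    return total

print(to_decimal("100101"))  # 1×2^5 + 0×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 37
```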
Fraction  Decimal                 Binary                         Fractional approximation
1/1       1 or 0.999...           1 or 0.111...                  1/2 + 1/4 + 1/8 + ...
1/2       0.5 or 0.4999...        0.1 or 0.0111...               1/4 + 1/8 + 1/16 + ...
1/3       0.333...                0.010101...                    1/4 + 1/16 + 1/64 + ...
1/4       0.25 or 0.24999...      0.01 or 0.00111...             1/8 + 1/16 + 1/32 + ...
1/5       0.2 or 0.1999...        0.00110011...                  1/8 + 1/16 + 1/128 + ...
1/6       0.1666...               0.0010101...                   1/8 + 1/32 + 1/128 + ...
1/7       0.142857142857...       0.001001...                    1/8 + 1/64 + 1/512 + ...
1/8       0.125 or 0.124999...    0.001 or 0.000111...           1/16 + 1/32 + 1/64 + ...
1/9       0.111...                0.000111000111...              1/16 + 1/32 + 1/64 + ...
1/10      0.1 or 0.0999...        0.000110011...                 1/16 + 1/32 + 1/256 + ...
1/11      0.090909...             0.00010111010001011101...      1/16 + 1/64 + 1/128 + ...
1/12      0.08333...              0.00010101...                  1/16 + 1/64 + 1/256 + ...
1/13      0.076923076923...       0.000100111011000100111011...  1/16 + 1/128 + 1/256 + ...
1/14      0.0714285714285...      0.0001001001...                1/16 + 1/128 + 1/1024 + ...
1/15      0.0666...               0.00010001...                  1/16 + 1/256 + ...
1/16      0.0625 or 0.0624999...  0.0001 or 0.0000111...         1/32 + 1/64 + 1/128 + ...
This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
    1 1 1 1 1     (carried digits)
      0 1 1 0 1
  +   1 0 1 1 1
  —————————————
  = 1 0 0 1 0 0 = 36
In this example, two numerals are being added together: 01101_{2} (13_{10}) and 10111_{2} (23_{10}). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10_{2}. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10_{2} again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11_{2}. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100_{2} (36 decimal).
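The column-by-column carrying procedure can be sketched as follows (the function name `add_binary` is illustrative):

```python
def add_binary(a: str, b: str) -> str:
    # Add two binary numerals column by column, rightmost first,
    # carrying the excess into the next position to the left.
    result, carry = [], 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += int(a[i])
            i -= 1
        if j >= 0:
            total += int(b[j])
            j -= 1
        result.append(str(total % 2))  # digit written at the bottom
        carry = total // 2             # carry into the next column
    return "".join(reversed(result))

print(add_binary("01101", "10111"))  # 100100 (13 + 23 = 36)
```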
When computers must add two numbers, the rule that x XOR y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well.
  Binary                      Decimal
      1 1 1 1 1    likewise       9 9 9 9 9
  +           1               +           1
  —————————————               —————————————
    1 0 0 0 0 0                 1 0 0 0 0 0
Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 0_{2} (958_{10}) and 1 0 1 0 1 1 0 0 1 1_{2} (691_{10}), using the traditional carry method on the left, and the long carry method on the right:
  Traditional Carry Method                    Long Carry Method
                                     vs.
    1 1 1   1 1 1 1 1   (carried digits)        1 ←     1 ←             carry the 1 until it is one digit past the "string" below,
      1 1 1 0 1 1 1 1 1 0                         1 1 1 0 1 1 1 1 1 0   cross out the "string",
  +   1 0 1 0 1 1 0 0 1 1                     +   1 0 1 0 1 1 0 0 1 1   and cross out the digit that was added to it
  ———————————————————————                     ———————————————————————
  = 1 1 0 0 1 1 1 0 0 0 1                     = 1 1 0 0 1 1 1 0 0 0 1
The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 1_{2} (1649_{10}). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort.
The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation $\lor$. The difference is that $1 \lor 1 = 1$, while $1 + 1 = 10$.
        *   * * *    (starred columns are borrowed from)
    1 1 0 1 1 1 0
  −     1 0 1 1 1
  ———————————————
  = 1 0 1 0 1 1 1
    *                (starred columns are borrowed from)
    1 0 1 1 1 1 1
  −   1 0 1 0 1 1
  ———————————————
  = 0 1 1 0 1 0 0
Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers—most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula:
A − B = A + not B + 1
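A minimal sketch of this formula on fixed-width words, assuming an 8-bit word size for illustration:

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF: keeps results within the 8-bit word

def subtract(a: int, b: int) -> int:
    # A − B = A + (not B) + 1, truncated to the word size, so the
    # same adder hardware performs both addition and subtraction.
    return (a + (~b & MASK) + 1) & MASK

print(subtract(23, 13))  # 10
print(subtract(13, 23))  # 246, the two's-complement encoding of −10 in 8 bits
```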
Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:
For example, the binary numbers 1011 and 1010 are multiplied as follows:
            1 0 1 1    (A)
          × 1 0 1 0    (B)
          —————————
            0 0 0 0    ← Corresponds to the rightmost 'zero' in B
  +       1 0 1 1      ← Corresponds to the next 'one' in B
  +     0 0 0 0
  +   1 0 1 1
  ———————————————
  = 1 1 0 1 1 1 0
Binary numbers can also be multiplied with bits after a binary point:
              1 0 1 . 1 0 1      (A = 5.625 in decimal)
            × 1 1 0 . 0 1        (B = 6.25 in decimal)
            —————————————
                1 . 0 1 1 0 1    ← Corresponds to a 'one' in B
  +           0 0 . 0 0 0 0      ← Corresponds to a 'zero' in B
  +         0 0 0 . 0 0 0
  +       1 0 1 1 . 0 1
  +     1 0 1 1 0 . 1
  ———————————————————————
  = 1 0 0 0 1 1 . 0 0 1 0 1      (35.15625 in decimal)
See also Booth's multiplication algorithm.
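The partial-product scheme above (plain shift-and-add, not Booth's algorithm) can be sketched as:

```python
def multiply(a: str, b: str) -> str:
    # Each '1' bit of the multiplier b contributes a copy of the
    # multiplicand a, shifted left by that bit's position.
    multiplicand = int(a, 2)
    total = 0
    for position, digit in enumerate(reversed(b)):
        if digit == "1":
            total += multiplicand << position
    return bin(total)[2:]

print(multiply("1011", "1010"))  # 1101110 (11 × 10 = 110)
```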
The binary multiplication table is the same as the truth table of the logical conjunction operation $\land$.
In the example below, the divisor is 101_{2}, or 5 decimal, while the dividend is 11011_{2}, or 27 decimal. The procedure is the same as that of decimal long division; here, the divisor 101_{2} goes into the first three digits 110_{2} of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:
                1
          ———————————
  1 0 1 ) 1 1 0 1 1
        − 1 0 1
          —————
          0 0 1
The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:
                1 0 1
          ———————————
  1 0 1 ) 1 1 0 1 1
        − 1 0 1
          —————
            1 1 1
          − 1 0 1
            —————
              1 0
Thus, the quotient of 11011_{2} divided by 101_{2} is 101_{2}, as shown on the top line, while the remainder, shown on the bottom line, is 10_{2}. In decimal, 27 divided by 5 is 5, with a remainder of 2.
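A sketch of the long-division procedure, with illustrative names:

```python
def divide(dividend: str, divisor: str) -> tuple:
    # Digit-by-digit long division: bring down one bit of the dividend
    # at a time, and subtract the divisor whenever it fits.
    d = int(divisor, 2)
    quotient, remainder = "", 0
    for bit in dividend:
        remainder = remainder * 2 + int(bit)
        if remainder >= d:
            remainder -= d
            quotient += "1"
        else:
            quotient += "0"
    return quotient.lstrip("0") or "0", bin(remainder)[2:]

print(divide("11011", "101"))  # ('101', '10'): 27 ÷ 5 = 5, remainder 2
```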
Taking the square root of 1010001_{2} (81_{10}) digit by digit gives 1001_{2} (9_{10}):

               1 0 0 1
              —————————————
            √ 1 0 1 0 0 0 1
              1
              —————————————
    101         0 1
                  0
              —————————————
    1001        1 0 0
                    0
              —————————————
    10001       1 0 0 0 1
                1 0 0 0 1
              —————————————
                        0
Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, to convert 10010101101_{2} to decimal:
Prior value   × 2 +   Next bit   Next value
0             × 2 +   1          = 1
1             × 2 +   0          = 2
2             × 2 +   0          = 4
4             × 2 +   1          = 9
9             × 2 +   0          = 18
18            × 2 +   1          = 37
37            × 2 +   0          = 74
74            × 2 +   1          = 149
149           × 2 +   1          = 299
299           × 2 +   0          = 598
598           × 2 +   1          = 1197
The result is 1197_{10}. Note that the first Prior Value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.
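The Horner scheme amounts to one doubling and one addition per bit; a sketch:

```python
def horner_to_decimal(bits: str) -> int:
    # Process the most significant bit first: double the prior value
    # and add the next bit to produce the next value.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(horner_to_decimal("10010101101"))  # 1197
```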
The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving.
In a fractional binary number such as 0.11010110101_{2}, the first digit is worth $\tfrac{1}{2}$, the second $(\tfrac{1}{2})^2 = \tfrac{1}{4}$, etc. So if there is a 1 in the first place after the radix point, then the number is at least $\tfrac{1}{2}$, and conversely; doubling such a number gives a result of at least 1. This suggests the algorithm: repeatedly double the number to be converted, record whether the result is at least 1, and then throw away the integer part.
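A sketch of this doubling algorithm, using an exact integer numerator and denominator (rather than floating point) so the repeated doubling stays exact:

```python
def fraction_to_binary(numerator: int, denominator: int, digits: int) -> str:
    # Repeatedly double the fraction; the integer part produced at each
    # step is the next binary digit, and is then thrown away.
    bits = "0."
    for _ in range(digits):
        numerator *= 2
        bits += str(numerator // denominator)  # 1 if the result is at least 1
        numerator %= denominator               # throw away the integer part
    return bits

print(fraction_to_binary(1, 3, 6))   # 0.010101
print(fraction_to_binary(1, 10, 8))  # 0.00011001
```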
For example, $\tfrac{1}{3}$ (0.333... in decimal), in binary, is:
Converting                       Result
1/3 = 0.333...                   0.
0.333... × 2 = 0.666... < 1      0.0
0.666... × 2 = 1.333... ≥ 1      0.01
0.333... × 2 = 0.666... < 1      0.010
0.666... × 2 = 1.333... ≥ 1      0.0101
Thus the repeating decimal fraction 0.333... is equivalent to the repeating binary fraction 0.010101... .
Or for example, 0.1_{10}, in binary, is:
Converting               Result
0.1                      0.
0.1 × 2 = 0.2 < 1        0.0
0.2 × 2 = 0.4 < 1        0.00
0.4 × 2 = 0.8 < 1        0.000
0.8 × 2 = 1.6 ≥ 1        0.0001
0.6 × 2 = 1.2 ≥ 1        0.00011
0.2 × 2 = 0.4 < 1        0.000110
0.4 × 2 = 0.8 < 1        0.0001100
0.8 × 2 = 1.6 ≥ 1        0.00011001
0.6 × 2 = 1.2 ≥ 1        0.000110011
0.2 × 2 = 0.4 < 1        0.0001100110
This is also a repeating binary fraction, 0.000110011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 0.1 + ... + 0.1 (10 additions) differs from 1 in floating-point arithmetic. In fact, the only binary fractions with terminating expansions are those of the form of an integer divided by a power of 2, which 1/10 is not.
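This effect is easy to observe in, for example, Python, where `0.1` is stored as the nearest double-precision binary fraction:

```python
# 1/10 has no terminating binary expansion, so the stored value of 0.1
# is only an approximation; ten repeated additions accumulate the error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```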
The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example:
Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly—first converting ($x$ in binary) into ($x$ in hexadecimal) and then converting ($x$ in hexadecimal) into ($x$ in decimal).
For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10^{ k}, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10^{ k} and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.
To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits:
To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra 0 bits at the left (called padding). For example:
To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values:
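Python's built-in conversions illustrate all three directions (the value 1A3_{16} is an arbitrary example):

```python
x = 0x1A3                   # hexadecimal literal: 1A3 in base 16
print(format(x, "b"))       # 110100011 — hex → binary, four bits per hex digit
print(x)                    # 419 — hex → decimal: 1×16² + 10×16¹ + 3×16⁰
print(int("110100011", 2))  # 419 — binary → decimal
```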
Octal    Binary
0        000
1        001
2        010
3        011
4        100
5        101
6        110
7        111
Converting from octal to binary proceeds in the same fashion as it does for hexadecimal:
And from binary to octal:
And from octal to decimal:
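As a sketch in Python (645_{8} is an arbitrary example value):

```python
x = 0o645              # octal literal: 645 in base 8
print(format(x, "b"))  # 110100101 — octal → binary, digit by digit: 6→110, 4→100, 5→101
print(format(x, "o"))  # 645 — integer back to octal
print(x)               # 421 — octal → decimal: 6×8² + 4×8¹ + 5×8⁰
```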
1 × 2^{1}     (1 × 2 = 2)        plus
1 × 2^{0}     (1 × 1 = 1)        plus
0 × 2^{−1}    (0 × 0.5 = 0)      plus
1 × 2^{−2}    (1 × 0.25 = 0.25)
For a total of 3.25 decimal.
All dyadic fractions $\frac{p}{2^a}$ have a terminating binary numeral—the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representations as well, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance
The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radixbased numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2^{−1} + 2^{−2} + 2^{−3} + ... which is 1.
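The series in question can be summed explicitly as a geometric series with ratio $\tfrac{1}{2}$:

```latex
0.111\ldots_2 \;=\; \sum_{k=1}^{\infty} 2^{-k}
           \;=\; \frac{2^{-1}}{1 - 2^{-1}} \;=\; 1 .
```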
Binary numerals which neither terminate nor recur represent irrational numbers. For instance,

