The bit (a portmanteau of binary digit) is a basic unit of information used in computing and digital communications. A binary digit can have only one of two values, and may be physically represented with a two-state device. These state values are most commonly represented as either a 0 or a 1.
The two values of a binary digit can also be interpreted as truth values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or computing device is a matter of convention, and different assignments may be used even within the same device or computer program. The length of a binary number may be referred to as its bit-length.
In information theory, one bit is typically defined as the information entropy of a binary random variable that is 0 or 1 with equal probability, [John B. Anderson, Rolf Johannesson (2006), Understanding Information Transmission] or the information that is gained when the value of such a variable becomes known. [Simon Haykin (2006), Digital Communications] [IEEE Std 260.1-2004]
Confusion often arises because the words bit and binary digit are used interchangeably. Within information theory, however, they are fundamentally different types of entities. A binary digit is a number that can adopt one of two possible values (0 or 1), whereas a bit is the maximum amount of information that can be conveyed by a binary digit. By analogy, a binary digit is like a container, whereas a bit is the amount of matter in the container.
In quantum computing, a quantum bit or qubit is a quantum system that can exist in a superposition of two classical (i.e., non-quantum) bit values.
The symbol for binary digit is either simply bit (recommended by the IEC 80000-13:2008 standard) or lowercase b (recommended by the IEEE 1541-2002 and IEEE Std 260.1-2004 standards). A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.
As a unit of information in information theory, the bit has alternatively been called a shannon, named after Claude Shannon, the founder of the field of information theory. This usage distinguishes the quantity of information from the form of the state variables used to represent it. When the logical values are not equally probable, or when a signal is not conveyed perfectly through a communication system, a binary digit in the representation of the information will convey less than one bit of information. In practice, however, the shannon unit terminology is uncommon.
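To illustrate, the amount of information conveyed by a binary digit that is 1 with probability p is given by the binary entropy function H(p) = −p log₂(p) − (1 − p) log₂(1 − p). A minimal Python sketch (the function name is illustrative):

```python
import math

def binary_entropy(p):
    """Information, in bits (shannons), conveyed by a binary digit
    that is 1 with probability p and 0 with probability 1 - p."""
    if p in (0.0, 1.0):
        return 0.0  # a perfectly predictable digit conveys no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0    -- equiprobable values: a full bit
print(binary_entropy(0.1))  # ~0.469 -- a biased digit conveys less than one bit
```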
The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Hermann Hollerith, and early computer manufacturers like IBM. Another variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870).
Ralph Hartley suggested the use of a logarithmic measure of information in 1928.
[Norman Abramson (1963), Information theory and coding. McGraw-Hill.] Claude E. Shannon first used the word bit in his seminal 1948 paper A Mathematical Theory of Communication.
He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Vannevar Bush had written in 1936 of "bits of information" that could be stored on the punched cards used in the mechanical computers of that time. The first programmable computer, built by Konrad Zuse, used binary notation for numbers.
Physical representation
A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double-stranded DNA, etc.
Bits can be implemented in several forms. In most modern computing devices, a bit is usually represented by an electrical voltage or current pulse, or by the electrical state of a flip-flop circuit.
For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to the representation of 0. The specific voltages are different for different logic families and variations are permitted to allow for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 volts and no lower than 2.6 volts, respectively; while TTL inputs are specified to recognize 0.8 volts or below as 0 and 2.2 volts or above as 1.
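As a concrete reading of those thresholds, here is a hedged Python sketch (the function and its three-way return value are illustrative, not part of any TTL specification):

```python
def ttl_input_value(volts):
    """Classify a voltage presented at a TTL input, per the thresholds above."""
    if volts <= 0.8:
        return 0
    if volts >= 2.2:
        return 1
    return None  # between thresholds: the input value is not guaranteed

print(ttl_input_value(0.4))  # 0 -- a worst-case TTL low output is still read as 0
print(ttl_input_value(2.6))  # 1 -- a worst-case TTL high output is still read as 1
```

The gap between the output levels (0.4 V and 2.6 V) and the input thresholds (0.8 V and 2.2 V) is the noise margin that lets a degraded signal still be read correctly.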
Transmission and processing
Bits are transmitted one at a time in serial transmission, and by multiple bits simultaneously in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
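For example, a bitwise operation applies the same logical function independently at each bit position; a minimal Python illustration:

```python
a = 0b1100
b = 0b1010

print(bin(a & b))   # 0b1000  -- AND: 1 only where both bits are 1
print(bin(a | b))   # 0b1110  -- OR:  1 where either bit is 1
print(bin(a ^ b))   # 0b110   -- XOR: 1 where the bits differ
print(bin(a << 1))  # 0b11000 -- shift left: each bit moves one position up
```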
Storage
In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a punched card or punched tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays, which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.
In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tape, magnetic drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards.
In modern semiconductor memory, such as dynamic random-access memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.
Unit and symbol
The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be bit, and this should be used in all multiples, such as kbit, for kilobit.
[National Institute of Standards and Technology (2008), Guide for the Use of the International System of Units. Online version.]
However, the lower-case letter b is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter B is the standard and customary symbol for byte.
Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer, and for this reason it was used as the basic addressable element in many computer architectures. The trend in hardware design converged on the most common implementation of using eight bits per byte, as is widely used today. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.
Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits.
The International System of Units defines a series of decimal prefixes for multiples of standardized units, which are commonly also used with the bit and the byte. The prefixes kilo- (10³) through yotta- (10²⁴) increment by multiples of 1000, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
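These decimal multiples are plain powers of 1000, as a short Python sketch shows (the table and function are illustrative):

```python
# Each SI prefix multiplies the bit by a power of 1000.
SI_PREFIXES = {"kbit": 10**3, "Mbit": 10**6, "Gbit": 10**9, "Tbit": 10**12}

def to_bits(value, unit):
    """Convert a quantity such as 64 kbit into plain bits."""
    return value * SI_PREFIXES[unit]

print(to_bits(64, "kbit"))  # 64000 -- 64 kilobits is exactly 64,000 bits
print(to_bits(2, "Mbit"))   # 2000000
```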
Information capacity and information compression
When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware's capacity to store binary data (0 or 1, up or down, current or not, etc.). The information capacity of a storage system is only an upper bound on the quantity of information stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information. Indeed, if the value is completely predictable, then reading that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is available). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content is the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more.
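As a worked instance of this bound (the file model is hypothetical): if each of n stored binary digits is 1 with probability 0.1, the file holds only about n · H(0.1) ≈ 0.469 n bits of information, so an ideal compressor could shrink it to roughly 47% of its stored size. A Python sketch:

```python
import math

def information_content(n_digits, p_one):
    """Approximate information, in bits, carried by n_digits independent
    binary digits that are each 1 with probability p_one."""
    h = -p_one * math.log2(p_one) - (1 - p_one) * math.log2(1 - p_one)
    return n_digits * h

# A hypothetical file of 8000 stored digits, each 1 only 10% of the time:
print(information_content(8000, 0.1))  # ~3752 bits -- the ideal compressed size
```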
For example, it was estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this represents only 295 exabytes of information. [Martin Hilbert and Priscila López (2011), "The World's Technological Capacity to Store, Communicate, and Compute Information", Science, 332(6025), 60–65, especially the supporting online material; free access to the article via martinhilbert.net/WorldInfoCapacity.html]
When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
Certain bitwise computer processor instructions (such as bit set) operate at the level of manipulating bits rather than manipulating data interpreted as an aggregate of bits.
In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer ("bitblt" or "blit") instructions to set or copy the bits that corresponded to a given rectangular area on the screen.
In most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context.
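A minimal Python sketch of such single-bit manipulations, using the convention that bit 0 is the least significant bit (the helper names are illustrative):

```python
def set_bit(word, i):
    return word | (1 << i)   # force bit i to 1

def clear_bit(word, i):
    return word & ~(1 << i)  # force bit i to 0

def test_bit(word, i):
    return (word >> i) & 1   # read bit i as 0 or 1

w = set_bit(0b0100, 0)                         # 0b0101
print(bin(w), test_bit(w, 2), test_bit(w, 1))  # 0b101 1 0
print(bin(clear_bit(w, 2)))                    # 0b1
```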
Other information units
Other units of information, sometimes used in information theory, include the natural digit, also called a nat and defined as log₂ e (≈ 1.443) bits, where e is the base of the natural logarithms; and the dit, or hartley, defined as log₂ 10 (≈ 3.322) bits. This value, slightly less than 10/3, may be understood because 10³ = 1000 ≈ 1024 = 2¹⁰: three decimal digits carry slightly less information than ten binary digits, so one decimal digit is slightly less than 10/3 binary digits. Conversely, one bit of information corresponds to about ln 2 (≈ 0.693) nats, or log₁₀ 2 (≈ 0.301) hartleys. As with the inverse ratio, this value, slightly more than 3/10, corresponds to the fact that 2¹⁰ = 1024 ≈ 1000 = 10³: ten binary digits carry slightly more information than three decimal digits, so one binary digit is slightly more than 3/10 decimal digits. Some authors also define a binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits.
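These conversion factors can be checked directly in Python (the constant names are illustrative):

```python
import math

BITS_PER_NAT = math.log2(math.e)  # ~1.443 bits in one nat
BITS_PER_DIT = math.log2(10)      # ~3.322 bits in one dit (hartley)

print(BITS_PER_NAT)       # 1.4426950408889634
print(BITS_PER_DIT)       # 3.321928094887362
print(1 / BITS_PER_NAT)   # ~0.693 nats per bit     (= ln 2)
print(1 / BITS_PER_DIT)   # ~0.301 hartleys per bit (= log10 2)
```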
See also
Integer (computer science)
Primitive data type
Trit (trinary digit)
Entropy (information theory)
Baud (symbols per second)
Binary numeral system
Ternary numeral system
External links
Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte
BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units