Ethernet is a family of wired technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980, standardized by ECMA in 1982 as ECMA-82, and shortly afterwards by the IEEE as IEEE 802.3. It is a good example of an open standard.
Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.
The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both less expensive and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with network switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest Terabit Ethernet, with still higher rates under development. The Ethernet standards include several wiring and signaling variants of the physical layer.
Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.
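As an illustration of this frame structure, the following Python sketch parses the header fields of an Ethernet II frame; the function name and the dictionary layout are illustrative, not part of any standard API.

```python
import struct

def parse_ethernet_frame(frame: bytes) -> dict:
    """Parse an Ethernet II frame: 6-byte destination MAC, 6-byte source MAC,
    2-byte EtherType, payload, and a trailing 4-byte frame check sequence (FCS)."""
    if len(frame) < 18:  # 14-byte header plus 4-byte FCS at minimum
        raise ValueError("frame too short")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "destination": dst.hex(":"),
        "source": src.hex(":"),
        "ethertype": hex(ethertype),   # e.g. 0x0800 for IPv4
        "payload": frame[14:-4],
        "fcs": frame[-4:],             # used to detect damaged frames
    }
```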
Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.
In 1972, Robert Metcalfe and David Boggs adapted the ALOHAnet approach to transmission over a shared coaxial cable at the Xerox Palo Alto Research Center (Xerox PARC). This network connected Xerox Alto computers using a coaxial cable. It first ran on May 22, 1973 with a bit rate of 2.94 Mbit/s. In a memo written at that time, Metcalfe named the concept "Ethernet." The name was inspired by the former idea that the universe was filled with a "luminiferous aether" that carried electromagnetic waves, and calling it Ethernet emphasized its ability to run over any transmission medium. Ethernet improved on the original ALOHAnet design because a sender would first listen to the channel to determine whether it was already in use. The combination of this new idea of carrier sense with the multiple access and collision detection of ALOHAnet became carrier-sense multiple access with collision detection, or CSMA/CD.
In 1975, Metcalfe, Boggs and their colleagues Charles Thacker and Butler Lampson filed for a patent on Ethernet, "Multipoint data communication system with collision detection", which was granted in 1977. By 1976, 100 Altos at Xerox PARC were connected using Ethernet. In July 1976, Metcalfe and Boggs published the seminal paper Ethernet: Distributed Packet Switching for Local Computer Networks in Communications of the ACM (CACM). Subsequently, between 1976 and 1978, Ron Crane, Bob Garner, Hal Murray, and Roy Ogus designed a 10 Mbit/s version of Ethernet running over coaxial cable.
There were multiple local area network technologies in the 1970s. These included IBM's Token Ring, Network Systems Corporation's HYPERchannel and Datapoint's ARCNET. All were proprietary at the time. Metcalfe, Gordon Bell, and David Liddle developed a strategy of standardizing Ethernet rather than keeping it vendor-specific, and convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together on a standard, subsequently known as the DIX standard, based on the 10 Mbit/s version of Ethernet and published in 1980 as the Ethernet Blue Book. Version 2 was published in November 1982.
In June 1981, the Institute of Electrical and Electronics Engineers (IEEE) Project 802 (for local area network standards) created an 802.3 subcommittee to produce an Ethernet standard based on DIX. In 1983, a standard was published for 10 Mbit/s Ethernet over a coaxial cable of up to 500 meters (10BASE5). It differed only in some details from the DIX standard. As part of the standardization process, Xerox turned over all its Ethernet patents to the IEEE, and anyone can implement 802.3. IEEE 802.3 is now generally considered synonymous with Ethernet. The cooperation of Xerox with Intel and Digital on the Ethernet standard ultimately made it a truly open standard.
In June 1979, Metcalfe left Xerox to found the Computer, Communication, and Compatibility Corporation, better known as 3Com, along with Howard Charney, Ron Crane, Greg Shaw, and Bill Kraus. Metcalfe's vision was to sell Ethernet adapters for all personal computers. Apple quickly agreed, but IBM was committed to its own LAN technology, Token Ring. Nonetheless, 3Com developed the EtherLink ISA adapter and started shipping it with DOS driver software, making it usable on IBM PCs.
The EtherLink adapter had several advantages over competitors. It was the first network interface card (NIC) to use VLSI semiconductor technology (developed in partnership with Seeq Technologies). This meant most of the functions, including the transceiver, could be contained on a single chip, so the price of the EtherLink ($950) was significantly lower than that of its competitors. 3Com introduced a new, thinner coaxial cable for the card, called Thin Ethernet, making it more convenient to install and use. Finally, the EtherLink was the first Ethernet adapter for the IBM PC.
Because both businesses and home users adopted the IBM PC, its market expanded rapidly, and by 1982, IBM was shipping 200,000 units a month. Since IBM had not realized that businesses would want the computers connected by a network, EtherLink sales filled the vacuum, and in 1984 3Com was able to file for a public stock offering. The EtherLink approach was standardized by the IEEE as 10BASE2 in 1984.
Also in the early 1980s, Novell began selling Network Interface Cards (NICs) to go with its NetWare operating system. These NE2000 NICs were all Ethernet, and because NetWare became an important application for businesses, this increased the demand for Ethernet adapters. Then in 1989, Novell sold its NIC business and licensed the NE2000 card, creating a highly competitive market and driving the price of Ethernet cards down, while cards for other technologies such as IBM's token ring remained high.
Starting in late 1983, AT&T and NCR promoted a star configuration using unshielded twisted pair (UTP) cabling, or regular telephone wire. This became StarLAN, running at 1 Mbit/s over cables up to 500 meters, and was standardized as 1BASE5 by IEEE 802.3. On August 17, 1987, SynOptics introduced LATTISNET, with 10 Mbit/s Ethernet also running over regular telephone wire (UTP). In the fall of 1990, the IEEE issued the 802.3i standard for 10BASE-T, Ethernet over twisted pairs, and the following year, Ethernet sales nearly doubled. By 1992, Ethernet was the de facto standard for LANs.
In the 1990s, the proliferation of PCs combined with their increasing power drove demand for much faster network infrastructure. The Kalpana EtherSwitch EPS-700 helped to meet this demand by increasing the speed of Ethernet dramatically. The switch allowed multiple simultaneous data transmission paths and used the faster cut-through bridging technique in place of store-and-forward. The switch was marketed as a way to improve network performance rather than as a way to connect different LANs, creating a new market category. Then in 1993, Kalpana introduced full-duplex mode for switches, potentially doubling the data transmission rate. In 1997, the IEEE standardized full-duplex operation and flow control for switched Ethernet as 802.3x.
The 10 Mbit/s rate of Ethernet was still too slow for some networks, though, and most larger networks planned to use FDDI, a very expensive alternative to Ethernet. In August 1991, Howard Charney, David Boggs, Ron Crane, and Larry Birenbaum founded Grand Junction Networks to build and market 100 Mbit/s Ethernet equipment. Their announcement in 1992 triggered a standards war over whether to maintain backward compatibility with the original Ethernet CSMA/CD standard or to adopt a demand-priority protocol pushed by Hewlett-Packard and AT&T. Since the competing groups were unable to come to an agreement, the IEEE set up a new group, 802.12, for the demand-priority scheme. The supporters of backward compatibility formed the Fast Ethernet Alliance in 1993 to publish an interoperability specification that became the 100BASE-TX standard. At the same time, Grand Junction shipped the first Fast Ethernet hubs and NICs, and more companies announced Fast Ethernet equipment. In 1994, Sun Microsystems, followed by 3Com, DEC and others, shipped 100BASE-TX-compliant products, and in 1995 the IEEE approved the 802.3u specification for Fast Ethernet.
The development of the CSMA/CD standard was slowed by conflict over issues such as baseband versus broadband and the lengths of address fields. Some members of the DIX group became impatient with the process and concerned that the ultimate CSMA/CD standard would differ significantly from their "Blue Book" de facto standard. They turned instead to the European Computer Manufacturers Association (ECMA), where Friedrich Röscheisen of Siemens helped to introduce the Blue Book as a candidate standard to a newly created "Local Networks" Task Group (TC24). Gary Robinson later claimed to have instigated the effort to convince ECMA to standardize CSMA/CD. ECMA approved a standard in June 1982 that was very close to the DIX de facto standard. Because the DIX proposal was the most technically complete and because of the speedy action taken by ECMA, the IEEE group felt compelled to approve the 802.3 CSMA/CD standard in December 1982. It differed only slightly from the DIX standard in terminology and frame format. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985 (IEEE 802.3-2008, p. iv).
Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Ingrid Fromm, Siemens' representative to IEEE 802, as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published on March 23, 1989.
The IEEE has approved changes to its 802.3 (Ethernet) standard regularly since 1985. The current standard is available from the IEEE website. With each change to the standard, the IEEE first issues a supplement with a letter designation added to IEEE 802.3. For example, IEEE 802.3u refers to Fast Ethernet. Then when the supplement is formally approved, it is merged with the main standard.
Subsequent standards have provided for ever-faster versions of Ethernet, additional physical media, and network management. For a list of IEEE Ethernet standards, see IEEE 802.3.
Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with a globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations.
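The address filtering just described can be sketched in Python as follows; the function and its arguments are illustrative and greatly simplified compared with a real network interface.

```python
BROADCAST = bytes.fromhex("ffffffffffff")

def should_accept(dst: bytes, own_mac: bytes, joined_groups=frozenset(),
                  promiscuous: bool = False) -> bool:
    """Decide whether a station keeps a received frame."""
    if promiscuous:                 # e.g. a protocol analyzer accepts everything
        return True
    if dst == own_mac or dst == BROADCAST:
        return True
    # Group (multicast) addresses have the least significant bit of the
    # first octet set; accept only groups this station has joined.
    if dst[0] & 0x01:
        return dst in joined_groups
    return False
```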
An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants.
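A minimal sketch of EtherType-based demultiplexing follows; the handler table is hypothetical, while the EtherType values shown are the registered ones for IPv4, ARP and IPv6.

```python
ETHERTYPE_HANDLERS = {
    0x0800: lambda payload: print("IPv4 packet of", len(payload), "bytes"),
    0x0806: lambda payload: print("ARP packet"),
    0x86DD: lambda payload: print("IPv6 packet"),
}

def dispatch(ethertype: int, payload: bytes) -> None:
    """Hand the payload to the protocol module named by the EtherType."""
    handler = ETHERTYPE_HANDLERS.get(ethertype)
    if handler is not None:
        handler(payload)
    # frames with an unknown EtherType are simply dropped
```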
Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card.
The original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to connect every attached machine. A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable.
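The CSMA/CD procedure can be sketched roughly as follows; the Medium object and its busy, transmit and send_jam methods are hypothetical stand-ins for the shared cable, and the timing is greatly simplified.

```python
import random
import time

SLOT_TIME = 51.2e-6  # 512 bit times at 10 Mbit/s

def csma_cd_send(medium, frame: bytes) -> bool:
    """Simplified CSMA/CD transmit loop with truncated binary exponential backoff."""
    for attempt in range(16):          # give up after 16 attempts
        while medium.busy():           # carrier sense: defer while the channel is in use
            time.sleep(SLOT_TIME)
        if medium.transmit(frame):     # True means no collision was detected
            return True
        medium.send_jam()              # reinforce the collision so all stations see it
        k = min(attempt + 1, 10)       # backoff window doubles, capped at 2**10 slots
        time.sleep(random.randint(0, 2**k - 1) * SLOT_TIME)
    return False                       # excessive collisions: report failure
```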
Through the first half of the 1980s, Ethernet's 10BASE5 implementation utilised a coaxial cable about 9.5 mm (0.375 in) in diameter, later referred to as thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly.
Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active.
A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The loss of data and retransmission reduce throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980, published in Communications of the ACM, studied the performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better.
In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if the station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full-duplex mode of operation, which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, a switch and a station can send and receive simultaneously, and therefore modern Ethernet networks are completely collision-free.
Shared cable Ethernet was always hard to install in offices because its bus topology conflicted with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted-pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s.
Ethernet on unshielded twisted pair (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted-pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network.
Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily the generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed.
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants.
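The learning behaviour described above can be illustrated with a short Python sketch; the LearningBridge class and its method names are hypothetical and omit aging of table entries, spanning tree and other details of a real 802.1D bridge.

```python
class LearningBridge:
    """Minimal transparent-bridging sketch: learn source addresses, then
    forward frames only toward the segment where the destination was seen."""

    BROADCAST = b"\xff" * 6

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                  # MAC address -> port it was learned on

    def ports_to_forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port    # learn where the sender lives
        out_port = self.mac_table.get(dst_mac)
        if dst_mac == self.BROADCAST or out_port is None:
            # Broadcast or unknown destination: flood to every other port.
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            return []                        # destination is on the same segment
        return [out_port]                    # forward only to the learned port
```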
In 1989, Motorola Codex introduced their 6310 EtherSpan, and Kalpana introduced their EtherSwitch; these were examples of the first commercial Ethernet switches. Early switches such as this used cut-through switching where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment. This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store and forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified and only then the packet is forwarded. In modern network equipment, this process is typically done using application-specific integrated circuits allowing packets to be forwarded at wire speed.
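A store-and-forward check can be sketched as follows; Ethernet's FCS is a CRC-32, and the sketch assumes the captured frame carries that CRC in its last four bytes in little-endian byte order, as it commonly appears in packet captures.

```python
import zlib

def fcs_ok(frame: bytes) -> bool:
    """Verify the frame check sequence over everything before the last 4 bytes."""
    data, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(data) == int.from_bytes(fcs, "little")

def store_and_forward(frame: bytes, forward) -> None:
    """Buffer the whole frame, verify its FCS, and only then forward it;
    corrupted frames are dropped rather than propagated."""
    if fcs_ok(frame):
        forward(frame)
```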
When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet). The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection.
Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding.
The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.
Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices.
Advanced networking features also ensure port security, provide protection features such as MAC lockdown and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure, and use link aggregation to add bandwidth to overloaded links and to provide some redundancy.
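For example, VLAN membership is carried in an IEEE 802.1Q tag inserted after the source address; the following sketch extracts its fields (the function name is illustrative).

```python
import struct

def parse_vlan_tag(frame: bytes):
    """Return the 802.1Q tag fields of a frame, or None if it is untagged."""
    (tpid,) = struct.unpack("!H", frame[12:14])
    if tpid != 0x8100:                 # 0x8100 marks a customer VLAN tag
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return {
        "priority": tci >> 13,         # PCP: 3-bit priority code point
        "dei": (tci >> 12) & 0x1,      # drop eligible indicator
        "vlan_id": tci & 0x0FFF,       # 12-bit VLAN identifier
    }
```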
In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers.
In many industrial systems, Ethernet and fieldbus coexist, each performing certain roles, with data exchanged between them through gateways.
The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively (IEEE 802.3 Clause 14, Twisted-pair medium attachment unit (MAU) and baseband medium, type 10BASE-T including type 10BASE-Te; Clause 25, Physical Medium Dependent (PMD) sublayer and baseband medium, type 100BASE-TX; Clause 40, Physical Coding Sublayer (PCS), Physical Medium Attachment (PMA) sublayer and baseband medium, type 1000BASE-T).
Fiber optic variants of Ethernet (which commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distances (tens of kilometers with some versions). In general, network protocol stack software works similarly on all varieties (IEEE 802.3, 4.3 Interfaces to/from adjacent layers).
A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches.
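One step of the Spanning Tree Protocol, electing the root bridge, can be sketched as follows: the bridge with the numerically lowest bridge ID (priority first, then MAC address) becomes the root. The data layout below is illustrative; real STP also computes path costs and port roles.

```python
def elect_root_bridge(bridges):
    """Return the bridge with the lowest (priority, MAC) bridge ID."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"priority": 32768, "mac": "00:11:22:33:44:55"},
    {"priority": 4096,  "mac": "00:aa:bb:cc:dd:ee"},
    {"priority": 32768, "mac": "00:00:5e:00:53:01"},
]
print(elect_root_bridge(bridges)["mac"])   # -> 00:aa:bb:cc:dd:ee
```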