In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
History
The artificial neuron was introduced in 1943 by Warren McCulloch and Walter Pitts in "A logical calculus of the ideas immanent in nervous activity".
In 1957, Frank Rosenblatt was at the Cornell Aeronautical Laboratory, where he simulated the perceptron on an IBM 704. He later obtained funding from the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center to build a custom-made computer, the Mark I Perceptron, which was first publicly demonstrated on 23 June 1960. The machine was "part of a previously secret four-year NPIC effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters".
Rosenblatt described the details of the perceptron in a 1958 paper. His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, and R, which stand for "projection area", "association" and "response". He presented the perceptron at Mechanisation of Thought Processes, the first international symposium on AI, which took place in November 1958.[Frank Rosenblatt, "Two Theorems of Statistical Separability in the Perceptron", Symposium on the Mechanization of Thought Processes, National Physical Laboratory, Teddington, UK, November 1958, vol. 1, H. M. Stationery Office, London, 1959.]
Rosenblatt's project was funded under Contract Nonr-401(40), "Cognitive Systems Research Program", which lasted from 1959 to 1970,[Rosenblatt, Frank, and Cornell University, Ithaca, NY. Cognitive Systems Research Program. Technical report, Cornell University, 72, 1971.] and Contract Nonr-2381(00), "Project PARA" ("PARA" stands for "Perceiving and Recognition Automata"), which lasted from 1957 to 1963.[Muerle, John Ludwig, and Cornell Aeronautical Laboratory, Buffalo, NY. Project PARA, Perceiving and Recognition Automata. Cornell Aeronautical Laboratory, 1963.]
In 1959, the Institute for Defense Analyses awarded his group a $10,000 contract. By September 1961, the ONR had awarded a further $153,000 worth of contracts, with $108,000 committed for 1962.
The ONR research manager, Marvin Denicoff, stated that it was ONR, rather than DARPA, that funded the Perceptron project, because the project was unlikely to produce technological results in the near or medium term. Funding from ARPA went up to the order of millions of dollars, while funding from ONR was on the order of 10,000 dollars. Meanwhile, the head of IPTO at ARPA, J.C.R. Licklider, was interested in 'self-organizing', 'adaptive' and other biologically-inspired methods in the 1950s; but by the mid-1960s he was openly critical of these, including the perceptron. Instead he strongly favored the logical AI approach of Herbert A. Simon and Allen Newell.
Mark I Perceptron machine
The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark I Perceptron with the project name "Project PARA", designed for image recognition. The machine is currently in the Smithsonian National Museum of American History.
The Mark I Perceptron had three layers. One version was implemented as follows:
- An array of 400 photocells arranged in a 20×20 grid, named "sensory units" (S-units), or "input retina". Each S-unit can connect to up to 40 A-units.
- A hidden layer of 512 perceptrons, named "association units" (A-units).
- An output layer of eight perceptrons, named "response units" (R-units).
Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with.
The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard (see photo), to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned. Rosenblatt was adamant about the random connections, as he believed the retina was randomly connected to the visual cortex, and he wanted his perceptron machine to resemble human visual perception.
The A-units are connected to the R-units with adjustable weights encoded in potentiometers, and weight updates during learning were performed by electric motors. The hardware details are given in an operators' manual.
In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
From 1960 to 1964, the Photo Division of the Central Intelligence Agency studied the use of the Mark I Perceptron machine for recognizing militarily interesting silhouetted targets (such as planes and ships) in aerial photos.
Principles of Neurodynamics (1962)
Rosenblatt described his experiments with many variants of the perceptron machine in the book Principles of Neurodynamics (1962). The book is a published version of the 1961 report.[Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, by Frank Rosenblatt, Report Number VG-1196-G-8, Cornell Aeronautical Laboratory, published on 15 March 1961. The work reported in this volume has been carried out under Contract Nonr-2381(00) (Project PARA) at C.A.L. and Contract Nonr-401(40) at Cornell University.]
Among the variants are:
- "cross-coupling" (connections between units within the same layer), with possibly closed loops,
- "back-coupling" (connections from units in a later layer to units in a previous layer),
- four-layer perceptrons where the last two layers have adjustable weights (and thus a proper multilayer perceptron),
- incorporating time-delays to perceptron units, to allow for processing sequential data,
- analyzing audio (instead of images).
The machine was shipped from Cornell to the Smithsonian in 1967, under a government transfer administered by the Office of Naval Research.
Perceptrons (1969)
Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron).
Single-layer perceptrons are only capable of learning linearly separable patterns. For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems.
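To make the separability limit concrete, the following sketch (an illustrative Python example; the learning rate, epoch count, and function names are assumptions, not taken from any historical implementation described above) applies the standard perceptron update rule to the linearly separable AND problem, where it converges, and to XOR, where it cannot:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Standard perceptron update rule: w += lr * (target - prediction) * x."""
    X = np.hstack([X, np.ones((len(X), 1))])   # fold the bias in as a constant input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if w @ xi > 0 else 0       # Heaviside threshold
            w += lr * (target - pred) * xi
            errors += int(pred != target)
        if errors == 0:                          # every point classified correctly
            return w, True
    return w, False                              # never managed to separate the data

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, np.array([0, 0, 0, 1]))[1])  # AND: True (linearly separable)
print(train_perceptron(X, np.array([0, 1, 1, 0]))[1])  # XOR: False (not separable)
```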
In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s. This text was reprinted in 1987 as "Perceptrons - Expanded Edition" where some errors in the original text are shown and corrected.
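For contrast, a two-layer network with hand-set weights can compute XOR, as Minsky and Papert knew; the sketch below (illustrative weights, one of many possible choices, not a construction taken from their book) feeds an OR unit and a NAND unit into an AND unit:

```python
def step(z):
    """Heaviside threshold: 1 if the weighted sum is positive, else 0."""
    return 1 if z > 0 else 0

def xor_two_layer(x1, x2):
    h_or   = step(x1 + x2 - 0.5)       # hidden unit 1: fires if x1 OR x2
    h_nand = step(-x1 - x2 + 1.5)      # hidden unit 2: fires unless both inputs are 1 (NAND)
    return step(h_or + h_nand - 1.5)   # output unit: AND of the two hidden units

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_two_layer(a, b))   # prints 0, 1, 1, 0
```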
Subsequent work
Rosenblatt continued working on perceptrons despite diminishing funding. The last attempt was Tobermory, built between 1961 and 1967 for speech recognition.[Rosenblatt, Frank (1962). "A Description of the Tobermory Perceptron." Cognitive Research Program, Report No. 4, Collected Technical Papers, Vol. 2, edited by Frank Rosenblatt. Ithaca, NY: Cornell University.] It occupied an entire room.[Nagy, George (1963). "System and circuit designs for the Tobermory perceptron." Technical Report No. 5, Cognitive Systems Research Program, Cornell University, Ithaca, NY.] It had four layers with 12,000 weights implemented by toroidal magnetic cores. By the time of its completion, simulation on digital computers had become faster than purpose-built perceptron machines.[Nagy, George (1991). "Neural networks - then and now." IEEE Transactions on Neural Networks 2.2: 316-318.] Rosenblatt died in a boating accident in 1971.
A simulation program for neural networks was written for the IBM 7090/7094 and was used to study various pattern recognition applications, such as character recognition; particle tracks in bubble chamber photographs; phoneme, isolated-word, and continuous speech recognition; speaker verification; and foveated imaging.
The kernel perceptron algorithm had already been introduced in 1964 by Aizerman et al. Margin bound guarantees were given for the perceptron algorithm in the general non-separable case first by Yoav Freund and Robert Schapire (1998), and more recently by Mehryar Mohri and Afshin Rostamizadeh (2013), who extended previous results and gave new and more favorable L1 bounds.[Foundations of Machine Learning, MIT Press (Chapter 8).]
The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons.
The solution spaces of decision boundaries for all binary functions and learning behaviors have also been studied.
Definition
In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input $\mathbf{x}$ (a real-valued vector) to an output value $f(\mathbf{x})$ (a single binary value):

$$f(\mathbf{x}) = h(\mathbf{w} \cdot \mathbf{x} + b)$$

where $h$ is the Heaviside step function (an input greater than 0 outputs 1; otherwise the output is 0), $\mathbf{w}$ is a vector of real-valued weights, $\mathbf{w} \cdot \mathbf{x}$ is the dot product $\sum_{i=1}^{m} w_i x_i$, where $m$ is the number of inputs to the perceptron, and $b$ is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value.

Equivalently, since $\mathbf{w} \cdot \mathbf{x} + b = (\mathbf{w}, b) \cdot (\mathbf{x}, 1)$, we can add the bias term $b$ as another weight and add a coordinate $1$ to each input $\mathbf{x}$, and then write the classifier as a linear classifier that passes through the origin:

$$f(\mathbf{x}) = h(\mathbf{w} \cdot \mathbf{x})$$

The binary value of $f(\mathbf{x})$ (0 or 1) is used to perform binary classification on $\mathbf{x}$ as either a positive or a negative instance. Spatially, the bias shifts the position (though not the orientation) of the planar decision boundary.
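A minimal sketch of the threshold function and the bias-folding trick, assuming a NumPy-based implementation (the function names predict and predict_folded and the sample values are illustrative):

```python
import numpy as np

def predict(w, x, b):
    """Threshold unit: output 1 if w . x + b > 0, else 0 (Heaviside step)."""
    return 1 if np.dot(w, x) + b > 0 else 0

def predict_folded(w_aug, x):
    """Same classifier with the bias folded in as an extra weight,
    acting on inputs augmented with a constant coordinate of 1."""
    return 1 if np.dot(w_aug, np.append(x, 1.0)) > 0 else 0

w, b = np.array([2.0, -1.0]), -0.5
x = np.array([1.0, 1.0])
assert predict(w, x, b) == predict_folded(np.append(w, b), x) == 1
```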
In the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network.
Power of representation
Information theory
From an information theory point of view, a single perceptron with K inputs has a capacity of 2K bits of information.
This result is due to Thomas Cover.
Specifically, let $T(N, K)$ be the number of ways to linearly separate $N$ points in $K$ dimensions; then