In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field (or vector-valued function) $\nabla f$ whose value at a point $p$ gives the direction and the rate of fastest increase. The gradient transforms like a vector under a change of basis of the space of variables of $f$. If the gradient of a function is non-zero at a point $p$, the direction of the gradient is the direction in which the function increases most quickly from $p$, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative.
Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function $f(\mathbf{r})$ may be defined by
$$df = \nabla f \cdot d\mathbf{r},$$
where $df$ is the total infinitesimal change in $f$ for an infinitesimal displacement $d\mathbf{r}$, and is seen to be maximal when $d\mathbf{r}$ is in the direction of the gradient $\nabla f$. The nabla symbol $\nabla$, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.
When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of $f$ at $p$.
That is, for $f \colon \mathbb{R}^n \to \mathbb{R}$, its gradient $\nabla f \colon \mathbb{R}^n \to \mathbb{R}^n$ is defined at the point $p = (x_1, \ldots, x_n)$ in $n$-dimensional space as the vector
$$\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.$$
The above definition of the gradient is valid only if $f$ is differentiable at $p$. There can be functions for which partial derivatives exist in every direction, yet the function fails to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account.
For example, the function
$$f(x, y) = \frac{x^2 y}{x^2 + y^2}$$
unless $(x, y) = (0, 0)$, where $f(0, 0) = 0$, is not differentiable at the origin, as it does not have a well-defined tangent plane there despite having well-defined partial derivatives in every direction at the origin. In this particular example, under rotation of the x-y coordinate system, the above formula for the gradient fails to transform like a vector (the gradient becomes dependent on the choice of basis) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions where the formula for the gradient holds, it can be shown to always transform as a vector under a change of basis, so as to always point towards the fastest increase.
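The failure of differentiability here can be checked numerically. The following minimal sketch (helper names are our own) estimates directional derivatives of this function at the origin by one-sided difference quotients: both partial derivatives vanish, yet the derivative along a diagonal does not, which is incompatible with the value $\nabla f(0,0) \cdot \mathbf{v} = 0$ that differentiability would require.

```python
import math

# f(x, y) = x^2 y / (x^2 + y^2), with f(0, 0) = 0 (the example above).
def f(x, y):
    return 0.0 if x == 0 and y == 0 else x * x * y / (x * x + y * y)

def directional_derivative_at_origin(a, b, h=1e-6):
    # One-sided difference quotient along the unit vector (a, b).
    return f(h * a, h * b) / h

print(directional_derivative_at_origin(1, 0))  # ~0.0 : partial w.r.t. x
print(directional_derivative_at_origin(0, 1))  # ~0.0 : partial w.r.t. y

# Along the diagonal the derivative is 1/(2*sqrt(2)) != 0, so the gradient
# formula (which would predict 0) fails: f is not differentiable at the origin.
s = 1 / math.sqrt(2)
print(directional_derivative_at_origin(s, s))  # ~0.35355
```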
The gradient is dual to the total derivative $df$: the value of the gradient at a point is a tangent vector (a vector at each point), while the value of the derivative at a point is a cotangent vector (a linear functional on vectors). They are related in that the dot product of the gradient of $f$ at a point $p$ with another tangent vector $\mathbf{v}$ equals the directional derivative of $f$ at $p$ along $\mathbf{v}$; that is,
$$\nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial \mathbf{v}}(p) = df_p(\mathbf{v}).$$
The gradient admits multiple generalizations to more general functions on manifolds; see the generalizations below.
Consider a surface whose height above sea level at point $(x, y)$ is $H(x, y)$. The gradient of $H$ at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.
The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, as the dot product measures how much the unit vector along the road aligns with the steepest slope, which is 40% times the cosine of 60°, or 20%.
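As a worked instance of this arithmetic: with steepest slope $\lVert \nabla H \rVert = 0.40$ and a road at $60^\circ$ from the uphill direction,
$$\text{slope along road} = \lVert \nabla H \rVert \cos 60^\circ = 0.40 \times 0.5 = 0.20 = 20\%.$$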
More generally, if the hill height function $H$ is differentiable, then the gradient of $H$ dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of $H$ along the unit vector:
$$\nabla H(p) \cdot \mathbf{v} = \frac{\partial H}{\partial \mathbf{v}}(p),$$
where the right-hand side is the directional derivative, and there are many ways to represent it. Formally, the derivative is dual to the gradient; see the relationship with the derivative below.
When a function also depends on a parameter such as time, the gradient often refers to the vector of its spatial derivatives only (see spatial gradient).
The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by
$$\nabla f = \frac{\partial f}{\partial x}\,\mathbf{i} + \frac{\partial f}{\partial y}\,\mathbf{j} + \frac{\partial f}{\partial z}\,\mathbf{k},$$
where $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ are the standard basis unit vectors in the directions of the $x$, $y$ and $z$ coordinates, respectively. For example, the gradient of the function $f(x, y, z) = 2x + 3y^2 - \sin(z)$ is
$$\nabla f(x, y, z) = 2\mathbf{i} + 6y\,\mathbf{j} - \cos(z)\,\mathbf{k}$$
or
$$\nabla f(x, y, z) = \begin{bmatrix} 2 \\ 6y \\ -\cos z \end{bmatrix}.$$
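The example above can be checked symbolically. A minimal sketch, assuming SymPy is available; the variable names are ours:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*x + 3*y**2 - sp.sin(z)

# The gradient is the vector of partial derivatives of f.
grad_f = [sp.diff(f, var) for var in (x, y, z)]
print(grad_f)  # [2, 6*y, -cos(z)]
```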
In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
In cylindrical coordinates with a Euclidean metric, the gradient is given by
$$\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho}\,\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\,\mathbf{e}_\varphi + \frac{\partial f}{\partial z}\,\mathbf{e}_z,$$
where $\rho$ is the axial distance, $\varphi$ is the azimuthal or azimuth angle, $z$ is the axial coordinate, and $\mathbf{e}_\rho$, $\mathbf{e}_\varphi$ and $\mathbf{e}_z$ are unit vectors pointing along the coordinate directions.
In spherical coordinates with a Euclidean metric, the gradient is given by
$$\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \varphi}\,\mathbf{e}_\varphi,$$
where $r$ is the radial distance, $\varphi$ is the azimuthal angle and $\theta$ is the polar angle, and $\mathbf{e}_r$, $\mathbf{e}_\theta$ and $\mathbf{e}_\varphi$ are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).
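As a sanity check, the spherical formula above can be compared against a plain Cartesian finite-difference gradient. This is a numerical sketch with a sample field and point of our own choosing:

```python
import numpy as np

def f_cart(p):
    x, y, z = p
    return x**2 + y * z                      # arbitrary sample scalar field

def f_sph(r, th, ph):
    # Evaluate the same field in spherical coordinates (r, theta, phi).
    return f_cart(np.array([r*np.sin(th)*np.cos(ph),
                            r*np.sin(th)*np.sin(ph),
                            r*np.cos(th)]))

r, th, ph, h = 2.0, 0.7, 1.1, 1e-6

# Components from the spherical-coordinate formula (central differences).
g_r  = (f_sph(r+h, th, ph) - f_sph(r-h, th, ph)) / (2*h)
g_th = (f_sph(r, th+h, ph) - f_sph(r, th-h, ph)) / (2*h) / r
g_ph = (f_sph(r, th, ph+h) - f_sph(r, th, ph-h)) / (2*h) / (r*np.sin(th))

# Local unit vectors e_r, e_theta, e_phi expressed in Cartesian components.
e_r  = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
e_th = np.array([np.cos(th)*np.cos(ph), np.cos(th)*np.sin(ph), -np.sin(th)])
e_ph = np.array([-np.sin(ph), np.cos(ph), 0.0])

grad_spherical = g_r*e_r + g_th*e_th + g_ph*e_ph

# Cartesian finite-difference gradient at the same point p = r*e_r.
p = r * e_r
grad_cart = np.array([(f_cart(p + h*np.eye(3)[i]) - f_cart(p - h*np.eye(3)[i])) / (2*h)
                      for i in range(3)])

print(np.allclose(grad_spherical, grad_cart, atol=1e-4))  # True
```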
For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
In general coordinates, which we write as $x^1, \ldots, x^n$, using Einstein notation the gradient can be written as
$$\nabla f = \frac{\partial f}{\partial x^i}\, g^{ij}\, \mathbf{e}_j$$
(note that its dual is $df = \frac{\partial f}{\partial x^i}\,\mathbf{e}^i$),
where $\mathbf{e}_i$ and $\mathbf{e}^i = dx^i$ refer to the unnormalized local covariant and contravariant bases respectively, $g^{ij}$ is the inverse metric tensor, and the Einstein summation convention implies summation over $i$ and $j$.
If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as $\hat{\mathbf{e}}_i$ and $\hat{\mathbf{e}}^i$, using the scale factors (also known as Lamé coefficients) $h_i = \lVert \mathbf{e}_i \rVert = \sqrt{g_{ii}} = 1 / \lVert \mathbf{e}^i \rVert$:
$$\nabla f = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i} \frac{1}{h_i}\, \hat{\mathbf{e}}_i \quad \left(\text{and } df = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i} \frac{1}{h_i}\, \hat{\mathbf{e}}^i\right),$$
where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, $\hat{\mathbf{e}}_i$, $\hat{\mathbf{e}}^i$, and $h_i$ are neither contravariant nor covariant.
The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
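For instance, in spherical coordinates the scale factors are $h_r = 1$, $h_\theta = r$, and $h_\varphi = r \sin\theta$, so this formula yields
$$\nabla f = \frac{\partial f}{\partial r}\,\hat{\mathbf{e}}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\mathbf{e}}_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \varphi}\,\hat{\mathbf{e}}_\varphi,$$
matching the spherical expression given above.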
While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, $\nabla f(p) \in T_p\mathbb{R}^n$, while the derivative is a map from the tangent space to the real numbers, $df_p \colon T_p\mathbb{R}^n \to \mathbb{R}$. The tangent spaces at each point of $\mathbb{R}^n$ can be "naturally" identified with the vector space $\mathbb{R}^n$ itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space $(\mathbb{R}^n)^*$ of covectors; thus the value of the gradient at a point can be thought of as a vector in the original $\mathbb{R}^n$, not just as a tangent vector.
Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient:
$$(df_p)(v) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(p)\, v_i = \nabla f(p) \cdot v.$$
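In code, the two computations are a $1 \times n$ matrix product and a dot product respectively; a small sketch with NumPy and made-up sample values:

```python
import numpy as np

grad_f = np.array([2.0, 6.0, -0.5])   # gradient of f at some point p (sample values)
v = np.array([1.0, -1.0, 2.0])        # a tangent vector at p

df = grad_f.reshape(1, 3)             # the derivative df_p as a row vector
print(df @ v)                         # matrix product: [-5.]
print(grad_f @ v)                     # dot product:     -5.0
```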
Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.
The gradient is related to the differential by the formula
$$(\nabla f)_x \cdot v = df_x(v)$$
for any $v \in \mathbb{R}^n$, where $\cdot$ is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.
If $\mathbb{R}^n$ is viewed as the space of (dimension $n$) column vectors (of real numbers), then one can regard $df$ as the row vector with components
$$\left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right),$$
so that $df_x(v)$ is given by matrix multiplication. Assuming the standard Euclidean metric on $\mathbb{R}^n$, the gradient is then the corresponding column vector, that is,
$$\nabla f = (df)^{\mathsf{T}}.$$
The best linear approximation to a differentiable function $f$ at a point $x_0$ is then
$$f(x) \approx f(x_0) + (\nabla f)_{x_0} \cdot (x - x_0)$$
for $x$ close to $x_0$, where $(\nabla f)_{x_0}$ is the gradient of $f$ computed at $x_0$, and the dot denotes the dot product on $\mathbb{R}^n$. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of $f$ at $x_0$.
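A quick numerical check of this approximation, with a sample function and point of our own choosing:

```python
import numpy as np

def f(p):
    return p[0]**2 + 3*p[1]

def grad_f(p):
    return np.array([2*p[0], 3.0])

x0 = np.array([1.0, 2.0])
x = x0 + np.array([0.01, -0.02])      # a nearby point

linear = f(x0) + grad_f(x0) @ (x - x0)
print(f(x), linear)                   # 6.9601 vs 6.96: error is O(|x - x0|^2)
```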
As a consequence, the usual properties of the derivative, such as linearity, the product rule, and the chain rule, hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative.
Suppose that $f \colon A \to \mathbb{R}$ is a real-valued function defined on a subset $A$ of $\mathbb{R}^n$, and that $f$ is differentiable at a point $a$. There are two forms of the chain rule applying to the gradient. First, suppose that the function $g$ is a parametric curve; that is, a function $g \colon I \to \mathbb{R}^n$ maps a subset $I \subseteq \mathbb{R}$ into $\mathbb{R}^n$. If $g$ is differentiable at a point $c \in I$ such that $g(c) = a$, then
$$(f \circ g)'(c) = \nabla f(a) \cdot g'(c).$$
More generally, if instead $I \subseteq \mathbb{R}^k$, then the following holds:
$$\nabla (f \circ g)(c) = \big(Dg(c)\big)^{\mathsf{T}} \big(\nabla f(a)\big),$$
where $(Dg)^{\mathsf{T}}$ denotes the transpose Jacobian matrix.
For the second form of the chain rule, suppose that $h \colon I \to \mathbb{R}$ is a real-valued function on a subset $I$ of $\mathbb{R}$, and that $h$ is differentiable at the point $f(a) \in I$. Then
$$\nabla (h \circ f)(a) = h'(f(a))\, \nabla f(a).$$
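The second form is easy to verify symbolically; a sketch assuming SymPy, with sample choices $h(u) = u^2$ and $f(x, y) = x + \sin y$:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
f = x + sp.sin(y)
h = u**2

# grad(h o f) computed directly ...
lhs = [sp.diff(h.subs(u, f), v) for v in (x, y)]
# ... versus h'(f) * grad f from the second form of the chain rule.
rhs = [sp.diff(h, u).subs(u, f) * sp.diff(f, v) for v in (x, y)]

print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])  # [0, 0]
```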
If $f$ is differentiable, then the dot product $(\nabla f)_x \cdot v$ of the gradient at a point $x$ with a vector $v$ gives the directional derivative of $f$ at $x$ in the direction $v$. It follows that in this case the gradient of $f$ is orthogonal to the level sets of $f$. For example, a level surface in three-dimensional space is defined by an equation of the form $F(x, y, z) = c$. The gradient of $F$ is then normal to the surface.
More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form $F(P) = 0$ such that $dF$ is nowhere zero. The gradient of $F$ is then normal to the hypersurface.
Similarly, an affine algebraic hypersurface may be defined by an equation $F(x_1, \ldots, x_n) = 0$, where $F$ is a polynomial. The gradient of $F$ is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
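The orthogonality to level sets can also be seen numerically: along any curve lying in a level set, the gradient is perpendicular to the curve's velocity. A minimal sketch, with the unit sphere as our sample surface:

```python
import numpy as np

def F(p):        # F(x, y, z) = x^2 + y^2 + z^2; the level set F = 1 is the unit sphere
    return p @ p

def grad_F(p):
    return 2 * p

# c(t) = (cos t, sin t, 0) stays on the sphere, so c'(t) is tangent to the
# level set and grad F . c'(t) must vanish.
t = 0.8
c = np.array([np.cos(t), np.sin(t), 0.0])
c_dot = np.array([-np.sin(t), np.cos(t), 0.0])
print(grad_F(c) @ c_dot)    # 0.0 (up to rounding)
```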
Let $\mathbf{v}$ be an arbitrary unit vector. With the directional derivative defined as
$$\frac{\partial f}{\partial \mathbf{v}}(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h},$$
we get, by substituting the function $f(\mathbf{x} + h\mathbf{v})$ with its Taylor series,
$$\frac{\partial f}{\partial \mathbf{v}}(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x}) + h\,\nabla f(\mathbf{x}) \cdot \mathbf{v} + R(h) - f(\mathbf{x})}{h},$$
where $R(h)$ denotes higher-order terms in $h$.
Dividing by $h$ and taking the limit yields a term which is bounded from above by the Cauchy–Schwarz inequality:
$$\left|\frac{\partial f}{\partial \mathbf{v}}(\mathbf{x})\right| = \left|\nabla f(\mathbf{x}) \cdot \mathbf{v}\right| \le \lVert \nabla f(\mathbf{x}) \rVert \, \lVert \mathbf{v} \rVert = \lVert \nabla f(\mathbf{x}) \rVert.$$
Choosing
$$\mathbf{v}^* = \frac{\nabla f(\mathbf{x})}{\lVert \nabla f(\mathbf{x}) \rVert}$$
maximizes the directional derivative, which then equals the upper bound:
$$\frac{\partial f}{\partial \mathbf{v}^*}(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{v}^* = \lVert \nabla f(\mathbf{x}) \rVert.$$
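This conclusion can be illustrated numerically: scanning many random unit vectors, no directional derivative exceeds $\lVert \nabla f \rVert$, and the maximum is approached near $\nabla f / \lVert \nabla f \rVert$. A sketch with a sample function of our own choosing:

```python
import numpy as np

def f(p):
    return p[0]**2 + p[1] * p[2]

def num_grad(p, h=1e-6):
    # Central-difference estimate of the gradient.
    return np.array([(f(p + h*np.eye(3)[i]) - f(p - h*np.eye(3)[i])) / (2*h)
                     for i in range(3)])

p = np.array([1.0, 2.0, -1.0])
g = num_grad(p)

rng = np.random.default_rng(0)
best = -np.inf
for _ in range(10_000):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)          # a random unit vector
    best = max(best, g @ v)         # directional derivative along v

print(best, np.linalg.norm(g))      # best approaches, and never exceeds, |grad f|
```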
Suppose $f \colon \mathbb{R}^n \to \mathbb{R}^m$ is a function such that each of its first-order partial derivatives exists on $\mathbb{R}^n$. Then the Jacobian matrix of $f$ is defined to be an $m \times n$ matrix, denoted by $\mathbf{J}_f(\mathbf{x})$ or simply $\mathbf{J}$. The $(i, j)$th entry is $\mathbf{J}_{ij} = \partial f_i / \partial x_j$. Explicitly,
$$\mathbf{J} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^{\mathsf{T}} f_1 \\ \vdots \\ \nabla^{\mathsf{T}} f_m \end{bmatrix}.$$
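A short symbolic sketch (assuming SymPy) for a sample map $f \colon \mathbb{R}^2 \to \mathbb{R}^2$; each row of the result is the transposed gradient of one component:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x**2 * y, 5*x + sp.sin(y)])

J = f.jacobian([x, y])
print(J)  # Matrix([[2*x*y, x**2], [5, cos(y)]])
```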
In rectangular coordinates, the gradient of a vector field $\mathbf{f} = (f^1, f^2, f^3)$ is defined by
$$\nabla \mathbf{f} = g^{jk} \frac{\partial f^i}{\partial x^j}\, \mathbf{e}_i \otimes \mathbf{e}_k$$
(where the Einstein summation notation is used and the tensor product of the vectors $\mathbf{e}_i$ and $\mathbf{e}_k$ is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
$$\frac{\partial f^i}{\partial x^j} = \frac{\partial (f^1, f^2, f^3)}{\partial (x^1, x^2, x^3)}.$$
In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
$$\nabla \mathbf{f} = g^{jk} \left( \frac{\partial f^i}{\partial x^j} + \Gamma^i{}_{jl} f^l \right) \mathbf{e}_i \otimes \mathbf{e}_k,$$
where $g^{jk}$ are the components of the inverse metric tensor and the $\mathbf{e}_i$ are the coordinate basis vectors.
Expressed more invariantly, the gradient of a vector field $\mathbf{f}$ can be defined by the Levi-Civita connection and metric tensor:
$$\nabla^a f^b = g^{ac} \nabla_c f^b,$$
where $\nabla_c$ is the connection.
So, the local form of the gradient takes the form:
$$\nabla f = g^{ik} \frac{\partial f}{\partial x^k}\, \mathbf{e}_i.$$
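As a standard illustration (not from the article), for polar coordinates on the plane the metric is $g = \operatorname{diag}(1, r^2)$, so $g^{ik} = \operatorname{diag}(1, 1/r^2)$ and the local form gives
$$\nabla f = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\,\mathbf{e}_\theta;$$
since $\mathbf{e}_\theta$ has length $r$, normalizing recovers the familiar $\frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\mathbf{e}}_\theta$ term.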
Generalizing the case $M = \mathbb{R}^n$, the gradient of a function is related to its exterior derivative, since
$$(\partial_X f)(x) = (df)_x(X_x).$$
More precisely, the gradient $\nabla f$ is the vector field associated to the differential 1-form $df$ using the musical isomorphism $\sharp = \sharp^g \colon T^*M \to TM$ (called "sharp") defined by the metric $g$. The relation between the exterior derivative and the gradient of a function on $\mathbb{R}^n$ is a special case of this in which the metric is the flat metric given by the dot product.