Failure is the social concept of not meeting a desirable or intended goal, and is usually viewed as the opposite of success. The criteria for failure depend on context, and may be relative to a particular observer or belief system. One person might consider a failure what another person considers a success, particularly in cases of direct competition or a zero-sum game. Similarly, the degree of success or failure in a situation may be differently viewed by distinct observers or participants, such that a situation that one considers to be a failure, another might consider to be a success, a qualified success or a neutral situation.
It may also be difficult or impossible to ascertain whether a situation meets criteria for failure or success due to ambiguous or ill-defined definitions of those criteria. Finding useful and effective criteria to judge the success or failure of a situation may itself be a significant task.
Most of the items listed below had high expectations, significant financial investments, and/or widespread publicity, but fell far short of success. Due to the subjective nature of "success" and "meeting expectations", there can be disagreement about what constitutes a "major flop".
Sometimes, commercial failures can receive a cult following, with the initial lack of commercial success even lending a cachet of subcultural coolness.
Wan and Chan note that outcome and process failures are associated with different kinds of detrimental effects to the consumer. They observe that "an outcome failure involves a loss of economic resources (i.e., money, time) and a process failure involves a loss of social resources (i.e., social esteem)".
By the year 1884, Mount Holyoke College was evaluating students' performance on a 100-point or percentage scale and then summarizing those numerical grades by assigning letter grades to numerical ranges. Mount Holyoke assigned letter grades A through E, with E indicating lower than 75% performance and designating failure. The A–E system spread to Harvard University by 1890. In 1898, Mount Holyoke adjusted the grading system, adding an F grade for failing (and adjusting the ranges corresponding to the other letters). The practice of letter grades spread more broadly in the first decades of the 20th century. By the 1930s, the letter E was dropped from the system, for unclear reasons.
Both actions and omissions may be morally significant. The classic example of a morally significant omission is one's failure to rescue someone in dire need of assistance. It may seem that one is morally culpable for failing to rescue in such a case.
Patricia G. Smith notes that there are two ways one can not do something: consciously or unconsciously. A conscious omission is intentional, whereas an unconscious omission may be negligent, but is not intentional. Accordingly, Smith suggests, we ought to understand failure as involving a situation in which it is reasonable to expect a person to do something, but they do not do it—regardless of whether they intend to do it or not.
Randolph Clarke, commenting on Smith's work, suggests that "what makes a failure to act an omission is the applicable norm". In other words, a failure to act becomes morally significant when a norm demands that some action be taken, and it is not taken.
Wired magazine editor Kevin Kelly explains that a great deal can be learned from things going wrong unexpectedly, and that part of science's success comes from keeping blunders "small, manageable, constant, and trackable". He uses the example of engineers and programmers who push systems to their limits, breaking them to learn about them. Kelly also warns against creating a culture that punishes failure harshly, because this inhibits the creative process and risks teaching people not to communicate important failures with others (e.g., null results). Failure can also be used productively, for instance to identify ambiguous cases that warrant further interpretation. When studying biases in machine learning, for instance, failure can be seen as a "cybernetic rupture where pre-existing biases and structural flaws make themselves known".