Uncertainty

Uncertainty or incertitude refers to situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that have already been made, or to the unknown. Uncertainty arises in partially observable or stochastic environments, as well as from ignorance or lack of knowledge. It occurs in a wide range of fields, including information science.


Concepts
Although the terms are used in various ways among the general public, many specialists in quantitative fields have defined uncertainty, risk, and their measurement as follows:


Uncertainty
A state of limited knowledge in which it is impossible to exactly describe one or more of the following: the existing state, goal, or set of options; the full set of possible future states, their probabilities, or their values or utilities to stakeholders; the full set of stakeholders involved; or any other piece of information that inhibits the calculation of the full set of expected values for the options available. [Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of "Intangibles" in Business. Wiley; Arend, R. J. (2024). Uncertainty in Strategic Decision Making: Analysis, Categorization, Causation and Resolution. Palgrave Macmillan.]


Measurement
Uncertainty, only when considered as objective or subjective risk, can be measured through a set of possible states or outcomes with probabilities assigned to each possible state or outcome; for continuous variables, this takes the form of a probability density function. [Kabir, H. D., Khosravi, A., Hosen, M. A., & Nahavandi, S. (2018). "Neural Network-based Uncertainty Quantification: A Survey of Methodologies and Applications". IEEE Access, 6, 36218–36234.]
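As a minimal sketch of this idea (the state names and probabilities here are hypothetical, not from the text), a discrete uncertain quantity can be represented by assigning a probability to each possible state:

```python
# Hypothetical example: uncertainty over a discrete set of states, measured
# by assigning a probability to each possible outcome.
weather = {"sun": 0.7, "rain": 0.2, "snow": 0.1}

# A valid assignment must cover all states and sum to 1.
assert abs(sum(weather.values()) - 1.0) < 1e-9

most_likely = max(weather, key=weather.get)
print(most_likely)  # sun
```

For a continuous variable, the same role is played by a probability density function.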


Second-order uncertainty
In statistics and economics, second-order uncertainty, expressed as confidence over estimates of outcome probabilities, is represented by probability density functions over (first-order) probabilities. [Sundgren, D., & Karlsson, A. (2013). "Uncertainty levels of second-order probability". Polibits, 48, 5–11.]

Opinions in subjective logic [Jøsang, A. (2016). Subjective Logic: A Formalism for Reasoning Under Uncertainty. Springer, Heidelberg] carry this type of uncertainty.
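As an illustrative sketch (the success/failure counts are hypothetical), a Beta distribution is a common way to express second-order uncertainty: a density over the first-order probability itself:

```python
from math import gamma

def beta_pdf(p, a, b):
    """Density of a Beta(a, b) distribution, read here as second-order
    uncertainty: a distribution over the first-order probability p."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * p ** (a - 1) * (1 - p) ** (b - 1)

# After observing 7 successes and 3 failures with a uniform prior, the
# Beta(8, 4) density summarises our confidence about the unknown success
# probability; its mean serves as the point estimate of p.
posterior_mean = 8 / (8 + 4)
print(round(posterior_mean, 3))  # 0.667
density_at_mean = beta_pdf(posterior_mean, 8, 4)
```

A sharply peaked density means high confidence in the probability estimate; a flat one means low confidence, even if the point estimate is the same.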


Risk
Risk exists when the future realized point value of a distribution of possible outcomes is not known but an estimated value can be calculated, and where some possible outcomes have an undesired effect or significant loss. Measurement of risk includes a set of outcome probabilities along with the valuations of those outcomes, where some possible outcomes involve losses; this also includes loss functions over continuous variables. Because an expected value for a decision can be calculated, some do not consider risk to be a true type of uncertainty (see below). [Hubbard, D. (2010). How to Measure Anything: Finding the Value of Intangibles in Business, 2nd ed. John Wiley & Sons; Laffont, J.-J. (1989). The Economics of Uncertainty and Information. MIT Press; Laffont, J.-J. (1980). Essays in the Economics of Uncertainty. Harvard University Press; Chambers, R. G. et al. (2000). Uncertainty, Production, Choice, and Agency: The State-Contingent Approach. Cambridge University Press.]


Risk versus variability
There is a difference between risk and variability. Risk is quantified by a probability distribution that reflects knowledge about the likelihood of what the single, true value of the future quantity will be, as in the case of a roll of the dice. Variability is quantified by a distribution of frequencies of multiple instances of the quantity, derived from observed data, as in the case of a batting average. [Begg, S. H., Welsh, M. B., & Bratvold, R. B. (2014). "Uncertainty vs. Variability: What's the Difference and Why is it Important?". SPE Hydrocarbon Economics and Evaluation Symposium.]
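The distinction can be sketched in code (the at-bat record below is hypothetical): risk is a probability assignment about one unknown future value, while variability is a frequency distribution over many observed instances:

```python
from collections import Counter
from fractions import Fraction

# Risk: a probability distribution over the single, as-yet-unrealized
# outcome of one die roll -- knowledge about likelihood, not observed data.
die_risk = {face: Fraction(1, 6) for face in range(1, 7)}
assert sum(die_risk.values()) == 1

# Variability: a frequency distribution over multiple observed instances,
# e.g. hits (1) and outs (0) across a batter's recorded at-bats.
at_bats = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
frequencies = Counter(at_bats)
batting_average = frequencies[1] / len(at_bats)
print(batting_average)  # 0.3
```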


Knightian uncertainty
In economics, Frank Knight in 1921 distinguished uncertainty from risk, with uncertainty being a lack of knowledge that is immeasurable and impossible to calculate. To Knight, uncertainty is uninsurable while risk is (hypothetically) insurable. Because of the absence of clearly defined statistics in most economic decisions where people face uncertainty, he believed that we cannot measure probabilities in such cases; this is now referred to as Knightian uncertainty.

Knight pointed out that the unfavorable outcome of known risks can be insured during the decision-making process because it has a clearly defined expected probability distribution. Uncertainties have no known expected probability distribution, which can lead to extreme outcomes when borne. Because Knight refers to those who do bear uncertainty as 'entrepreneurs', the field of entrepreneurship includes a stream of research on uncertainty and how it creates opportunities. [Arend, R. J. (2024). "Uncertainty and Entrepreneurship: A Critical Review of the Research, with Implications for the Field". Foundations and Trends in Entrepreneurship, 20(2), 109–244.]

Other taxonomies of uncertainties and decisions include more specific characterizations of uncertainty, specifying what is known, knowable, and unknowable about the phenomenon, as well as how it should be approached from an ethics perspective:


Risk and uncertainty
For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty. If probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the risk has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has been quantified (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.

Some may represent the risk in this example as the "expected opportunity loss" (EOL) or the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, then clearly the EOL alone is not the perceived value of avoiding the risk.
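The arithmetic of this example can be sketched directly (the 25% insurance loading is a hypothetical figure, not from the text):

```python
# Risk quantified as expected opportunity loss (EOL): the chance of the
# loss multiplied by the amount of the loss.
p_rain = 0.10            # quantified probability of the undesired outcome
loss_if_rain = 100_000   # dollars lost if the event is rained out

eol = p_rain * loss_if_rain
print(round(eol))  # 10000

# A risk-neutral organiser values avoiding the risk at exactly the EOL;
# an insurer would charge the EOL plus operating costs and profit --
# the 25% loading below is purely illustrative.
premium = eol * 1.25
print(round(premium))  # 12500
```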

Quantitative uses of the terms uncertainty and risk are fairly consistent among fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk; for example, entropy is a variation on uncertainty used in information theory. But outside of the more mathematical uses of the term, usage varies widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations or threats.

Vagueness is a form of uncertainty in which the analyst is unable to clearly differentiate between two different classes, such as 'person of average height' and 'tall person'. This form of vagueness can be modelled by some variation on Zadeh's fuzzy logic or fuzzy set theory.


Ambiguity is a form of uncertainty in which even the possible outcomes have unclear meanings and interpretations. The statement "He returns from the bank" is ambiguous because its interpretation depends on whether the word 'bank' means "the side of a river" or "a financial institution". Ambiguity typically arises in situations where multiple analysts or observers have different interpretations of the same statements. Ambiguity can also refer to a type of uncertainty where the range of a distribution of possible outcomes is known, but not their probabilities. Daniel Ellsberg is famous for his urn experiments illustrating ambiguity, its delineation from risk, and its separate avoidance. [Ellsberg, D. (1961). "Risk, Ambiguity, and the Savage Axioms". Quarterly Journal of Economics, 75(4), 643–669.]

At the subatomic level, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg uncertainty principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.


Radical uncertainty
The term 'radical uncertainty' was popularised by John Kay and Mervyn King in their book Radical Uncertainty: Decision-Making for an Unknowable Future, published in March 2020. It is distinguished from Knightian uncertainty by whether it is 'resolvable'. If uncertainty arises from a lack of knowledge, and that lack of knowledge is resolvable by acquiring knowledge (such as by primary or secondary research), then it is not radical uncertainty. Only when there are no means available to acquire the knowledge that would resolve the uncertainty is it considered 'radical'.


In measurements
The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM), published by the ISO. Derived works include the National Institute of Standards and Technology (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/CITAC publication "Quantifying Uncertainty in Analytical Measurement". The uncertainty of the result of a measurement generally consists of several components. The components are regarded as random variables, and may be grouped into two categories according to the method used to estimate their numerical values:
  • Type A, those evaluated by statistical methods
  • Type B, those evaluated by other means, e.g., by assigning a probability distribution
By propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation.
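For the simplest case, a measurement result that is a plain sum of independent components, the propagation rule reduces to adding variances. A sketch (the component values are hypothetical):

```python
from math import sqrt

def combined_uncertainty(standard_uncertainties):
    """Combined standard uncertainty of a result that is a sum of
    independent components: propagate (add) the variances, then take
    the square root of the resulting variance."""
    return sqrt(sum(u ** 2 for u in standard_uncertainties))

# A Type A component (from repeated observations) and a Type B component
# (e.g. from a calibration certificate), both as standard uncertainties:
u_a, u_b = 0.03, 0.04
print(round(combined_uncertainty([u_a, u_b]), 6))  # 0.05
```

For a general function of the inputs, each variance is additionally weighted by the squared sensitivity coefficient (partial derivative) of that input.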

In the physical sciences and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. The uncertainty is often the standard uncertainty, which assumes an approximately Gaussian distribution, with the uncertainty expressing one standard deviation. This may be denoted by error bars on a graph, or by the following notations:

  • measured value ± uncertainty
  • measured value with the upper and lower uncertainties stated separately (e.g., as a superscript and subscript)
  • measured value (uncertainty)

In the last notation, parentheses are the concise form of the ± notation. For example, a length of ten and a half meters in a scientific or engineering application could be written 10.5 m or 10.50 m, by convention meaning accurate to within one tenth of a meter or one hundredth, respectively. The precision is symmetric around the last digit: here, half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it is understood that 10.5 means 10.5 ± 0.05, and 10.50 means 10.50 ± 0.005, also written 10.50(5) and 10.500(5) respectively. But if the uncertainty is ± one tenth, it is required to be stated explicitly: 10.5 ± 0.1 and 10.50 ± 0.01, or 10.5(1) and 10.50(1). The numbers in parentheses apply to the numeral to their left and are not part of that number, but part of a notation of uncertainty; they apply to the least significant digits. For instance, 1.00794(7) stands for 1.00794 ± 0.00007, while 1.00794(72) stands for 1.00794 ± 0.00072. This concise notation is used, for example, by IUPAC in stating the atomic masses of elements.
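The concise parenthesis notation can be expanded mechanically: the parenthesized digits are an uncertainty in the least significant digits of the stated value. A sketch (the helper name `parse_concise` is ours, not a standard API):

```python
import re

def parse_concise(s):
    """Expand concise uncertainty notation, e.g. '1.00794(7)', into a
    (value, uncertainty) pair, with the parenthesized digits applied to
    the least significant digits of the value."""
    m = re.fullmatch(r"(\d+\.(\d+))\((\d+)\)", s)
    if m is None:
        raise ValueError(f"not in concise notation: {s!r}")
    value = float(m.group(1))
    # One uncertainty unit per decimal place of the stated value.
    uncertainty = float(f"{m.group(3)}e-{len(m.group(2))}")
    return value, uncertainty

print(parse_concise("1.00794(7)"))   # (1.00794, 7e-05)
print(parse_concise("1.00794(72)"))  # (1.00794, 0.00072)
```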

The middle notation is used when the error is not symmetrical about the value. This can occur, for example, when using a logarithmic scale.

Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements. This procedure neglects systematic errors, however.
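A sketch of this procedure with hypothetical repeated readings (values in arbitrary units):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical repeated measurements of the same quantity:
values = [10.1, 9.9, 10.2, 10.0, 9.8, 10.0, 10.1, 9.9]

single_u = stdev(values)               # uncertainty of any single value
mean_u = single_u / sqrt(len(values))  # standard error of the mean
print(round(mean(values), 2), round(single_u, 3), round(mean_u, 3))
# 10.0 0.131 0.046
```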

When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.
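These coverage figures follow directly from the cumulative properties of the normal distribution and can be checked with the error function:

```python
from math import erf, sqrt

def coverage(k):
    """Probability that a normally distributed error lies within k
    standard deviations of the mean: erf(k / sqrt(2))."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"{k} sigma: {coverage(k):.1%}")
# 1 sigma: 68.3%
# 2 sigma: 95.4%
# 3 sigma: 99.7%
```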

In this context, uncertainty depends on both the accuracy and precision of the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty is. Precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision.


In the media
Uncertainty in science, and science in general, may be interpreted differently in the public sphere than in the scientific community. [Zehr, S. C. (1999). "Scientists' representations of uncertainty". In Friedman, S. M., Dunwoody, S., & Rogers, C. L. (Eds.), Communicating Uncertainty: Media Coverage of New and Controversial Science (pp. 3–21). Mahwah, NJ: Lawrence Erlbaum Associates.] This is due in part to the diversity of the public audience, and the tendency for scientists to misunderstand lay audiences and therefore not communicate ideas clearly and effectively. One example is explained by the information deficit model. Also, in the public realm, there are often many scientific voices giving input on a single topic. For example, depending on how an issue is reported in the public sphere, discrepancies between the outcomes of multiple scientific studies due to methodological differences could be interpreted by the public as a lack of consensus in a situation where a consensus does in fact exist. This interpretation may even have been intentionally promoted, as scientific uncertainty may be managed to reach certain goals. For example, climate change deniers took strategic advice to frame climate change as an issue of scientific uncertainty, which was a precursor to the conflict frame used by journalists when reporting the issue.

"Indeterminacy can be loosely said to apply to situations in which not all the parameters of the system and their interactions are fully known, whereas ignorance refers to situations in which it is not known what is not known." These unknowns, indeterminacy and ignorance, that exist in science are often "transformed" into uncertainty when reported to the public in order to make issues more manageable, since scientific indeterminacy and ignorance are difficult concepts for scientists to convey without losing credibility. Conversely, uncertainty is often interpreted by the public as ignorance. The transformation of indeterminacy and ignorance into uncertainty may be related to the public's misinterpretation of uncertainty as ignorance.

Journalists may inflate uncertainty (making the science seem more uncertain than it really is) or downplay uncertainty (making the science seem more certain than it really is).

One way that journalists inflate uncertainty is by describing new research that contradicts past research without providing context for the change. Journalists may give scientists with minority views equal weight as scientists with majority views, without adequately describing or explaining the state of scientific consensus on the issue. In the same vein, journalists may give non-scientists the same amount of attention and importance as scientists.

Journalists may downplay uncertainty by eliminating "scientists' carefully chosen tentative wording, and by losing these caveats the information is skewed and presented as more certain and conclusive than it really is". Also, stories with a single source or without any context of previous research mean that the subject at hand is presented as more definitive and certain than it is in reality. There is often a "product over process" approach to science journalism that aids, too, in the downplaying of uncertainty. Finally, and most notably for this investigation, when science is framed by journalists as a triumphant quest, uncertainty is erroneously framed as "reducible and resolvable".

Some media routines and organizational factors lead journalists to overstate uncertainty; others lead them to inflate certainty. Because the general public (in the United States) generally trusts scientists, when science stories are covered without alarm-raising cues from special interest organizations (religious groups, environmental organizations, political factions, etc.), they are often covered in a business-related sense, in an economic-development frame or a social-progress frame. The nature of these frames is to downplay or eliminate uncertainty, so when economic and scientific promise are focused on early in the issue cycle, as has happened with coverage of plant biotechnology and nanotechnology in the United States, the matter in question seems more definitive and certain.

Sometimes, stockholders, owners, or advertising will pressure a media organization to promote the business aspects of a scientific issue, and therefore any uncertainty claims which may compromise the business interests are downplayed or eliminated.


Applications
  • Uncertainty-as-risk is designed into games, most notably gambling, where chance is central to play.
  • In scientific modelling, in which the prediction of future events should be understood to have a range of expected values.
  • In data management, uncertainty is commonplace and can be modeled and stored within an uncertain database.
  • In mathematical optimization, uncertainty permits one to describe situations where the user does not have full control over the outcome of the optimization procedure; see scenario optimization and stochastic optimization.
  • In weather forecasting, it is now commonplace to include data on the degree of uncertainty in a forecast.
  • Uncertainty or error is used in science and engineering notation. Numerical values should be expressed only in those digits that are physically meaningful, which are referred to as significant figures. Uncertainty is involved in every measurement, such as measuring a distance or a temperature, the degree depending upon the instrument or technique used to make the measurement. Similarly, uncertainty is propagated through calculations so that the calculated value has some degree of uncertainty depending upon the uncertainties of the measured values and the equation used in the calculation.
  • In physics, the Heisenberg uncertainty principle forms the basis of modern quantum mechanics.
  • In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error.
  • In daily life, measurement uncertainty is often implicit ("He is 6 feet tall" give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary. The expected measurement uncertainty of many measuring instruments (scales, oscilloscopes, force gages, rulers, thermometers, etc.) is often stated in the manufacturers' specifications.
  • Uncertainty can be used in the context of validation and verification of material modeling.
  • Uncertainty has been a common theme in art, both as a thematic device and as a quandary for the artist.
  • Uncertainty is an important factor in economics. According to economist Frank Knight, it is different from risk, where there is a specific probability assigned to each outcome (as when flipping a fair coin). Knightian uncertainty involves a situation that has unknown probabilities.
  • Investing in financial markets such as the stock market involves Knightian uncertainty when the probability of a rare but catastrophic event is unknown.


Philosophy
In Western philosophy the first philosopher to embrace uncertainty was Pyrrho (see the Internet Encyclopedia of Philosophy entry, https://www.iep.utm.edu/pyrrho/), resulting in the Hellenistic philosophies of Pyrrhonism and Academic Skepticism, the first schools of philosophical skepticism. Aporia and acatalepsy represent key concepts in ancient Greek philosophy regarding uncertainty.

William MacAskill, a philosopher at Oxford University, has also discussed the concept of moral uncertainty. [MacAskill, W., Bykvist, K., & Ord, T. (2020). Moral Uncertainty. Oxford: Oxford University Press.] Moral uncertainty is "uncertainty about how to act given lack of certainty in any one moral theory, as well as the study of how we ought to act given this uncertainty."


See also
  • Dempster–Shafer theory
  • Further research is needed
  • Fuzzy set theory
  • Information entropy
  • Interval finite element
  • Keynes' Treatise on Probability
  • Measurement uncertainty
  • Morphological analysis (problem-solving)
  • Propagation of uncertainty
  • Schrödinger's cat
  • Scientific consensus
  • Statistical mechanics
  • Uncertainty quantification
  • Uncertainty tolerance
  • Volatility, uncertainty, complexity and ambiguity


Further reading
  • "Treading Thin Air: Geoff Mann on Uncertainty and Climate Change", London Review of Books, vol. 45, no. 17 (7 September 2023), pp. 17–19. "The burden of this effort must fall almost entirely on the richest people and richest parts of the world, because it is they who continue to gamble with everyone else's fate." (p. 19)

