This article further criticizes the notion of Knightian uncertainty, which is often used to attack equilibrium theory. Section II highlights some past work.

I recently watched an old debate between Caplan and Boettke. The debate focuses on the validity of Austrian economics, but my focus is on Bryan’s criticism of uncertainty.

See 5:41 of this clip where Bryan begins a rebuttal. He claims that Neoclassical Search Theory, properly implemented, can overcome objections about uncertainty.

He likens uncertainty to serendipity, or pleasant surprise. He proceeds to claim that this can be captured by an ordinary search theory model with some additional baseline return.

His point is well taken, but let me present the objection Boettke should have raised: uncertainty cannot be modeled with a baseline benefit. The surprise might just as well consist of a baseline cost, and in any case its size is unknown. As a result, the proper search-theoretic representation of uncertainty would be the following:

F(t) = f(t) + k

F(t) is the outcome expected under uncertainty, f(t) is the underlying search-theoretic functional form (trivially supposed as a function of time), and k is the uncertainty modifier. As you can see, because k is uncertain, unknown in both sign and magnitude, the entire function collapses into uncertainty and the search-theoretic result is undecidable. This is the inevitable result of uncertainty, and it is the conclusion of George Shackle’s radical uncertainty, though it also goes by the names Knightian uncertainty, sheer ignorance, or simply uncertainty. Applied to equilibrium theory, it would entail that we have no reason to believe the economy is generally equilibrating in the long run, or any run.
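The undecidability can be made concrete with a toy sketch. The declining payoff f and the stop-or-continue rule below are my illustrative assumptions, not anyone's actual model: with an additive modifier k of unknown sign, the same search situation yields opposite decisions.

```python
# Toy illustration: a searcher compares the payoff of stopping now
# against the payoff of searching one more period, where the future
# payoff carries an uncertainty modifier k of unknown sign and size.

def f(t):
    """Hypothetical declining search payoff (illustrative only)."""
    return 10.0 / (1.0 + t)

def should_continue(t, k):
    """Continue searching iff next period's modified payoff beats stopping now."""
    return f(t + 1) + k > f(t)

# With k unknown, no decision follows: the answer flips with k's sign.
print(should_continue(1, k=5.0))   # optimistic modifier -> True (continue)
print(should_continue(1, k=-5.0))  # pessimistic modifier -> False (stop)
```

Since k could be any value, the searcher cannot rank continuing against stopping, which is exactly the collapse described above.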

Now let me defend search theory against the above criticism:

1. If such were the case, then the economy would not predictably improve, but it does.
2. If such were the case, then equilibrium theory and neoclassical price theory would be empirically impotent, but they aren’t.
3. If such were the case, then individuals would not be able to rationally connect any particular action to an expected value, but they do.
4. As a corollary to 3, human action would not be observed, but it is.

In contrast to F(t) and its qualitative implications, we observe that people do act rationally on the basis of calculated expected values and decidable means chosen to obtain definite ends. This observation might be called the Law of Decidability. This law should not be confused with that non-existent straw-man, the Law of Certainty. Humans can and do often decide to act with terribly little confidence in success. The only requirement is that some action is perceived as the best available option, not that it is expected to actually work with any confidence.

Uncertainty does exist. Its non-existence is not claimed by me here or anywhere. Rather, I claim that undecidability does not follow from uncertainty. The fact of uncertainty in conjunction with the Law of Decidability can be seen as a Paradox of Decidability: if any consequence might come from any action, how can we choose any course? I hinted at the answer, which is that there is some expectation, even if it is held with low confidence. In statistical terms, there is some point estimate. Paradoxically, the probability of any point estimate in a continuous distribution is 0, and yet it is the single maximum-likelihood estimate. The probability is 0 because a definite probability requires some expected interval, and this interval collapses to a point in the limit.

Statistics holds the answer to the paradox in the existence of the point estimate, but what distribution should we expect? Our expectations should proceed as follows:

1. Horizontal linear distribution
2. Symmetric angled linear distribution
3. Either of the following (I’m not convinced at this time that logic causes one to be preferred prior to the other):
    1. Asymmetric linear distribution
    2. Symmetric non-linear distribution
4. Asymmetric non-linear distribution

From 1 to 4, the earlier cases are properly basic while the later cases are opinionated. Without reason to reject the former and accept the latter, logic demands that we prefer the former.

Options 1, 2, and 3.1 would render the area under the density curve infinite, so F(t) would be undecidable. By the Law of Decidability, then, we must reject these possibilities. The simplest explanation remaining is a symmetric non-linear distribution. Moreover, we have no evidence in the context of this article that an asymmetric distribution exists, so 4 would be inappropriate.
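The infinite-area objection can be illustrated numerically. A toy sketch, with the specific densities my own choices: a horizontal (constant) density over unbounded support accumulates unbounded area as the support grows, while a convergent non-linear form does not.

```python
# A proper density must integrate to 1. A constant density c > 0 over
# unbounded support has unbounded area, so it cannot be normalized;
# a non-linear form such as x**-2 on [1, inf) integrates to exactly 1.

def partial_area(density, lo, hi, steps=100_000):
    """Midpoint Riemann sum of `density` over [lo, hi]."""
    dx = (hi - lo) / steps
    return sum(density(lo + (i + 0.5) * dx) for i in range(steps)) * dx

flat = lambda x: 0.1        # horizontal linear candidate (option 1)
power = lambda x: x ** -2   # non-linear stand-in (tail half of 3.2)

for hi in (10, 100, 1000):
    print(hi, partial_area(flat, 1, hi), partial_area(power, 1, hi))
# The flat area grows without bound; the power-law area approaches 1.
```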

On second thought, 4 might be the most correct explanation, but that discussion is out of scope for this article. To be clear, the symmetry we are referring to is the tendency for things searched for to result in either unexpected benefits or unexpected costs, whether in reality or in expectation. It might be the case that either or both are genuinely asymmetric, and a number of asymmetric expectations from Behavioral Economics come to mind. Whether the final answer is 3.2 or 4 doesn’t much disturb the final result, and 3.2 will be assumed here for simplicity.

The symmetric non-linear distribution may be solvable if the distribution is convergent. Which functional form should we choose? We should expect the least risky and least opinionated function. The simplest nonlinear functional form is X^A. For the relevant integral to converge, A must lie either between 0 and 1 or below -1; other values of A diverge.
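Whatever the exact convergence ranges, the two candidate exponents can be checked numerically on the domains where convergence is in question; the domain split (positive branch near zero, negative branch in the tail) is my reading of the setup.

```python
# Numeric convergence check for the two power-law branches.

def riemann(fn, lo, hi, steps=200_000):
    """Midpoint Riemann sum of fn over [lo, hi]."""
    dx = (hi - lo) / steps
    return sum(fn(lo + (i + 0.5) * dx) for i in range(steps)) * dx

# Positive branch: the integral of x**0.5 over (0, 1] converges (to 2/3).
near_zero = riemann(lambda x: x ** 0.5, 0.0, 1.0)

# Negative branch: the integral of x**-2 over [1, N] stays bounded
# (approaching 1) as N grows, i.e. the tail converges.
tail_small = riemann(lambda x: x ** -2, 1.0, 100.0)
tail_large = riemann(lambda x: x ** -2, 1.0, 10_000.0)

print(near_zero)               # approximately 2/3
print(tail_small, tail_large)  # both bounded near 1
```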

Due to the logic of statistical risk minimization, just as when we estimate the outcome of flipping a coin, we should choose the midpoint of 0 and 1 for the positive A. That is, E[A] = .5. For the negative A we can do the large value a priori estimation trick to create a solvable problem, and the result is that -2 is the risk-minimizing estimate. Now, given some definite point-estimate, t, what does each function entail about the confidence distribution on k?
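The coin-flip analogy can be made concrete. A minimal sketch, where treating "statistical risk" as expected squared error is my assumption: if A is spread uniformly over [0, 1], the estimate minimizing that risk is the midpoint, matching E[A] = .5.

```python
# With A uniform on [0, 1], the expected squared error of an estimate
# a_hat is the integral of (a - a_hat)**2 over [0, 1], approximated
# here by a midpoint sum. The minimizer is the midpoint, 0.5.

def expected_sq_error(a_hat, steps=10_000):
    """Approximate E[(A - a_hat)**2] for A ~ Uniform(0, 1)."""
    dx = 1.0 / steps
    return sum((((i + 0.5) * dx) - a_hat) ** 2 for i in range(steps)) * dx

candidates = [i / 100 for i in range(101)]
best = min(candidates, key=expected_sq_error)
print(best)  # 0.5, the risk-minimizing estimate
```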

Given that either of these two A values, .5 or -2, will do the trick, what is our solution? We should assume maximum risk. Said otherwise, we should prefer the function which converges more slowly. This will result in a wider probability density function and a minimum confidence bound for any given F(t) value. Unfortunately it’s 2 AM and I don’t feel like doing the apparently non-trivial math required to calculate comparative convergence rates. Maybe someone more familiar with that sort of math can just look at the exponents and know which is which. For now, I’ll leave the answer as “whichever converges more slowly.”

**Section II: Prior Articles Contra Uncertainty**

- Categorical Certainty: Contra Radical Uncertainty
- Forecasting and Austrian Economics
- Short Essay Regarding: “The Use of Knowledge in Political Economy: Paretian Insight into a Hayekian Challenge”
- Rational Estimation and Price Under Uncertainty
- Statistical Reasoning with Uncertainty
- A Priori Probability: Larger Values

[…] but something unexpected might come along and muck everything up. In the past I have described ways of overcoming this issue. This article contains 3 sections which do the […]