**Interesting Paper**: Florian Lengyel and Benoit St-Pierre, Denial Logic:

We define Denial Logic DL, a system of justification logic that models an agent whose justified beliefs are false, who cannot avow his own propositional attitudes and who can believe contradictions but not tautologies of classical propositional logic…. We use DL with what we call coherent negative constant specifications to model a Putnamian brain in a vat with the justified false belief that it is not a brain in a vat, and derive a model of JL in which “I am a brain in a vat” is false. We define the fusion of Denial Logic with the Logic of Proofs to model an agent who can justify and check tautologies and who can believe his justified false beliefs. Denial Logic was inspired by the contemporary debate over anthropogenic global warming.

For many years I have been both an armchair epistemologist and an enthusiast of skeptical organizations like CSICOP. Epistemology traditionally concerns the *justification* of true beliefs. But science and skepticism are also centrally concerned with the *falsification* of wrong beliefs. This paper on denial logic goes further by proposing a formal system for beliefs that are known to be false. Systems of formal justification are usually derived from modal logic, which has a special sub-family known as epistemic modal logic. These systems help classify beliefs as “justified” or “unjustified.” But the “unjustified” status can apply equally to good beliefs that happen to have inadequate support and to bad beliefs that are outright wrong. In that sense, denial logic may provide a much stronger framework for dealing with faulty belief systems.

Denial logic also provides a means for modeling an *aggressive skeptic*, i.e. an agent who is concerned only with knowing that certain beliefs are false. The aggressive skeptic may have no need for true beliefs. The paper’s authors suggest that Denial Logic may be used to model an aggressive skeptical position in the debate on global climate change. In this context, there are certainly economic and political motives that might inspire an agent to adopt a stance of pure denial. For the climate-change-denial agent, it is not sufficient merely to disbelieve the prevailing views on climate change (as a conventional skeptic would). Within the climate-debate context, the agent may have no use for true beliefs about climate change at all; he is concerned only with aggressively debunking the prevailing claims.

Denial Logic is built on Justification Logic (JL), a formal system in which an agent associates *reasons* with logical inferences. The system is described in Artemov’s paper, “Why Do We Need Justification Logic?” As described by Artemov, the motivation for JL is illustrated by the Goldman–Kripke “Red Barn” example:

Suppose I am driving through a neighborhood in which, unbeknownst to me, papier-mâché barns are scattered, and I see that the object in front of me is a barn. Because I have barn-before-me percepts, I believe that the object in front of me is a barn. Our intuitions suggest that I fail to know barn. But now suppose that the neighborhood has no fake red barns, and I also notice that the object in front of me is red, so I know a red barn is there. This juxtaposition, being a red barn, which I know, entails there being a barn, which I do not, “is an embarrassment.”

In this situation, the problem arises from the manner in which evidence is associated with conclusions. If I say *only* that “I see a barn,” then we conclude I am (or might be) mistaken because we know there are fake barns in the area. But if I say “I see a red barn,” then we conclude I am not mistaken because none of the fake barns are red. This seems simple enough in plain English, but it is a real problem for modal logic systems. Consider, for instance, that we are trying to build a computer system that makes automated decisions based on various information. How can the computer be programmed to avoid this kind of problem?

In Justification Logic, the evidence is associated with its corresponding belief via a colon, as in “(I see a barn) : (there is a barn)”. Given our extra knowledge about fake barns, we can say this statement is false. But the alternative statement “(I see a red barn) : (there is a red barn)” is true. From it we can make a logical inference to arrive at “(I see a red barn) : (there is a barn)”. This example demonstrates that the reasons need to be carried along with the logic, so that the same statement (e.g. “there is a barn”) can have different truth-outcomes depending on the exact reasons behind it.
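To make this concrete, here is a minimal sketch in Python (my own illustration, not the paper’s formalism) of carrying a reason term along through an inference, in the style of JL’s application rule: from t : (F → G) and s : F, infer (t·s) : G.

```python
from dataclasses import dataclass

# A justification assertion pairs a reason term with the
# formula it supports, written  t : F  in Justification Logic.
@dataclass(frozen=True)
class Justified:
    reason: str   # the evidence term t
    claim: str    # the formula F

# Application: from  t : (F -> G)  and  s : F  infer  (t*s) : G.
# The combined reason term records *how* the conclusion was reached.
def apply_rule(rule: Justified, premise: Justified) -> Justified:
    antecedent, consequent = rule.claim.split(" -> ")
    assert premise.claim == antecedent, "premise must match the antecedent"
    return Justified(f"({rule.reason} * {premise.reason})", consequent)

# The red-barn inference: "there is a barn" justified by red-barn
# percepts is a different assertion from "there is a barn" justified
# by barn percepts alone, because the reason is carried along.
rule = Justified("logic", "there is a red barn -> there is a barn")
percept = Justified("I see a red barn", "there is a red barn")
conclusion = apply_rule(rule, percept)
print(conclusion.reason, ":", conclusion.claim)
# (logic * I see a red barn) : there is a barn
```

The point of the sketch is that the conclusion is not the bare statement “there is a barn” but that statement stamped with the compound reason that produced it.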

Justification Logic reminds me of *conditional probabilities* that are used in Bayesian inference. Since I have spent the bulk of my career implementing Bayesian concepts in electronic systems, I feel fairly comfortable with reasoning in this framework. In a Bayesian system, a statement S is not necessarily true or false, rather it has an associated probability Pr(S). The probability may change depending on some relevant information I. We say that Pr(S | I) is the probability that statement S is true, given the condition that information I is true. For example Pr(there is a barn) is different from Pr(there is a barn | I see a barn), which is different from Pr(there is a red barn | I see a red barn). In a Bayesian inference, we may apply logical deductions to the statements (e.g. “red barn” therefore “barn”), but when doing so we must drag along all of the conditions that are associated with those statements.
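As a toy illustration of that parallel (the numbers below are invented; only the structure matters), Bayes’ rule reproduces the red-barn distinction:

```python
# A toy Bayesian version of the red-barn example. All probabilities
# are made up for illustration.

# The neighborhood contains real barns and papier-mache fakes;
# none of the fakes are red.
p_real = 0.5             # Pr(real barn) among barn-looking objects
p_fake = 0.5             # Pr(fake barn)
p_red_given_real = 0.4   # some real barns are red
p_red_given_fake = 0.0   # no fake barns are red

# Pr(real | I see a barn): barn-percepts alone can't rule out fakes,
# so the posterior stays at the prior.
p_real_given_see_barn = p_real

# Pr(real | I see a RED barn) via Bayes' rule:
#   Pr(real | red) = Pr(red | real) * Pr(real) / Pr(red)
p_red = p_red_given_real * p_real + p_red_given_fake * p_fake
p_real_given_red = p_red_given_real * p_real / p_red

print(p_real_given_see_barn)  # 0.5 -- might be mistaken
print(p_real_given_red)       # 1.0 -- redness rules out the fakes
```

The conditioning information plays the role of the justification term: the same proposition (“there is a barn”) gets different credences depending on the evidence it is conditioned on.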

It might be naive for me to suggest that a formal system of justification can be built on conditional probabilities; I’m not sure. But it does appear that probability theorists long ago implemented a formal system that associates statements with their reasons, and that properly accounts for such reasons when performing logical operations. Is it possible that a probabilistic framework could encompass the goals of both Justification Logic and Denial Logic?


Thank you for reviewing our paper.

The red barn is a good way to illustrate the problem.

If you don’t mind, we’d like to borrow your “aggressive skeptic”.

Your question is interesting. We have not thought about it. All I can say for now is that Jean-Yves Girard views probabilistic logics with some skepticism.

I am not sure whether he holds an aggressive skepticism or not.

Hehe, well he can be skeptical about probabilistic logic, but I think there is probably something to it. If you intend to reason about things in the real world, I can’t see how to avoid probabilistic reasoning. All physical concepts and theories rest on some degree of approximation, abstraction and simplification. Furthermore all observation is subject to measurement error, misperception or uncertainty (e.g. “Was that a red barn or a brown barn?”). An agent who relies on absolute deduction from sensory data is likely to be surprised a lot, because a small amount of sensory error can propagate catastrophically.

You are welcome to borrow the “aggressive skeptic” term. I’m flattered that you like it.