The memristor skeptics

Illustration of the memristor in electrical network theory. Image from Wikipedia, produced by Parcly Taxel.

A story of skepticism gone horribly wrong.

In 2008, researchers at HP Labs announced their discovery of the memristor, a type of electrical device that had been predicted by Leon Chua in a 1971 paper titled "Memristor – the missing circuit element." Memristors have been in the news again recently due to HP's announcement of a bold new computing project called The Machine, which reportedly makes heavy use of memristor devices. Thanks to the sudden attention being paid to memristors in the past few years, we now know that they were with us all along, and you can even make one yourself with a few simple hardware items.

Since I teach my department’s introductory course on electronic devices, I’ve been studying memristors to see if it’s time to add them into the basic curriculum. During my reading, I started to notice a trickle of skeptical voices. They appeared in popular science magazines, blog posts, and comment threads, and said some very unexpected things, like “HP didn’t really invent a memristor” and even “the memristor may be impossible as a really existing device.” I soon noticed that several of the critics were published researchers, and some of them had posted their critiques on the arXiv, a preprint site used by credentialed researchers to share draft articles prior to peer review. The skepticism reached its peak in 2012, but fizzled out in 2013. One of those skeptics went out with a bang, crafting a bold conspiracy theory that still echoes in discussion fora and in the comment threads of tech industry articles. This post chronicles the rise and fall of his career as a memristor scholar. I also offer some speculation as to how the debacle could have been avoided.

Continue reading

Non-testable facts are commonplace in mathematically driven science

An observation that lacks a theory

At first inspection, the scientific method seems to dictate that all accepted facts should rest on concrete observations. Based on this notion, some skeptics are quick to dismiss the scientific legitimacy of mathematically driven research. But there are many examples of important scientific findings that are essentially mathematical theorems with no prospect for physical falsification. A simple class of examples is the family of bounds and asymptotes. In this post I’ll examine a couple of specific examples from information science and engineering.

There’s an interesting set of articles that recently appeared on Pigliucci’s Scientia Salon site. The first of these articles, titled “The multiverse as a scientific concept,” defends a mathematically driven hypothesis that has no prospect for empirical validation. This article was authored by Coel Hellier, a professor of astrophysics at Keele University in the UK. The second article, titled “The evidence crisis,” offers a highly skeptical critique of the mathematical research methods used by string theorists, who introduce unobservable physical dimensions (and perhaps other controversial features) in order to produce a self-consistent mathematical theory that unifies the known physical laws. The second article is by Jim Baggott, who holds an Oxford PhD in physical chemistry and has authored several critical books on modern physics.

I am very interested in the relationship between empirical and mathematical research. At just this moment, I have two article revisions in progress on my desktop. The first article provides an almost entirely empirical approach to validate a new heuristic technique; the reviewers are upset that I only have empirically tested results without a single mathematical theorem to back them up. The second article is more theoretically driven, but has limited empirical results; the reviewers complain that the experimental results are inadequate. This is a very typical situation for my field. There is an expectation of balance between theory and experiment. Purely empirical results can easily represent experimental or numerical mistakes, so you should ideally have a predictive theory to cohere with the observations. On the other hand, a strictly theoretical or mathematical result may not have any practical utility, so should be connected to some empirical demonstration (I am in an engineering field, after all).

Since I’m not a physicist, I won’t weigh in on the merits of string theory or the multiverse. In thinking about these topics, however, it occurs to me that there are a lot of scientific concepts that are purely mathematical results, and are effectively unfalsifiable. I think one such example is Shannon’s Capacity Theorem, which plays a foundational role in information theory. Simply put, Shannon’s Theorem predicts that any communication channel should have a maximum information capacity, i.e. a maximum rate at which information can be reliably communicated. There is a whole branch of information science devoted to modeling channels, solving for their capacity, and devising practical information coding techniques that push toward those capacity limits. A large amount of work in this field is purely mathematical.

With regard to empiricism, here are the features that I think are interesting about the capacity theorem: First, capacity is a limit. It tells us that we can’t achieve higher rates on a given channel. In terms of empirical testing, all we can do is build systems and observe that they don’t beat the capacity limit. That is not really an empirical test of the limit itself. Second, we usually don’t measure capacity directly. Instead, we use an assumed model for a hypothetical physical channel, and then apply some mathematical optimization theory to predict or infer the capacity limit.
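
As a concrete illustration of that second point, here is a small Python sketch that computes the capacity of one of the simplest channel models, the binary symmetric channel, from Shannon's closed-form result C = 1 - H(p). Note that nothing is measured here: the capacity is inferred entirely from the assumed model.

```python
import math

def binary_entropy(p):
    """Entropy H(p), in bits, of a coin with bias p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p.

    Shannon's closed-form result: C = 1 - H(p). The capacity is derived
    from the assumed channel model, not observed in any experiment.
    """
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # noiseless channel: 1.0 bit per channel use
print(bsc_capacity(0.5))   # pure noise: 0.0 bits per channel use
print(bsc_capacity(0.11))  # a noisy channel still has positive capacity
```

An empirical test can only confirm that real systems on this channel fail to exceed the computed number; it cannot test the limit itself.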

Given these two features, I think the capacity theorem — along with a huge body of related research — is not truly testable or falsifiable in the way many empiricists would prefer (and I think that’s okay). Here are some specific points:

  1. We cannot falsify the proposition that every channel has a capacity. It is a consequence of the same mathematics that grounds all of probability theory and statistics research. In order to falsify the capacity theorem, we have to discard most other modern scientific practices as well. It is interesting to me that this is a strictly mathematical theorem, yet it forces inescapable conclusions about the physical world.
  2. If we did observe a system that beats capacity, we would assume that the system was measured improperly or used an incorrect channel model. Nearly every graduate student finds a way to “beat” the capacity limit early in their studies, but this is always because they made some mistake in their simulations or measurements. Even if we keep beating capacity and never find any fault in the measurements or models, it still would not suffice to falsify the capacity theorem. It’s a theorem — you can’t contradict it! Not unless you revise the axioms that lie at the theorem’s foundations. Such a revision would be amazing, but it would still have to be consistent with the traditional axioms as a degenerate case, because those axioms generate a system of theories that are overwhelmingly validated across many fields. This revision could therefore not be considered a falsification, but should rather be thought of as an extension to the theory.

The point of this analysis is to show that an unfalsifiable, untestable mathematical result is perfectly fine, if the result arises from a body of theory that is already solidly in place. To add another example, I mentioned above that some researchers try to find information coding schemes that achieve the capacity bound. For a long time (about 50 years), the coding community took a quasi-empirical approach to this problem, devising dozens (maybe even hundreds or thousands) of coding schemes and testing them through analysis and simulation on different channel models. In the 1990s, several methods were finally discovered that come extremely close to capacity on some of the most important channels. To some researchers, these methods were not good enough, since they only appear to approach capacity based on empirical observations. To these researchers, it would be preferable to construct a coding method that is mathematically proven to exactly achieve capacity.

In 2009, a method known as Polar Coding appeared, which was rigorously shown to asymptotically achieve capacity: its performance should improve as the amount of coded data grows, approaching a rate equal to capacity in the limit of infinite data. This was hailed as a great advance in coding and information theory, but again the asymptotic claim is not truly verifiable through empirical methods. We can’t measure what happens when the information size reaches infinity. We can only make mathematical projections. Because of this, some researchers I know have quietly criticized the value of polar codes, calling them meaningless from a practical standpoint. I disagree; I value the progress of mathematical insight in concert with empirical research and practical applications.

To conclude, I want to offer one further observation about the mathematical system from which these theorems arise. When studying the axiomatic development of probability theory, statistics, and stochastic processes, I was really struck by how little attachment they have to empirical observations. They are mathematical frameworks with a number of fill-in-the-gap places where you specify, for instance, a physically plausible probability distribution (a commenter on Baggott’s article similarly described string theory as a mathematical framework for building theories, rather than a single fully-qualified physical theory). But even the physical probability distributions are frequently replaceable by a priori concepts derived, say, from the Bernoulli distribution (i.e. the coin toss process), or the Gaussian distribution under support from the Central Limit Theorem (another purely mathematical result!).
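
The a priori character of a result like the Central Limit Theorem can still be illustrated numerically. The following Python sketch (with assumed parameters: 500 tosses per sum, 2000 samples) checks the Gaussian prediction that roughly 68% of standardized coin-toss sums fall within one standard deviation of the mean.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def standardized_coin_sum(n, p=0.5):
    """Sum of n Bernoulli(p) tosses, centered and scaled to unit variance.

    The Central Limit Theorem, a purely mathematical result, predicts
    that this quantity looks increasingly Gaussian as n grows.
    """
    s = sum(random.random() < p for _ in range(n))
    mean = n * p
    std = (n * p * (1 - p)) ** 0.5
    return (s - mean) / std

# The Gaussian limit predicts roughly 68% of samples within one sigma.
samples = [standardized_coin_sum(500) for _ in range(2000)]
within_one_sigma = sum(abs(z) <= 1 for z in samples) / len(samples)
print(within_one_sigma)
```

The simulation agrees with the theorem, but the direction of authority runs the other way: the mathematics tells us what the simulation must show.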

While we like to think that the history of science is a story of theories devised to explain observations (which may be true in some sciences), in many fields the story is partially reversed. The sciences of probability, statistics, and information theory (among many others) developed first from a priori mathematical considerations which defined the experimental procedures to be used for empirical studies. This history is chronicled in two of my favorite books on scientific history — The Emergence of Probability and The Taming of Chance — both written by philosopher Ian Hacking (who has authored a number of other interesting books worth examining).

Some may rightly argue that these claims are not totally unfalsifiable, since they are anchored to a theory that could have been independently falsified. The main point of this post, however, is that a purely mathematical exposition can reveal novel, very real truths about the physical world — truths that cannot be verified or falsified on their own.

Is this scientism?


Is there science happening here? I need a biologist to tell me.

PZ Myers and Laurence Moran say “Physicians and engineers are not scientists” (a point argued with, I think, malicious intent). Meanwhile Jerry Coyne and others think that car mechanics and plumbers are doing “science, broadly construed.” Sam Harris and Steven Pinker suggest (or at least imply) that scientists will ultimately overtake the humanities; Massimo Pigliucci has strenuously critiqued this latter view, calling it “scientism.”

This debate revolves around a basic rhetorical fallacy: the claim that “scientists” have a unique legitimacy attached to their beliefs, together with a claim of demarcational privilege to decide who is and isn’t a scientist. The arational imposition of intellectual privilege is, I think, the essence of the fuzzily defined “scientism” that non-scientists find threatening. It’s threatening because it is a threat. It attacks the legitimacy of entire classes of scholarship, and the Myers/Moran attack on engineers is one example.

This style of argument is used to de-legitimize a perceived opponent, or (as in Pigliucci’s case) to defend the legitimacy of his own profession. Such defenses are, according to Coyne, “defensive” — check out Coyne’s reaction to a historian who proposed that scientists might benefit from studying history. To paraphrase his position: we (scientists) don’t need you (non-scientists), you need us. On this level, the debate has nothing to do with science or the quality of ideas; instead it is a purely sophistic (and egoistic) effort to disqualify others.

I’ll pause now to remind the reader that I’m an engineer. Speaking as an engineer, I think there is a clear distinction between engineering and science: engineers have to actually get things right or they may suffer immediate economic, functional or ethical consequences. Scientists, on the other hand, have to pass their work through a process of critical review by their peers. The latter process is important to the long-term filtering of ideas, but peer review doesn’t have the same falsifying power as a collapsing bridge, an exploding boiler, a crashing train, a killer radiation leak or a misfired missile. So if we’re talking about legitimacy, I’d sooner trust the beliefs of a randomly selected engineer over those of a random scientist.

But Moran and Myers think engineers are something less. They are annoyed by Ken Ham’s claim that creationists can be successful in scientific careers, something that was argued during the Bill Nye / Ken Ham debate. They are so annoyed by the creationists that they are willing to degrade entire classes of scholars in order to win a fake point.

Continue reading

Santa, Jesus and dinosaurs

Father Christmas rides a goat

A friend of mine once shared her “skeptic origin story,” which also happens to be an amazing Christmas story. I’ll have to paraphrase the story from memory:

When I found out that Santa Claus wasn’t real, I couldn’t believe so many people had been lying to me. I immediately stopped believing in God, Jesus and dinosaurs. They all sounded like made-up fantasies told by the same people who lied about Santa. I eventually started believing in dinosaurs again.

This story is both funny and thought provoking. On one hand, it exposes the plain similarity between religious knowledge and the Santa myth — the latter being a ubiquitous lie in which nearly everyone knowingly participates. On the other hand, it highlights a continual challenge for skeptically minded people: where do I draw the boundaries of my skepticism? This is sort of the amateur version of the demarcation problem: where are the lines that separate (1) total junk; (2) reasonable but wrong beliefs; and (3) questionable topics that warrant further study?

Continue reading

Extraordinary Tweets and the Madness of Crowds

Twitter accelerates the spread of old-fashioned mob thinking. Thanks to the efforts of internet skeptics, older varieties of internet noise, like email hoaxes, seem more or less tamed. But recent blowups — from Elan Gale’s pointless Thanksgiving hoax to the more tragic terrorism accusations aimed at innocent people after the Boston marathon bombing — indicate that the skeptical culture hasn’t come to grips with the Twitter medium. With Twitter, pseudo-information spreads and evolves much faster than it can be verified or corrected, and even experienced skeptics get fooled by the noise.
Continue reading

Structured reasoning with ProbLog


ProbLog is a simple language for probabilistic reasoning. It allows you to write logic programs that account for uncertainty among the facts and propositions, and it can calculate the probabilities of hypotheses or events based on your model. An online tutorial with a web-based calculator is available on the ProbLog website.

Of all the intellectual techniques available for those who pursue a rational worldview, I believe none are more important than Bayesian inference. By saying this is “most important,” I don’t mean to disregard the fundamentals of logic, mathematics, probability, statistics, etc. — those are all prerequisites to understanding Bayesian techniques. There’s been a lot of talk about Bayesianism recently in skeptical circles, with Richard Carrier drawing a lot of attention for his advocacy of Bayesian methods in history. I appreciate that there are a lot of philosophical arguments surrounding Carrier’s and others’ use and application of Bayes’ theorem. As with all reasoning techniques, Bayesian reasoning can be used well or it can be used poorly.
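
To ground the idea, here is a minimal sketch of a single Bayesian update in plain Python (the numbers are illustrative assumptions, not data, and this is not ProbLog syntax):

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior P(H|E) via Bayes' theorem.

    prior:                P(H), belief before seeing the evidence
    likelihood:           P(E|H), chance of the evidence if H is true
    likelihood_given_not: P(E|~H), chance of the evidence if H is false
    """
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a test that is 90% sensitive with a 5%
# false-positive rate, applied to a hypothesis with a 1% prior.
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_given_not=0.05)
print(round(posterior, 3))  # ≈ 0.154
```

Even a fairly accurate test leaves the posterior modest when the prior is low; this base-rate effect is exactly the kind of inference that tools like ProbLog automate over whole networks of uncertain facts.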
Continue reading

There is a difference between science and debate

The goal of science is to build knowledge. The goal of a debate is to win. While these can sometimes look very similar, they are not the same. Online media have created platforms where superficial debates can flourish at the expense of scientific understanding. Skeptical communities who advocate the scientific worldview already promote a widely known collection of rules for debate. They should also consider articulating “best practices” for constructive discussion, recognizing that most discussions do not need to become debates.

Science concerns itself with analysis, observation, demonstration and refutation with the goal of building knowledge. While scientific discussions often involve the juxtaposition of arguments, they rarely proceed in the manner of formal debates unless the occasion calls for an immediate decision or action — as in a peer review decision or the adoption of a research agenda. When debate happens, it requires extensive preparation by the participants and is usually moderated by a decision maker, such as a journal editor, program manager or committee chair.
Continue reading