Math: from Anxiety to Hostility

There’s a dustup happening between science writer John Horgan and several popular authors in the Skeptical (with a capital-S) community. Horgan’s arguments are pretty dull, but one point caught my eye: he argues that grand cosmological theories are pseudoscience, no better than psychoanalysis or transhumanism. The alleged reason is that they are “untestable.” But lurking beneath this inch-deep argument is an implied hostility to mathematical theory. Math hostility is extremely widespread, even among scientists and engineers. I think it’s a form of xenophobia: people fear what they don’t understand.

But here’s the thing: mathematical theories are built for the sole purpose of thinking precisely and avoiding contradictions. If you believe a set of physical laws is true, then you have to accept the mathematical system that is built from those laws; otherwise you commit a contradiction. In some cases, as with string theory, it may be necessary to insert some untested propositions in order to complete the theory’s mathematical structure (disclaimer: I’m no string theorist, but I’ll do my best with what I’ve gleaned about it from popular literature). By doing this, a hypothesis is born. It’s not arbitrary — it’s constrained by the mathematics inherited from known physical laws. If you can’t think of any way to test the hypothesis, that’s no reason to stop working on it. This is what we call “reasoning”: you keep doing the math until you figure out some way to test it.

That’s what science is: thinking really hard about the world, devising explanations that help us understand things, expunging contradictions, and (where possible) connecting independent lines of evidence. Folks like Horgan tend to get stuck on simplified explanations of science, like the falsifiability criterion, and mistake those explanations for prescriptive rules of “the game”. But the practice of science is a phenomenon, and the falsifiability criterion is just one of many post hoc theories developed to explain that phenomenon. Like a lot of science spectators, Horgan doesn’t get that. He looks at sophisticated modern theories and says something I’ve seen many times before:

Some string and multiverse true believers, like Sean Carroll, have argued that falsifiability should be discarded as a method for distinguishing science from pseudo-science. You’re losing the game, so you try to change the rules.

I’m tempted to codify this phrasing and name it the “Crank’s Gambit”: the claim that a well-established branch of science has gone rogue, has abandoned the true principles of the scientific method, and now wants to retroactively change definitions in order to cover up its fraud. This pattern of argument is well represented among science deniers; I recently addressed it in my response to “memristor skeptics” in my own research field.

Apparently Horgan has held this view for a long time. Krauss (quoted by Coyne) describes the gist of Horgan’s book, The End of Science, as

John Horgan was a respected science writer years ago up until he wrote a book entitled The End of Science, which essentially argued that much of physics had departed from its noble traditions and now had ventured off into esoterica which had no relevance to the real world, and would result in no new important discoveries—of course, this was before the discovery of an accelerating universe, the Higgs Boson, and the recent exciting discovery of gravitational waves!

That description, “esoterica with no relevance to the real world,” is the most common complaint against highly abstract or mathematical theories. It echoes the sentiment from one comment made on my memristor post: “Who the hell is still believing that the ‘memristor’ – the so-called fourth basic component of electronic circuits – exists in physical reality? …the ‘memristor’ is nothing else but a mathematical curiosity.” These comments betray an underlying suspicion that mathematics cannot describe the real world. In my previous post I quoted Tesla, who said something similar in reaction to general relativity:

Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality…. The scientists of today think deeply instead of clearly. One must be sane to think clearly, but one can think deeply and be quite insane.

For the past week I’ve been at a conference where mathematicians mingle with circuit engineers, and several conversations have turned toward the deep-seated mistrust of mathematics that we encounter in our own professional fields. There are many who genuinely believe that if you aren’t building and measuring something, then what you are doing is fake. But this view is deeply dysfunctional. For one thing, it’s often prohibitively expensive to build and measure interesting things. For another, it’s impossible to know what would be interesting to build and measure unless you’ve invested a lot of rigorous thought beforehand. That’s what theory is for. As for those who are so skeptical of it, my best guess is that they simply don’t get the math, and they resent what they don’t understand.

Tesla vs Edison: just the facts

Every year around this time I deliver a short lecture on the history of electronics. The timing of this lecture happens to roughly coincide with the anniversary of Nikola Tesla’s death, when the internet is abuzz with pop science articles, posts and tweets about the magical wizardry of Tesla. These stories are frequently accompanied by tales of Edison’s depravity as Tesla’s alleged arch-nemesis. The story alleges that Edison was obstinately fixated on a doomed technology (DC power distribution), and did everything in his power to block the arrival of the better option (AC power distribution), championed by Tesla. (Of course, nothing is so simple, and even today there is vigorous debate over the comparative advantages of DC vs AC power grids.)

This tale of moral conflict among legendary inventors has been going on for decades; I recall enthusiastically reading Tesla: Man out of Time as a first-year undergraduate student. More recently, the story was re-popularized by The Oatmeal in a cartoon titled “Why Nikola Tesla was the greatest geek who ever lived.” The story is fun. I get it; I even have an Oatmeal T-shirt that says “Tesla > Edison”. Pop-science culture is virtually saturated with Tesla; he’s even featured as a super-hero in a Kickstarter-funded cartoon called “Super Science Friends.” In spite of all this fandom, we still hear statements like this one, from one of the creators of Super Science Friends:

when you here [sic] about Tesla and Edison and all the drama, how Tesla is responsible for a lot of the things we use today but isn’t given any credit.

It’s very strange to say Tesla received no credit; he was nominated for the Nobel prize, and his name was given to the SI unit of magnetic flux density. The story is a classic revisionist tale: the secret story of the suppressed or forgotten savior of the world. It feels like a subversive, almost conspiratorial counter-history. Only the initiated know the truth… except it’s not really accurate. Not remotely. And while Tesla is a great figure in the history of technology, the story is not very fair to Edison, who completely deserves the credit history has given him. It is also unfair to the many other contributors in the history of electricity and electronics. Tesla is simply not the “father” of electricity or of AC; he was one of many important men and women who made numerous incremental contributions. And Edison is simply not a murderous villain who nearly robbed the world of its future; you can find character flaws and moral failings in every person, including both Tesla and Edison.


Only America has this problem.

Guard bear threatens pedestrians. [Image by Gillfoto, CC BY-NC-SA 2.0]

A few days ago, we saw yet another tragic massacre in which a frustrated young man blocked the doors of a classroom and released a pride of lions to attack the defenseless people trapped inside. A day later came the tragic story of a child eaten by a negligent neighbor’s animal. And on Friday, two more school maulings in a single day. The rapid succession of violent events has left Americans struggling to understand the cause of all these injuries and deaths from large predators. Many wonder if America’s unique habit of collecting exotic large predators might be the underlying cause of all these people being eaten by exotic large predators.

But conservative pundits are skeptical, and argue that deeper causes, not lions and bears, are more likely to blame for the epidemic of people being eaten by lions and bears. “These things happen,” said presidential candidate Donald Trump, who suggested that similar tragedies could be avoided if professors had predators of their own. Some experts respond by noting instances where professors have used their animals to attack colleagues, and other cases where large cats or bears were inadvertently left unattended in student restrooms.


Free will and guilty machines

Do machines have moral responsibility?

Do we have free will to determine our own choices, or are we merely sophisticated machines? Can a machine be morally responsible for its actions? Do the words “guilt” and “innocence” even apply? A lot of popular authors in science and skepticism would answer no, but I believe those words still apply.

A number of scientists have been in a philosophical mood lately, and some have decided to fix their attention on the ancient problem of free will: Do we really have deep freedom to determine our choices and actions, or are we automata (machines) whose thoughts and actions are fully determined by physical laws? Recent scientific discoveries show new details of what kinds of automata we might be. These discoveries are not exactly shocking, since scientists and philosophers have already spent centuries working out what it means to be both a person and an automaton. But the “new” determinists, who include popular authors Sam Harris and Jerry Coyne, argue for major changes in our social and moral understanding based on the alleged logical implications of determinism.

In this post, I summarize some of their arguments along with major problems I see in them. In short, my view is that we cannot dispose of guilt without also disposing of innocence. While I agree with the conclusion that we should seek more empathic approaches to retributive punishment, we cannot entirely dispose of retributive ideas. And whatever we conclude about morality, justice and punishment, it should probably not be deduced from metaphysics.

The memristor skeptics

Illustration of the memristor in electrical network theory. Image from Wikipedia, produced by Parcly Taxel.

A story of skepticism gone horribly wrong.

In 2008, researchers at HP Labs announced their discovery of the memristor, a type of electrical device that had been predicted by Leon Chua in a 1971 paper titled “Memristor-The Missing Circuit Element.” Memristors have been in the news again recently due to HP’s announcement of a bold new computing project called The Machine, which reportedly makes heavy use of memristor devices. Thanks to the sudden attention being paid to memristors in the past few years, we now know that they were with us all along, and you can even make one yourself with a few simple hardware items.

Since I teach my department’s introductory course on electronic devices, I’ve been studying memristors to see if it’s time to add them into the basic curriculum. During my reading, I started to notice a small percolation of skeptical voices. They appeared in popular science magazines, blog posts, and comment threads, and said some very unexpected things, like “HP didn’t really invent a memristor” and even “the memristor may be impossible as a really existing device.” I soon noticed that several of the critics were published researchers, and some of them had published their critiques on the arXiv, a preprint site used by credentialed researchers to post draft articles prior to peer review. The skeptics reached their peak in 2012, but fizzled out in 2013. One of those skeptics went out with a bang, crafting a bold conspiracy theory that still echoes in discussion fora and in the comment threads of tech industry articles. This post chronicles the rise and fall of his career as a memristor scholar. I also offer some speculation as to how the debacle could have been avoided.


Escaping the traps of Facebook, Google and other centralized data hoards

A furor erupted this week over a research project conducted by Facebook in which they manipulated the feeds of over 600,000 users in order to measure their emotional responses. To many, this sounds like a trivial intrusion, perhaps on par with the insertion of advertising content. But several scientists have argued that it constitutes a serious breach of established research ethics — namely the requirement for informed consent. In the world of scientific research, the bar for informed consent is quite high. Facebook chose to rely on their Terms of Use as a proxy for informed consent, but that is unacceptable and would establish a dangerous precedent for eroding the rights of future study participants. An author at the Skepchick network contributed this critique of Facebook’s behavior:

What’s unethical about this research is that it doesn’t appear that Facebook actually obtained informed consent. The claim in the paper is that the very vague blanket data use policy constitutes informed consent, but if we look at the typical requirements for obtaining informed consent, it becomes very clear that their policy falls way short. The typical requirements for informed consent include:

  • Respect for the autonomy of individual research participants
  • Fully explain the purposes of the research that people are agreeing to participate in in clear, jargonless language that is easy to understand
  • Explain the expected duration of the study
  • Describe the procedures that will happen during the study
  • Identify any experimental protocols that may be used
  • Describe any potential risks and benefits for participation
  • Describe how confidentiality will be maintained
  • A statement acknowledging that participation is completely voluntary, that a participant may withdraw participation at any time for any or no reason, and that any decision not to continue participating will incur no loss of benefits or other penalty.

Of course this level of detail cannot be covered by blanket “Terms of Use” that apply to all users of a general-purpose communication platform. Slate’s Katy Waldman agrees that Facebook’s study was unethical:

Here is the only mention of “informed consent” in the paper: The research “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

That is not how most social scientists define informed consent.

Here is the relevant section of Facebook’s data use policy: “For example, in addition to helping people see and find things that you do and share, we may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

So there is a vague mention of “research” in the fine print that one agrees to by signing up for Facebook. As bioethicist Arthur Caplan told me, however, it is worth asking whether this lawyerly disclosure is really sufficient to warn people that “their Facebook accounts may be fair game for every social scientist on the planet.”

Of course Facebook is no stranger to deceptive and unethical behavior. We may recall their 2012 settlement with the Federal Trade Commission, which charged “that Facebook deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.”

The problem is simple: Facebook is a centralized service that aggregates intimate data on millions of users. They need to find ways to profit from that data — our data — and we have little control over how their activity might disadvantage or manipulate the users. Their monetization strategies go beyond their already troubling project to facilitate targeted ads from third-party apps, apps that you might assume have no relationship to your Facebook activities. Facebook also manages the identity and contact networks of those users, making it difficult to leave the platform without becoming disconnected from your social network. It is a trap. Last week a Metro editorial claimed that it’s getting worse, and recommended that we all quit “cold turkey.” Some users have migrated over to Google services as an escape, but Google has faced similar FTC charges that reveal it isn’t any better. So Google is just another mask on the same fundamental problems.

So what is the fix? I’m putting my money on The Red Matrix, a solution that supports distributed identity, decentralized social networking, content rights management and cloud data services.

The core idea behind the Red Matrix is to provide an open specification and protocol for delivering contemporary internet services in a portable way, so that users are not tied to a single content provider. The underlying protocol, called “zot,” is designed to support a mix of public and privately shared content, providing encryption and separating a user’s identity from their service provider.

While still in its early stages, the Red Matrix provides core features comparable to WordPress, Drupal, Dropbox, Evernote and of course social networking capabilities. It is hard to summarize the possibilities of this emerging platform. I’m still discovering new ways to leverage the platform for things ranging from personal note management to blogging. Although the Red Matrix is small, it is an open source project with a fanatical base of users and developers, which makes it likely to endure and grow.

This seems like a good time to announce the Red Matrix companion channel for this site: www.bawker.net/channel/FairCoinToss. This channel acts as a “stream of consciousness” for material related to this blog, containing supplemental information, technical posts, short comments, reposts of news items, and other miscellanea. The primary WordPress site will be reserved for more detailed posts. Any readers are welcome to comment or otherwise interact by joining the Red Matrix at my server or one of the other public servers in the Red Matrix network.

Non-testable facts are commonplace in mathematically driven science

An observation that lacks a theory

At first inspection, the scientific method seems to dictate that all accepted facts should rest on concrete observations. Based on this notion, some skeptics are quick to dismiss the scientific legitimacy of mathematically driven research. But there are many examples of important scientific findings that are essentially mathematical theorems with no prospect for physical falsification. A simple class of examples is the family of bounds and asymptotes. In this post I’ll examine a couple of specific examples from information science and engineering.

There’s an interesting set of articles that recently appeared on Pigliucci’s Scientia Salon site. The first of these articles, titled “The multiverse as a scientific concept,” defends a mathematically driven hypothesis that has no prospect for empirical validation. This article was authored by Coel Hellier, a professor of astrophysics at Keele University in the UK. The second article, titled “The evidence crisis,” offers a highly skeptical critique of the mathematical research methods used by string theorists, who introduce unobservable physical dimensions (and perhaps other controversial features) in order to produce a self-consistent mathematical theory that unifies the known physical laws. The second article is by Jim Baggott, who holds an Oxford PhD in physical chemistry and has authored some critical books on modern physics, like this one.

I am very interested in the relationship between empirical and mathematical research. At just this moment, I have two article revisions in progress on my desktop. The first article provides an almost entirely empirical approach to validate a new heuristic technique; the reviewers are upset that I only have empirically tested results without a single mathematical theorem to back them up. The second article is more theoretically driven, but has limited empirical results; the reviewers complain that the experimental results are inadequate. This is a very typical situation for my field. There is an expectation of balance between theory and experiment. Purely empirical results can easily represent experimental or numerical mistakes, so you should ideally have a predictive theory to cohere with the observations. On the other hand, a strictly theoretical or mathematical result may not have any practical utility, so it should be connected to some empirical demonstration (I am in an engineering field, after all).

Since I’m not a physicist, I won’t weigh in on the merits of string theory or the multiverse. In thinking about these topics, however, it occurs to me that there are a lot of scientific concepts that are purely mathematical results, and are effectively unfalsifiable. I think one such example is Shannon’s Capacity Theorem, which plays a foundational role in information theory. Simply put, Shannon’s Theorem predicts that any communication channel should have a maximum information capacity, i.e. a maximum rate at which information can be reliably communicated. There is a whole branch of information science devoted to modeling channels, solving for their capacity, and devising practical information coding techniques that push toward those capacity limits. A large amount of work in this field is purely mathematical.
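To make this concrete, here is a minimal sketch in Python (my own illustration, not part of the original theory; the crossover probability and SNR values are arbitrary) showing that capacity is something you compute from a channel model rather than something you read off an instrument:

```python
import math

def binary_entropy(p):
    """Binary entropy H2(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def awgn_capacity(snr):
    """Capacity of a real AWGN channel at the given linear SNR, in bits per use."""
    return 0.5 * math.log2(1.0 + snr)

print(f"BSC(p=0.11):       C = {bsc_capacity(0.11):.3f} bits per channel use")
print(f"AWGN (SNR = 0 dB): C = {awgn_capacity(1.0):.3f} bits per channel use")
```

Both numbers come entirely from the assumed model (a crossover probability, a signal-to-noise ratio); nothing here is measured.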

With regard to empiricism, here are the features that I think are interesting about the capacity theorem: First, capacity is a limit. It tells us that we can’t achieve higher rates on a given channel. In terms of empirical testing, all we can do is build systems and observe that they don’t beat the capacity limit. That is not really an empirical test of the limit itself. Second, we usually don’t measure capacity directly. Instead, we use an assumed model for a hypothetical physical channel, and then apply some mathematical optimization theory to predict or infer the capacity limit.
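As a toy illustration of both features (again my own sketch, with arbitrary parameters): we can simulate a concrete code, say a three-fold repetition code over a binary symmetric channel, and observe that it operates below the capacity computed from the channel model while still making errors. That observation is consistent with the limit, but it is not a test of the limit itself.

```python
import random

def repetition_code_trial(p, reps=3, n_bits=100_000, seed=1):
    """Monte Carlo sketch: send each bit `reps` times over a BSC(p), decode by
    majority vote. Returns (code rate, observed bit error rate)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        received = [bit ^ (rng.random() < p) for _ in range(reps)]
        errors += (sum(received) > reps // 2) != bit
    return 1.0 / reps, errors / n_bits

rate, ber = repetition_code_trial(p=0.11)
# For BSC(0.11) the computed capacity is roughly 0.5 bits per use (see the sketch
# above); this crude code runs at rate 1/3 and still leaves residual errors.
print(f"rate = {rate:.3f}, observed bit error rate = {ber:.4f}")
```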

Given these two features, I think the capacity theorem — along with a huge body of related research — is not truly testable or falsifiable in the way many empiricists would prefer (and I think that’s okay). Here are some specific points:

  1. We cannot falsify the proposition that every channel has a capacity. It is a consequence of the same mathematics that grounds all of probability theory and statistics research. In order to falsify the capacity theorem, we have to discard most other modern scientific practices as well. It is interesting to me that this is a strictly mathematical theorem, yet it forces inescapable conclusions about the physical world.
  2. If we did observe a system that beats capacity, we would assume that the system was measured improperly or used an incorrect channel model. Nearly every graduate student finds a way to “beat” the capacity limit early in their studies, but this is always because they made some mistake in their simulations or measurements. Even if we keep beating capacity and never find any fault in the measurements or models, it still would not suffice to falsify the capacity theorem. It’s a theorem — you can’t contradict it! Not unless you revise the axioms that lie at the theorem’s foundations. Such a revision would be amazing, but it would still have to be consistent with the traditional axioms as a degenerate case, because those axioms generate a system of theories that are overwhelmingly validated across many fields. This revision could therefore not be considered a falsification, but should rather be thought of as an extension to the theory.

The point of this analysis is to show that an unfalsifiable, untestable mathematical result is perfectly fine, if the result arises from a body of theory that is already solidly in place. To add another example, I mentioned above that some researchers try to find information coding schemes that achieve the capacity bound. For a long time (about 50 years), the coding community took a quasi-empirical approach to this problem, devising dozens (maybe even hundreds or thousands) of coding schemes and testing them through pure analysis and simulations on different channel models. In the 1990s, several methods were finally discovered that come extremely close to capacity on some of the most important channels. To some researchers, these methods were not good enough, since they only appear to approach capacity based on empirical observations. To these researchers, it would be preferable to construct a coding method that is mathematically proven to exactly achieve capacity.

In 2009, a method known as polar coding appeared, which was rigorously shown to asymptotically achieve capacity: its performance keeps improving as the amount of coded data grows, and in the limit of infinite block length it operates at a rate equal to capacity. This was hailed as a great advance in coding and information theory, but again the asymptotic claim is not truly verifiable through empirical methods. We can’t measure what happens when the information size reaches infinity. We can only make mathematical projections. Because of this, some researchers I know have quietly criticized the value of polar codes, calling them meaningless from a practical standpoint. I disagree; I value the progress of mathematical insight in concert with empirical research and practical applications.
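The binary erasure channel is the textbook case where this asymptotic behavior can be computed exactly, so here is a small sketch (my own, with an arbitrary erasure probability and an arbitrary threshold for “nearly noiseless”) of channel polarization, the mechanism behind polar codes. The fraction of nearly noiseless synthesized channels creeps toward the capacity 1 − ε but only reaches it in the infinite limit:

```python
def polarize_bec(eps, levels):
    """Erasure probabilities of the 2**levels synthesized channels obtained by
    recursively applying the polar transform to a binary erasure channel BEC(eps)."""
    channels = [eps]
    for _ in range(levels):
        nxt = []
        for z in channels:
            nxt.append(2 * z - z * z)  # degraded ("minus") channel
            nxt.append(z * z)          # upgraded ("plus") channel
        channels = nxt
    return channels

eps = 0.5  # BEC(0.5) has capacity 1 - eps = 0.5 bits per channel use
for n in (4, 8, 12, 16):
    chans = polarize_bec(eps, n)
    good = sum(z < 1e-3 for z in chans) / len(chans)
    print(f"n = {n:2d} ({len(chans):5d} channels): fraction nearly noiseless = {good:.3f}")
```

No finite run of this recursion ever reaches the capacity fraction; the claim that it gets there in the limit is purely mathematical.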

To conclude, I want to offer one further observation about the mathematical system from which these theorems arise. When studying the axiomatic development of probability theory, statistics, and stochastic processes, I was really struck by how little attachment they have to empirical observations. They are mathematical frameworks with a number of fill-in-the-gap places where you specify, for instance, a physically plausible probability distribution (a commenter on Baggott’s article similarly described string theory as a mathematical framework for building theories, rather than a single fully-qualified physical theory). But even the physical probability distributions are frequently replaceable by a priori concepts derived, say, from the Bernoulli distribution (i.e. the coin toss process), or the Gaussian distribution under support from the Central Limit Theorem (another purely mathematical result!).
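As a small example of that last point (my own sketch, with arbitrary sample sizes): the Central Limit Theorem tells us, before any measurement, how often standardized sums of fair coin tosses should land within one standard deviation of the mean, and a simulated “experiment” simply falls in line with the a priori prediction.

```python
import math
import random

def coin_toss_clt(n_tosses=1000, n_trials=10_000, seed=0):
    """Fraction of standardized Bernoulli(1/2) sums landing within +/- 1 sigma."""
    rng = random.Random(seed)
    mean = n_tosses * 0.5
    std = math.sqrt(n_tosses * 0.25)
    inside = 0
    for _ in range(n_trials):
        s = sum(rng.randint(0, 1) for _ in range(n_tosses))
        inside += abs((s - mean) / std) <= 1.0
    return inside / n_trials

observed = coin_toss_clt()
predicted = math.erf(1.0 / math.sqrt(2.0))  # Gaussian P(|Z| <= 1), about 0.683
print(f"simulated: {observed:.3f}   a priori Gaussian prediction: {predicted:.3f}")
```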

While we like to think that the history of science is a story of theories devised to explain observations (which may be true in some sciences), in many fields the story is partially reversed. The sciences of probability, statistics, and information theory (among many others) developed first from a priori mathematical considerations which defined the experimental procedures to be used for empirical studies. This history is chronicled in two of my favorite books on scientific history — The Emergence of Probability and The Taming of Chance — both written by philosopher Ian Hacking (who has authored a number of other interesting books worth examining).

Some may rightly argue that these claims are not totally unfalsifiable, since they are anchored to a theory that could have been independently falsified. The main point of my post, however, is that a purely mathematical argument can reveal novel, very real truths about the physical world: truths that cannot be verified or falsified on their own.