Machine Learning in Light of Gödel’s Incompleteness Theorems

Crete, the largest island in Greece, has been a hotbed of intellectual curiosity since ancient times. In an ancient fable, a man named Epimenides fell asleep in a Cretan cave while tending his father's sheep. For fifty-seven years he slept in the cavern, which happened to be sacred to the god Zeus. Epimenides' outlandish slumber reportedly lent him the gift of prophecy, transforming him into an oracle.

The Liar Paradox is an intriguing self-referential statement that has long been attributed to Epimenides. In the puzzle, he declares, "All Cretans are liars." Epimenides is himself a Cretan; therefore he is a liar. But if he is a liar, what he says is untrue, and accordingly Cretans are truthful. That, however, implies his statement is honest, and consequently Cretans are liars. But did we not just say that Epimenides is a Cretan? Thus we may go on alternately proving that Epimenides and the Cretans are truthful and untruthful.
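
For fun, here is a minimal sketch of that alternation, treating the statement in its strict self-referential form, as the paragraph above does. Naively re-evaluating the statement against its own truth value never settles:

```python
# Toy sketch of the Liar Paradox: "All Cretans are liars," said a Cretan.
# If the statement is true, its speaker lies, so it must be false; if it
# is false, its speaker speaks truly, so it must be true. The value flips.

def reevaluate(statement_is_true: bool) -> bool:
    return not statement_is_true

value = True
for step in range(6):
    print(f"step {step}: the statement is {value}")
    value = reevaluate(value)
# There is no fixed point: the loop alternates True/False forever,
# which is precisely what makes the statement paradoxical.
```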

At what point one exits the loop depends on whatever other pressing priorities one has in life.

Kurt Gödel, an Austrian mathematician, shrewdly invoked such a paradox in the world of mathematics. His incompleteness theorems show that in any consistent formal system capable of expressing basic arithmetic, there exist statements that are true yet unprovable within that system. To simplify matters greatly, suppose a theorem claims that it is not provable. If the theorem is provable, the system is clearly inconsistent, as the theorem is then provable and unprovable simultaneously. On the other hand, if the theorem is unprovable, then what it claims is the case, so it must be true. Thus a consistent system is, alas, incomplete, as it will never be able to axiomatically arrive at some truths. Gödel's self-referential sentences are examples of such truths, completing the loop.
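
The informal argument can be compressed into a few lines. In a standard modern presentation (not the notation of Gödel's original paper), write Prov(·) for the system's provability predicate and G for the self-referential sentence:

```latex
\begin{align*}
  &G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
    &&\text{($G$ asserts its own unprovability, via diagonalization)}\\
  &\text{If } \vdash G \text{, then both } \mathrm{Prov}(\ulcorner G \urcorner)
    \text{ and } \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
    &&\text{(the system is inconsistent)}\\
  &\text{If } \nvdash G \text{, then } \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
    \text{ holds, i.e. } G \text{ is true}
    &&\text{($G$ is true but unprovable)}
\end{align*}
```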

We can call this the set of elusive true statements. A crucial point to note here is that if a mathematical system is supplied with any such statement, it can check whether the statement is consistent with its axioms. However, even if it turns out to be consistent, the system will never be able to prove whether the statement is actually true. The human mind, operating outside the system of mathematics, will always be able to realize the truth.

There is a perennial debate among various schools of thought on whether Gödel's theorems imply that strong artificial intelligence (A.I.) is unfeasible. (Strong A.I. systems are those that can be applied to any problem and exhibit human qualities of thought and consciousness; weak A.I. systems deal with specific, narrowly defined problems.) One line of reasoning goes: since human minds can perform mathematics, a formal system that adequately models the mind would also have to be capable of carrying out mathematics. There would then be true statements that the machine is incapable of producing but which the mind can, thereby offering a proof by contradiction that strong A.I. is impossible to create. Of course, the debate is rich with nuance, and the above is just an abridged version.

Meanwhile, over the centuries we have developed a plethora of logical constructs (academic theories and disciplines) to systematically shepherd human intelligence toward solving relevant problems. These formal systems start from fundamental truths (axioms) and build up a complex body of knowledge, as information is teased out of the universe by overlaying layers of inference on those building blocks.

With the advent of computers, such solutions were automated. Theories were lifted from books, translated into formal, arithmetico-logical constructs, and fed to machines so that they could spew out truths. These structures dictated to machines the exact way to solve a problem. With the dawn of inexpensive computing, machine learning algorithms are now routinely used to discover patterns in the world around us.

Case in point: A.I. art has been around for more than fifty years. In the past, however, people using computers to generate art had to specify rules for the chosen aesthetics. These days, we have algorithms that can learn aesthetics by themselves.

The algorithms are discipline-agnostic: the same procedure could be applied to spot sharks lurking miles beneath the ocean surface and to detect credit card fraud. Non-parametric machine learning models, in particular, are independent of limiting axioms. The algorithms do not constitute a formal system, so concerns of consistency or completeness do not arise.
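
A minimal sketch of that discipline-agnosticism (the datasets, feature values, and labels below are invented purely for illustration): the very same nearest-neighbour routine can be trained on sonar-style readings or on transaction features, with no domain axioms baked in.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Label a query by majority vote of its k nearest neighbours.
    The routine knows nothing about what its features mean."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Invented toy data: acoustic readings -> shark / no shark
sonar_X = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
sonar_y = ["shark", "shark", "no shark", "no shark"]
print(knn_predict(sonar_X, sonar_y, (0.85, 0.15)))  # shark

# Invented toy data: transaction features -> fraud / legit
card_X = [(500.0, 2.0), (450.0, 3.0), (20.0, 30.0), (15.0, 25.0)]
card_y = ["fraud", "fraud", "legit", "legit"]
print(knn_predict(card_X, card_y, (480.0, 2.5)))    # fraud
```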

Gödel's theorems can be used to argue in favor of such iterative algorithms, because consistent, formal mathematical systems replicating parts of human logic will fail to verify all true statements.

A raging argument in the world of A.I. pivots around the question of whether a machine needs to think strictly like a human in order to solve problems in the most efficient way. Perhaps not.

I mean, when was the last time you were on a plane with flapping wings?

However, machine learning algorithms typically capture correlation rather than causation. In certain contexts this poses a problem, and it would help to have a construct that axiomatically attempts to portray the facts. A relevant example is deciphering complex patterns in medical symptoms. Algorithms crunching through medical histories need to account for various medical interventions and their effects on symptoms, which in turn might have triggered further medication. Failing to grasp the underlying causal structure of the data can yield true patterns but false inferences.
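
A small simulation, entirely synthetic and built only for illustration, shows the failure mode: when severity drives both the intervention and the outcome, a naive correlation makes a genuinely helpful medication look harmful.

```python
import random

random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

treated, outcomes = [], []
for _ in range(10_000):
    severity = random.random()                     # hidden confounder
    t = 1 if severity > 0.5 else 0                 # sicker patients get the drug
    # The drug genuinely helps (-0.3), but severity hurts more (+1.0).
    y = severity - 0.3 * t + random.gauss(0, 0.1)  # symptom score, higher is worse
    treated.append(t)
    outcomes.append(y)

# Naive pattern mining: treatment correlates positively with bad outcomes,
# even though its causal effect is beneficial -- a true pattern, a false inference.
print(round(pearson(treated, outcomes), 2))        # clearly positive (~0.5)
```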

And what about problems that are game-theoretic? Say a piece of code is designed to anticipate the next move of an adaptive adversary. Here, your actions cause your opponent to display specific patterns, which in turn influence your own strategies. Will brute-force computing work here? Probably not. It might make sense to have what psychologists call a Theory of Mind, wherein one gets into the opponent's shoes and rationalizes. Similarly, identifying sarcastic product reviews via text mining is a hard problem if the machine does not operate within a formal, consistent framework of axiomatic beliefs about the thought processes of reviewers. Of course, as Gödel showed, such systems will never be able to verify all correct facts.
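
To see why static pattern mining struggles against an adaptive adversary, here is a toy rock-paper-scissors bout (the opponent's strategy is a deliberately simple frequency predictor, chosen only to make the point):

```python
import random
from collections import Counter

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER_MOVE = {beaten: winner for winner, beaten in BEATS.items()}

def adaptive_opponent(our_history):
    """Predict our most frequent past move and play whatever beats it."""
    if not our_history:
        return random.choice(list(BEATS))
    predicted = Counter(our_history).most_common(1)[0][0]
    return COUNTER_MOVE[predicted]

history, losses = [], 0
for _ in range(1000):
    our_move = "rock"                       # a fixed, pattern-heavy strategy
    their_move = adaptive_opponent(history)
    history.append(our_move)
    if BEATS[their_move] == our_move:       # their move beats ours
        losses += 1
print(f"losses: {losses} / 1000")           # the adversary locks on almost at once
```

Mining the opponent's past moves for static patterns fails here for the same reason: each side needs a model of the other side's model, which is the Theory-of-Mind flavor of reasoning described above.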

However, with the right mix of the two approaches, an adaptive machine might toggle between empirical and axiomatic models to arrive at an optimal set of truths. Incidentally, A.I. has not been used much to enhance theoretical constructs. Silhouetted against the age-old debate of rationalism versus empiricism, the A.I. of our times has been beefing up mostly our empirical prowess.
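
One could imagine the toggle as a thin dispatcher: consult the axiomatic layer first and fall back to the empirical model when no axiom applies. Everything below (the rule, the model, the field names) is invented purely for illustration:

```python
def hybrid_predict(x, rules, model):
    """Toggle between an axiomatic rule base and an empirical model.
    `rules` returns a (verdict, applies) pair; when no rule applies,
    fall back to the learned model."""
    verdict, applies = rules(x)
    return verdict if applies else model(x)

# Illustrative axiomatic layer: a hard domain rule with a clear scope.
def dosage_rules(x):
    if x["age"] < 2:                       # axiom: never recommend drug A to infants
        return ("avoid_drug_A", True)
    return (None, False)                   # outside the rule's scope

# Illustrative empirical layer: a stand-in for any trained model.
def learned_model(x):
    return "drug_A" if x["symptom_score"] > 0.5 else "drug_B"

print(hybrid_predict({"age": 1, "symptom_score": 0.9}, dosage_rules, learned_model))
print(hybrid_predict({"age": 40, "symptom_score": 0.9}, dosage_rules, learned_model))
```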

Gödel's theorems thus provide a strong rationale for not resting machines' analytical competence exclusively on the archaic endeavor of mapping aspects of perceived human rationality onto narrow, formal, mathematical constructs.

Imagine a machine leveraging such a construct to arrive at the inference that is running through your mind as you finish reading this article.

Are you now wondering about the truth that eluded us?

References

  1. Fowler, Thomas (1869). The Elements of Deductive Logic.
  2. Meltzer, Bernard (1962). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Translation of the German original by Kurt Gödel, 1931.
  3. Lucas, J. R. (1969). Minds, Machines and Gödel.
  4. Russell, Stuart and Norvig, Peter (2003). Artificial Intelligence: A Modern Approach.
