**Historical Motivations for Symbolic Logic and Set Theory**

The general purpose of this site has been to introduce conceptual or historical theories, systems, and disciplines to a wider audience at a preliminary level. We try to assume no background in these areas and attempt, in some imperfect way, to relate them to some practical level of interest. However, I feel some subjects are worth getting into in more detail, and I hope that those who view this site will be understanding of that occasional indulgence. Set theory is one of my favorite things; it is something I feel a genuine passion for, and so I would like to take a bit more time not only to discuss its historical development and practical motivation but to really try to introduce some of its structure and notation along the way (without Trin or Tom yelling at me; they haven’t posted in a while, so I’m trying to get away with it). I will try to break these posts up between more accessible and more detailed material for readers with varying levels of interest.

**Symbolic Logic: The Limitations of Syllogistic Reasoning**

So what is set theory and why should anyone care? What is its practical motivation, including its ultimate relationship to computers, as indicated in the previous themed post? And, for that matter, what is its connection to Boolean algebra?

Very briefly, we begin with Aristotle (384 BCE – 322 BCE) in ancient Greece, who was the first to treat logic formally. In his Prior and Posterior Analytics, Aristotle presents logic as an organization of proper rules for reasoning and making justified decisions. It is in this context that Aristotle introduces syllogistic reasoning, built upon two premises from which one conclusion is derived. These premises use quantifying words, that is, words such as “all”, “some”, or “none” that bind the meaning of the rest of the argument (the input or premise) to the meaning of the quantifying word, insofar as such words articulate a clear description of the nature of the argument. Thus, if

Premise 1: “All Zonks are Zucks” and,

Premise 2: “All Zucks are Zarks”, then it necessarily follows,

Conclusion: “All Zonks are Zarks”

Through syllogistic reasoning we see that the form of the argument is transitive: no matter what the words mean (or fail to mean) in the real world, the form of the argument applies universally to any values placed into the syllogism. This was a revolutionary discovery and formed the basis of logical analysis for over 2,000 years! More or less, ancient Greece’s contributions of Euclidean geometry and Aristotelian logic were not fundamentally reformed until the 19th century!
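The transitive form of the syllogism can be sketched in modern terms, where “All X are Y” becomes “X is a subset of Y”. This is a minimal illustration (the element names are invented for the example, not part of the original argument):

```python
# "All Zonks are Zucks, all Zucks are Zarks, therefore all Zonks are Zarks,"
# modeled with Python sets: "All X are Y" reads as "X is a subset of Y".
zonks = {"zonk1", "zonk2"}
zucks = zonks | {"zuck1"}   # every Zonk is also a Zuck
zarks = zucks | {"zark1"}   # every Zuck is also a Zark

premise1 = zonks <= zucks    # All Zonks are Zucks
premise2 = zucks <= zarks    # All Zucks are Zarks
conclusion = zonks <= zarks  # All Zonks are Zarks

# Whatever the elements "mean", if both premises hold the conclusion holds.
print(premise1, premise2, conclusion)  # True True True
```

The point carries over exactly as in the text: the validity depends only on the subset relation, not on what a Zonk, Zuck, or Zark is.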

And guess what? Just as surprisingly, the motivation was the same in both cases for their dramatic structural expansion. So, a quick tangent: what is non-Euclidean geometry? And for that matter, what is Euclidean geometry? Well, I am really glad you asked. Euclidean geometry was founded by the ancient Greek mathematician Euclid (3rd century BCE, exact dates unknown), who wrote the Elements, the work for which he is most famous, though he wrote many other things too. This work in particular has been the foundation of geometry for much of civilization, and it systematized a logical, proof-oriented approach to mathematics that served as the basis of future formalization. The primary difference between Euclidean and non-Euclidean geometry is the curvature of space. The lines and spatial objects of Euclidean geometry are envisioned against a flat background, in which the surrounding space has no properties beyond the length, width, and depth of the objects themselves. Non-Euclidean geometry, such as Riemannian geometry, allows the background space itself to be curved. As you may have guessed already, non-Euclidean geometry was a necessary condition for the mathematical language that generated Einstein’s theory of relativity. Since physical spacetime is curved, a Newtonian mechanical system built upon a Euclidean geometric field was an inaccurate mathematical description of the physics.

At the same time as mathematics was being revolutionized in the 19th century, logic was undergoing major changes thanks to Boole, De Morgan, Peirce, and a generation of logicians who came to realize the significant shortcomings of what Aristotelian logic was capable of validly modeling. This had been hinted at in previous centuries: Immanuel Kant desired to generate a transcendental logic (which he never did), and G.W.F. Hegel’s Science of Logic explicated a series of dialectical relations that attempted to unify nature in the idea of the Absolute. But in the end it was not the idealists but the empirical realists who formalized logic. At the time, a primary motivating factor was geometry: how can one logically describe the mathematics of the extension of bodies in an Aristotelian syllogism? Exactly. Thus, if logic is the laws of thought, and if mathematics cannot be grounded in logical principles, then there must be a shortcoming in the field of logic.

Symbolic logic is a collection of symbols and functions that formalize valid inference (argument). The other purpose of logic is axiomatization, that is, creating a model by which to represent the nature and structure of some type of phenomena. These two purposes are related but often have different emphases in application. For example, while quantified modal logic is built upon a foundation of valid inference, its primary motivation is to model aspects of world description bound to the conceptual notions of ‘possibility’ and ‘necessity’. On the other hand, classical first-order logic is so broad as to be the basic foundation of how valid inference works, but in this universality it is generally not a good model for applying to things by itself. For example, undergraduate philosophers are taught that symbolic logic (the propositional calculus) is a model for natural language (English, German, Farsi, etc.), but this is only partially true, insofar as the grammatical structure of a language can be modeled in terms of valid or invalid inference. However, the meaning of words, the morphology of language, and the underlying holistic conceptual patterns of language are far beyond the ability of first-order logic to model. Indeed, first-order logic cannot even handle in its axiomatization a sentence whose meaning depends on its own speaker, such as the self-referring “I like to go to the park”. When doing computation, this is an important difference to remember between the computation or logic of the code of grammar and the model structure used to represent the semantic patterns of language. They are related, but many paradoxes arise when a weak logic incapable of handling such complexity is used to model natural language.
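What “formalizing valid inference” amounts to in the propositional calculus can be shown concretely. Below is a small sketch (the function names are mine, not standard terminology) that checks whether an argument form is valid by brute-forcing its truth table, i.e. checking that the form comes out true under every assignment of truth values:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'a -> b' is false only when a is true and b is false."""
    return (not a) or b

def is_valid(form):
    """A form is valid (a tautology) if it is true under every truth assignment."""
    n = form.__code__.co_argcount
    return all(form(*vals) for vals in product([True, False], repeat=n))

# Modus ponens: ((p -> q) and p) -> q  -- a valid form.
modus_ponens = lambda p, q: implies(implies(p, q) and p, q)
print(is_valid(modus_ponens))        # True

# Affirming the consequent: ((p -> q) and q) -> p  -- an invalid form.
affirm_consequent = lambda p, q: implies(implies(p, q) and q, p)
print(is_valid(affirm_consequent))   # False
```

Notice that, as with the syllogism, validity is a property of the form alone; the check never asks what p or q mean.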

**Set Theory**

Now, set theory originally arose out of a logic for structuring quantity, namely number theory. If you recall from contemporary mathematics courses, you learn about the set of natural numbers N (0, 1, 2, 3, …) or the set of real numbers R (which includes all the rationals and irrationals), et cetera. The analysis of these sets helped give rise to the distinction between concepts such as countably infinite versus uncountably infinite sets of elements. At any rate, a set is a collection of elements. Although set theory was originally established for the study of the theory of number, it has been applied to all manner of inquiries, essentially anything that has elements, from words, to concepts, to sets of functions. It is an incredibly versatile and fundamental system of analytical tools with many intuitive and counter-intuitive notions.
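Since we will be introducing set-theoretic notation in later posts, a quick taste of the basic operations may help. This is a minimal sketch using finite sets (the particular sets chosen are invented for the example); the written notation ∪, ∩, ∈, and ⊆ corresponds line by line to the operators below:

```python
# Basic set-theoretic operations on finite sets.
evens = {0, 2, 4, 6, 8}
squares = {0, 1, 4, 9}

print(sorted(evens | squares))  # union:        [0, 1, 2, 4, 6, 8, 9]
print(sorted(evens & squares))  # intersection: [0, 4]
print(sorted(evens - squares))  # difference:   [2, 6, 8]
print(4 in evens)               # membership:   True
print({0, 4} <= evens)          # subset:       True
```

Infinite sets like N and R cannot be enumerated this way, of course, but the same operations and relations are what the axiomatic theory governs.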

Set theory arose in the 19th century and underwent major refinement in the 20th century, giving rise to axiomatic set theory. We could spend a few semesters discussing this topic. Suffice it to say, several mathematicians generated set theory in the 1870s and discovered a number of problems in its application, namely paradoxes such as “the set of all sets” (is that set inside or outside the set of all sets? If outside, it forms a higher set, which in turn requires another set, ad infinitum). To get around these issues, a system was slowly developed in the early 20th century called Zermelo–Fraenkel axiomatic set theory (ZF), later refined into Zermelo–Fraenkel set theory with the axiom of choice (ZFC), which includes the axiom of choice among its short list of axioms and axiom schemas. The point of these investigations was to provide the logical foundation for the field of mathematics, and they are recognized today as providing it, though this was quite controversial in the early 20th century.

It was in this spirit of the times that the mathematician and philosopher Alfred North Whitehead (1861–1947) and Bertrand Russell (1872–1970) constructed a complex system of logical proofs in the Principia Mathematica for the sake of grounding mathematical operations. It is nearly impossible to overstate the historical significance of this book, despite the fact that few contemporary philosophers seem to have ever read it. The notation is outdated by today’s standards, but it is a touchstone in the development of English-speaking philosophy, and it inspired figures from Kurt Gödel and Alonzo Church to W.V.O. Quine (Whitehead was his dissertation advisor).