Saturday, September 11, 2010

Tables, Swans, Dogs and Other Abstract Stuff

Some friends initiated an extensive discussion yesterday on the subject of what kinds of concepts can be considered "self-evident". The subject was closely related to the so-called "problem of induction" in philosophy, a very old question going back 2500 years, concerning how you know you've developed (ie, abstracted) a valid concept, theory or hypothesis. The question is important in relation to complex scientific theories such as quantum mechanics or evolution, but it applies to any new idea.

My friends were going round on different definitions of "self-evident", following the lead of many past philosophical discussions on this topic (especially as found in Ayn Rand's "Introduction to Objectivist Epistemology"), and simple concepts like "table" were offered to illustrate "self-evident" concepts.

Here is how I weighed in on the topic:

Put aside definitions for a moment. Concepts are a relationship between a mind and existence, obtained through a sequence of perceptions and cognitions. One, two, three, many -- but think of each instance as a short movie, not a snapshot. Sometimes you have to go a lot higher than "three", and usually you have to define the sequence yourself.

No one -- I repeat, no one -- can form the concept of "table" by being shown one for the first time as a static picture. At that point it is simply an entity with a specific concrete shape. There is not enough information to form a concept. Only when you start to see it being used by people for specific purposes (to set things down on), do you start to get a glimmering of the idea "table". For part of the concept is not just the purely perceptual of the moment (flat top, legs, wood, 3 feet high), but how it is used -- by people.

Even then you still don't have enough information to form the concept "table". How a table is used is essential to the concept, but not sufficient. We have here the black swan fallacy (ie, if you only abstract the concept "swan" from white swans, do you have a valid concept that can embrace the rare black swan?): you need more than one concrete example, and you need a sufficient number of concretes, of the right kind, if you are to form any concept. SO: we need at least a second table, different from the first (maybe made of metal instead of wood, red instead of brown, round instead of square), and we need to see it used as a table. It has a function: to set things on.


So you can see, to form the concept "table" we need not just multiple discrete perceptions of different tables, we also need continuous sequences of perceptions for each table we see: how they are used. We have to watch people approaching different tables, setting things on tables, sitting at tables, doing things at tables (like eating) -- though with enough perceptual experience we might reach the conclusion that "sitting at" is a non-essential characteristic for certain types of tables. But for the sub-concept of "dinner tables", we conclude it is.

This is for a pretty simple concept. But even for this, you can see that the mind has to retain a lot of memories and integrate all those cognitions before a concept can be formed. Some of this integration can be semi-automatic: there are basic automatic capacities of any healthy brain that will associate similarities and differences between previous observations and cognitions. The mere act of recalling any specific cognition from any point in our lives depends on this. You can call it an "automatic filing system", but this is a woefully inadequate description of a much more complex and sophisticated process. (The entire analogy of concepts as "file folders" for an unlimited number of concrete observations should not be pushed beyond mere analogy; it simply misses the most important points of concept formation.)

But the process of forming the primitive-level concept "table" (I shall avoid saying "first-level concept" -- what's first is debatable) isn't entirely automatic, even if there are automatic cognitive processes involved. It can't be: the very nature of concepts requires volition. This necessitates some conscious awareness of the perceptions necessary to form the concept, of the relationship in time of these perceptions as discrete instances or cases (eg, today I saw a white swan, yesterday I saw a black swan), and of the temporal sequence within each case (the swan flying, walking, swimming, rather than just a static picture). For tables, you need not just observations of individual tables in an empty cafeteria or laboratory, but of how they are used by people, in a certain context such as eating, working, etc.

Is this process "self-evident"? It's easy for any normal human (though maybe not for a person with severe brain defects, such as a missing right hemisphere), but does that make it self-evident? Again, we're dealing with a concept. "Self-evident" is a concept that implies a certain context of its own: to whom and for what.

You may recall Ayn Rand discussing the fact that all values imply the question "of value to whom and for what?" But go back to another identification she made: all facts imply a value proposition attached to them. Facts, being conceptual identifications of reality that are true, are of use only to human beings -- and therefore, in principle, they possess value. It is by the identification and grasp of facts that people grasp reality, and by grasping reality we acquire the means to manipulate it to promote our lives and happiness -- to survive.

Likewise, all concepts (not just facts) imply a value proposition -- to use a concept means it has a value to someone, for some purpose.

But here's an implication Ayn Rand didn't address: the value proposition implicit in every concept doesn't exist in isolation, as a mere statement of the value of the concept to people in general, or even just to an individual person. There is also a value proposition inherent in the very process of concept formation itself.

For example, the concept "table" critically depends on the fact that it is used by people for their purposes. Any purpose implies a value. But take away the purpose of setting things down on it and eating or working at it, and do you have a table?  No. You simply can't form the concept "table" at all. All you can form, if you look at enough different tables (looking at the same table rolling off an assembly line doesn't qualify) is the concept "flat surface with legs". Without seeing it put to some human purpose, you don't have enough information to form the concept we call "table". (If you see just one table without seeing it used, all you know is a shape. If you see a million identical tables, all you know is a million identical shapes.)

All concepts, being relations of a human mind to existence, implicitly have "value", but I will assert the hypothesis (without proof) that all concepts require an understanding of their "value" in the very process of their formation: that is, part of grasping a concept is grasping the human purpose to which it is put. Ie, how it's used. For whom and for what.

I've put my assertion categorically simply because I can't at this moment think of a single example of a concept that doesn't have human purpose as essential to its formation. For example, "number". Or "lamp". Or "map". I'm open to delimiting my assertion, but it is not debatable that most concepts require a grasp of the human purpose for which they are used before they can be understood. I mean this especially for very abstract concepts.

Oddly, I've been consciously aware of this since I was a child. It always drove me nuts that textbooks frequently didn't state the purpose of the concepts they propounded, because this made it much harder for me to grasp those concepts. So I was frequently actively seeking the purpose, and my textbooks are riddled with margin notes like "WHAT'S THE POINT OF THIS???" But once I got the purpose -- blam. I got the concept. I could see how it related to facts and ideas and applications it was connected to. This applied to physics, engineering and mathematics especially, but you could just as well apply it to economics, history, literature, or any other field.

I will make an even stronger hypothesis: any concept formed without reference to a human purpose is an invalid concept. For example, let's say we are forming the concept "table" for the first time, and are shown 1000 pictures of flat-topped shapes with three or four legs, but different colors and other characteristics. Just tables in isolation, out in a forest, not a single other human artifact around. What conclusion do we form? That there are 1000 things with flat tops and 3 or 4 legs. We don't even know how big they are. They're just -- flat-topped and supported by legs.

Is that a concept? No. It's simply a conjunction of two common attributes. A concept is not just a collection of attributes that you can check off on a list to determine if some new concrete is an example of it. As AR discussed in ITOE, a concept is an abstraction (or mental integration) formed by isolating a class of existents according to a common characteristic (their similarity) while omitting the differences and particulars such as the range or degree or magnitude of the common attribute (the "measurements" in AR's terminology). We can form the concept "blue" without regard to "how" blue something is, or whether it's a bird or the sky or a car, or how big the car is. Likewise, we form the concept "number" as a quantity without regard to which quantity it is -- 1, or 2, or 3.7, etc.

But are flat-topped things with 3 or 4 legs a concept? Okay, we're omitting measurements of size and shape. We are describing a class of entities. But is that a concept? Okay, it needs a symbolic referent to become a concept (a word). So we decide to call all flat-topped, 3- and 4-legged things "glibfritzes". Is that a concept?

Imagine you're walking down the street and you recognize a flat-topped, 3-legged object. It's 12 feet tall, and the flat surface is on the bottom with the legs sticking up in the air, like an upside-down table. You say to your friend, "Look, there's a glibfritz!" Then your friend remarks, "Hey, look over there! There's a flat top with 4 posts holding it up. Of course, each end of the flat top is attached to a hillside, and it's much bigger than your glibfritz, but I think that's a glibfritz, too." (It's a bridge.)

Then the first guy says, "but your object isn't really flat, you know. It's got side-rails to keep people from falling off, and curbs up to a sidewalk, and pot-holes." And the second guy says, "but your flat-top is really a flat bottom." And the first guy says, "yeah, but we formed this concept without any reference to up or down. We omitted that measurement."

And so on. Then a third guy comes along and asks, "what do you guys want to do with these concretes? What does a bridge have in common with a gazebo that was blown over?" Both stare at him blankly. The second guy says, "Well, they both have flat-tops and 3 or 4 legs." The first guy says, "but yours isn't flat-topped".

What's the error here? Well, for one, a collection of attributes does not make a unique common attribute. It's just a collection of attributes. What is "unique"? Are "flat top" and "3 or 4 legs" together even an attribute?? Not in the sense required by a concept: there must be some relation between the separate attributes -- not just the fact that they are in spatial proximity, but something more particular than that.

We could specify a purely physical relationship, like "the legs are normal to the surface of the flattop", but that doesn't resolve it. We've just shifted the problem to another shape.

The key here is that for a concept you have to omit the measurements of just one thing: the common attribute. What are the measurements unique to a disparate collection of attributes? There aren't any.

But suppose we add a human purpose. We say glibfritzes are "flat-topped things of 3 or 4 legs, oriented with the top side up, for humans to set things on while they sit and eat." Putting aside the possibility of having dinner in the median of a bridge, now we know the approximate scale of the thing -- it's not a bridge -- and it doesn't have to be perfectly flat, as long as you can set things on it so they don't fall off, etc. The human purpose provides a crucial differentia as well as a uniquely common similarity for which we can omit measurements.

Try something more abstract: the mathematical relationship for a cube. 8 vertices, 6 faces, all edges of equal length. Is this a concept? Of course. We omit measurements of the distances between vertices beyond saying they are equal (or approximately equal), and of the stuff of which the cube is made, and isolate the relationship itself. But unlike the glibfritz, this is a valid concept because there is also a genuine human purpose behind the identification: to describe a vast number of things in reality that are cubic in nature.

I might remark that an essential attribute of the purpose implicit in any concept is the context in which it is formed and applied. For instance, Newton's theory of gravity has been criticized as being "wrong" and inaccurate. But in what observational context was it formed? To what accuracy is it true? The context didn't assume certain kinds of facts, like Einstein's alleged space-time "warping". We can debate the validity of Einstein's theory another time (though there are effects not explained by Newton's theory), but in his observational context, Newton's theory is a valid concept, even if it doesn't predict all gravitational phenomena accurately to an infinite number of decimal places. As discussed in ITOE, later knowledge doesn't invalidate earlier concepts; it simply delimits their context of application -- and when relativistic effects are incorporated, that context is the 10th decimal place.

So we have concept formation -- any concept formation -- as requiring discrete perceptions, continuous perceptions (sequences in motion), memories, sensations (not discussed), automatic cognitive functioning, conscious volition to grasp similarities and differences in guiding the selection of concretes for the process of abstraction (though this can be easy for simple concepts), and some idea of the human purpose to which this identification is to be put (though we often grasp this implicitly and subconsciously). And to grasp a concept, all these concretes and retained knowledge must be sufficient in quantity and type to isolate and associate the similarities of the concretes from the non-essential differences -- you simply can't form a concept without the right concretes (it's not just how many concretes you have).

All that goes into grasping "table". Is it self-evident? Well, it's an easy concept to "get", as I said, but there's nothing "self-evident" about it. If "self-evident concept" means anything, it means a concept that is a completely automatic product of cognition -- formed without the intervention of volition to grasp the great number of perceptions, sensations, memories and human purposes that anyone ordinarily must select to do concept formation.

I could go on about other concepts. Axioms, for instance. Are basic axioms, such as "existence exists", self-evident? If you buy my argument, I think you can see that simply because something is an "axiom" and implicit in everything, that doesn't qualify it as being "self-evident". If it were that obvious, there would be no debate. All humans would automatically grasp it the same way we all grasp the presence of a 14,000-foot mountain when we stand in front of it. That would have put an end to differing philosophies 2500 years ago.

I would define "self-evident" in functional terms like that "mountain" example: a concept is only "self-evident" if no human being of normal experience and brain functioning could disagree about its application to new concretes.

There are certainly some simple concepts for which this is true. For instance, certain words of grammar, like the word "the", which always identifies a particular physical or cognitive entity that follows -- ie, outward and inward objects of awareness. (I know, I know -- there are philosophers and career politicians who make careers out of parsing the meaning of words like "is"; I'm talking about honest disagreement.)

So much for "self-evident" concepts. But none of this even begins to address non-evident concepts, which bear on the more complicated "problem of induction" -- which involves not just concept formation, but the validation of formed concepts. How do you know you've formed a valid concept? Well, for the so-called "self-evident" concepts, like "table", the assertion is that you just "know" them. This is the notion that if you see enough tables, you just form the concept "table" -- somehow. Until some wiseguy says a black swan is joining you for dinner, and then you have a big debate about whether it's really a table you're sitting at or a barnyard.

But how about quantum theory? Or Newton's theory of gravity? Etc. How do you know that a tentative concept (like Schroedinger's equation) is valid?

The entire question of validation relates directly to whether you've got enough concretes of the right kind to say that you can form a concept. Again, by "concretes" I mean perceptions, sensations, cognitions (such as other validated concepts), including the purposes for which your proto-concept is applied.

Because it is only a proto-concept until it's validated. Unlike the simple "self-evident" concepts, it's practically impossible to form a concept like Schroedinger's equation or Newton's gravitational theory or the Theory of Evolution while holding all the relevant facts in your head at once. Just impossible. So you hold some subset in your head and make a tentative generalization -- but only tentative, because the very nature of my definition of a "complex" concept is that it's too complex to hold all the facts at once when forming the concept.

So you make a preliminary generalization (an induction) and then have to validate that against all the facts. How do you do that? This is the problem of induction.

My take on the solution is that it has three parts: (1) validation of the integration process (the reasoning); (2) validation of the concretes, to make sure there is a sufficient number of the right kind; and (3) definition of the context of validity for the generalization. Once again, you can't form "table" by watching an assembly line at a furniture factory (you can't form "swan" from a single white swan), and you can't know what a table is without knowing the context in which that concept is used.

For the so-called "self-evident" concepts, you can get away with validating relatively few concretes ("few" still meaning many tables, many experiences of their different attributes while watching people use tables, etc). But when you get to higher-level concepts, the quantity of concretes becomes simply overwhelming. You need a systematic and uniquely conceptual approach to the validation. It isn't just a checklist to see that you've done X, Y and Z and applied the right logic.

My partial answer to this is that your system has to apply a principle of sufficiency, to know that you've got all the right types of concretes from which you've generalized the concept you're validating. Part of this principle is simple: in a sense, a concept is the collection of types when grasped by a particular process of cognition, for particular purposes, and this defines the context of validity. For instance, you've got to have enough different types of tables or swans to form "table" or "swan" as concepts, and those types have to exhaustively represent all tables or swans.

But maybe you don't have any grasp of the concept "DNA". You're a caveman. You observe a bunch of mangy curs and form the concept "dog". Maybe there's a deformed jackal or two in this pack, but your context (whether you realize it or not) is four legs, teeth, 50 - 75 pounds, mean, dangerous, but tamable. Your version of "dog". Maybe later DNA analysis by your descendants shows that some of those creatures you tamed were jackals, but your functional concept (as a caveman) was still valid, in your context of knowledge and purpose: creatures you can domesticate for herding, hunting, protection, keeping warm, affection, etc. You don't have to validate too much here, in the way of the types of concretes, to know that you have a valid context for your concept. But it is a valid concept, even if it isn't the same concept as our modern idea of "dog". They just share the same word and a few other similarities.

It gets much tougher for things like "quantum mechanics", which describes some facts of reality (indeterminism of certain physical phenomena according to a mathematical formula) while having a very inadequate base of concretes for validation, along with some totally bogus philosophical interpretations. But I contend: the basic outline of validation I've propounded is still correct.

One way I get insight into this stuff is that I'm an engineer, not a scientist, and my brand of engineering is very abstract and creative by its nature -- I invent complex circuits on an almost daily basis. Every design is a sort of concept dedicated to a purpose: I construct complex relationships among entities such as transistors and resistors and capacitors, according to well-defined principles, so that my circuits behave in a certain way to achieve a human purpose (amplifiers, data converters, complex processing).

In essence, I put forward hypotheses: I make a schematic for a proto-concept that I want to do something. Does it work? Does it satisfy all the criteria I've been given as my objective? I'm drawing on a large number of concretes and principles when I build a schematic, guided by 30 years of experience, but there are always problems that arise because I can't hold all the facts in my head at once. So my first proto-concept might be close to working as I intended, but normally it fails validation in some respect, sometimes catastrophically.

The concretes I'm mentally juggling can be overwhelming -- well beyond the 7 or 8 things the human brain can hold at once. Hence the so-called crow epistemology. (A flock of crows waits for four hunters to come out of the woods one at a time. But after the third hunter leaves, the crows conclude "many" have left, so they emerge from hiding, and -- blam. Their brains can't hold more than three things at once.)

So, engineers being what they are, we've developed procedures for reviewing the facts, assumptions, principles and methods, and systems of computerized checking to handle the myriad details. Believe me, this is the only way you can validate a new microchip that might have a billion transistors in unimaginably complex relationships. The only things that save you are the principles you know and a clear statement of your objectives: your amplifier's gain, or your microprocessor's instruction set, or your power supply's output voltage. Based on these, you can automate much of the checking and form generalized conclusions as to whether your chip will work under all the conditions for which it is intended to be used. I don't have to prove that a VCR works as a gatling cannon, for instance. If it happens to eject tapes too energetically, that's just an ancillary function. All I have to validate is my "specifications".

But even this isn't enough. This just tells you the design database is pretty good before you ship it out to be fabricated. What you really need is a physical device.

So what we do is define "test vectors": a finite number of concrete stimuli that exercise my hypothesis (the circuit schematic) to see if it works according to the objectives. We do this before we make the device, in computer simulation, but that's not good enough -- simulation models make approximations. The only real proof is the real world. So we eventually apply these vectors to the physical device, too. The butterflies in your stomach only really go away when you see a physical device doing what it's supposed to do. That's the acid test.

The trick is to define these "test vectors" so that they exhaustively exercise the circuit for all objectives, representative of the unlimited "real-world" stimuli that the chip is likely to encounter. If the test vectors aren't properly selected, I might miss a condition for which my hypothesis wasn't correctly constructed. In the software world, this is called a "bug".

So you need a principle for defining and knowing that you've correctly identified a complete set of valid test vectors. They exist in relation to your objectives (your context of purpose), but they are concretes representative of abstractions in relation to those objectives -- ie, each test vector is a concrete standing in the place of a more general abstraction that represents an almost unlimited number of real-world stimuli to my circuit (the "inputs").

By analogy, a proper test vector is rather like a good work of art, which is a concrete that stands for the abstraction of the theme. For instance, Michelangelo's "David" is a concrete standing for the abstraction "heroic strength". Or Rand's novel Atlas Shrugged is a concrete standing for the abstraction "the role of the mind in man's existence". Much more mundane, a test vector for a microchip stands for the abstraction "all the inputs of a certain type which exercise the chip to achieve a certain function I've designed it to do".

Say I want a microprocessor to multiply two 128-bit binary numbers. Each number can take on 2^128 (about 340 x 10^36) possible values, and there are 2^256 (about 115.792 x 10^75) possible multiplications. You can't possibly test all those combinations one at a time. There aren't enough billions of years left in the universe, even if you revved up the chip to operate a trillion times faster. The multiplier circuit for this chip can have 100,000 transistors, and if just one of them is bad, you're screwed. You may remember a famous case from the mid-1990s -- Intel had a floating-point error in the divider of their new Pentium chip. It only showed up as a small error several decimal places down, in something like one out of every nine billion divisions. But that was enough to cause serious errors in some computer programs.
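To put numbers on that, here's a quick back-of-the-envelope calculation (a Python sketch I'm adding purely for illustration -- the arithmetic is the point, not the code):

    # Why exhaustive testing of a 128-bit multiplier is physically impossible.
    values_per_operand = 2 ** 128                    # ~3.40 x 10^38 values
    total_multiplications = values_per_operand ** 2  # 2^256, ~1.16 x 10^77

    tests_per_second = 10 ** 12        # a generous trillion tests per second
    seconds_per_year = 3.156 * 10 ** 7

    years_needed = total_multiplications / tests_per_second / seconds_per_year
    print(f"{total_multiplications:.3e} multiplications to check")  # ~1.158e+77
    print(f"{years_needed:.3e} years at a trillion tests/sec")      # ~3.7e+57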

So how do you validate that the multiplier works for all binary combinations? Partly by design: you design the multiplier according to well-defined principles of logic so that you can say in principle that it must work for all inputs. But what do you do in a production environment? A chip testing machine can't answer questions about principles--but you have to.

So you define a finite number of test vector inputs to your multiplier that are guaranteed to exercise every one of the 100,000 transistors in the multiplier circuit. Any single test vector might not exercise all the transistors, but you form a set of test vectors that, together, do. It's sort of like a Venn diagram: there may be overlap among the vectors, but taken as a whole, the set covers every transistor.
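As a toy illustration of that idea (completely made-up data -- a real flow uses fault simulators and far more than ten transistors), you can greedily pick vectors until the union of what they exercise covers everything:

    # Toy sketch of coverage-driven vector selection (hypothetical data).
    # Each candidate vector exercises some subset of the transistors; keep
    # picking the vector that covers the most still-unchecked transistors.
    all_transistors = set(range(10))    # stand-in for 100,000
    candidates = {
        "v1": {0, 1, 2, 3},
        "v2": {2, 3, 4, 5},
        "v3": {5, 6, 7},
        "v4": {7, 8, 9},
        "v5": {0, 4, 8},
    }

    uncovered = set(all_transistors)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda v: len(candidates[v] & uncovered))
        if not candidates[best] & uncovered:
            raise RuntimeError("some transistors can never be exercised")
        chosen.append(best)
        uncovered -= candidates[best]

    print("vector set:", chosen)   # overlapping vectors, full coverage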

The full, yet very finite set of test vectors must represent ALL the possible valid inputs to my chip, in the context for which it is intended to be used. I don't have to claim the chip works for other input combinations. But if I exercise my chip for those delimited number of exhaustive test vectors, I can say my chip is validated and my hypothesis is proved. The chip works. This is how chips are verified during production.

Note that there may be many different possible sets of valid test vectors. The proof lies in the definition of the entire set, not in any particular set.

I might add, there are other factors, like the context of validity of the design principles I'm using. For instance, simple circuit laws like Ohm's Law break down in many applications where the finite speed of an electromagnetic wave becomes a factor. Again, I define my context of validation, just like the caveman does with his dog.

But what I don't ever do is generate test vectors randomly on the premise that enough of them will verify that my chip works. Years ago, as a student in college, I had a night job that required me to test computer boards on this principle. A test machine compared two computer boards, one known to be good. Both boards were driven with identical sequences of random test vectors, and the outputs were compared. When they differed, that told us the "board under test" was bad. When the outputs of both boards were the same, the board under test was allegedly "good".

Not. This testing principle only worked for really simple boards -- and sometimes not even then. For a board of even moderate complexity (a few thousand logic elements), the random testing approach often failed to detect faults. Too many permutations. You've simply got to have some kind of principle behind the definition of the test vectors, to keep them finite (you've got to test in a reasonable amount of time), and the testing has got to be exhaustive to guarantee you aren't shipping a bad CPU or other part to someone.
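A little arithmetic shows why (a simplified model, with numbers I've invented for illustration): if a fault is exposed by only one input pattern out of 2^32, even a million random vectors will almost certainly miss it.

    # Simplified model: a fault exposed by exactly one pattern out of 2^32.
    from math import exp

    patterns = 2 ** 32
    random_vectors = 1_000_000        # a long night of testing

    # Probability that every one of the random vectors misses the bad pattern:
    p_miss = (1 - 1 / patterns) ** random_vectors
    print(f"chance the fault slips through: {p_miss:.6f}")    # ~0.999767

    # Same thing via the standard approximation (1 - p)^N ~ exp(-N*p):
    print(f"approximation: {exp(-random_vectors / patterns):.6f}")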

The same applies to a complex scientific theory: it's not enough to just accumulate a ton of experimental evidence. You've got to be able to make some general statements about the nature of the experiments, which demonstrate that they exhaustively test the propositions of the theory in relation to reality. Each experiment is a concrete, but every experiment must stand for a conceptual class of possible experiments (if you omit the "measurements" of the measurements, so to speak), so that a finite number of experiments can test the totality of all possible experiments or conditions which the theory is intended to describe. At that point, you can say, "my experiments describe the entire context assumed for the validity of this theory."
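Here's a miniature version of that "concrete standing for a class" idea, back in engineering terms (my own toy example, not a real verification flow): instead of testing every input to a small adder, you partition the inputs into conceptual classes and test one representative of each against an independent reference.

    # Toy equivalence-class testing of an 8-bit adder (illustrative only).
    def adder_8bit(a: int, b: int) -> int:
        """The 'device under test': add two 8-bit values, wrapping on overflow."""
        return (a + b) & 0xFF

    # One concrete representative per conceptual class of inputs:
    equivalence_classes = {
        "both zero":           (0, 0),
        "identity (a + 0)":    (137, 0),
        "mid-range, no carry": (10, 20),
        "carry propagation":   (0xFF, 1),     # ripples through every bit
        "both maximal":        (0xFF, 0xFF),
    }

    for name, (a, b) in equivalence_classes.items():
        expected = (a + b) % 256              # independent reference model
        assert adder_8bit(a, b) == expected, name
        print(f"{name}: OK")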

So going back to the problem of induction, my suggestion here is that the process I've described is somewhere between analogy and the actual solution to the problem of induction. A circuit is a sort of concept, and its design is a complex product of abstraction that bears similarities to a hypothesis, only, instead of describing reality, it does something in reality. It has attributes, and a specific nature, and a unity of purpose.

So by analogy, I suggest that the approach to validating a circuit can be instructional: validation of complex concepts and hypotheses and propositions requires a similar approach. This hypothesis is not self-evident, but it is probably true.

The main thing I haven't yet addressed with this theory is the specific principles by which you know that your conceptual "test vectors" have truly exhaustively validated the full context of your theory.  I'll leave that for another discussion.
