Chapter 5 examines conceptual halos and how they differ among cultures. Using linguistic evidence, such as how different languages structure questions about siblings, Hofstadter makes the case that different cultures organize concepts differently. For example, in English one would commonly ask, “Do you have any brothers or sisters?”, dividing the concept of sibling by gender. In Indonesian, however, it is more common to ask, “Do you have any older siblings or younger siblings?”, giving the age relation precedence over gender. Mandarin Chinese includes both subdivisions and asks, “Do you have any older brothers, younger brothers, older sisters, or younger sisters?” These different structurings of the concept “sibling” show up in other words as well. Hofstadter describes how Italian subdivides the concept “in fact” while we tend to simplify it. What seems simple to us may have many shades of meaning in another language, and conversely our complex concepts may be simplified in other languages.

What does all of this linguistic evidence suggest? It suggests that conceptual spheres are not structured in a completely universal way across all humans. Hofstadter uses these semantic halos, along with lexical priming and substitution errors, to describe the overlapping network of concepts. In any AI research, an understanding of how these concepts blur will be essential to creating human-like intelligence.
Thursday, October 29, 2009
Chapter 4
The “ELIZA effect” described in the preface to chapter 4 can be seen in the description of BACON in chapter 4 itself. Researchers have exaggerated the program’s ability to make scientific discoveries: the process of sorting through different types of information and determining what is relevant (arguably the most difficult part of making a discovery) is already done for BACON.
In this chapter Hofstadter describes some of the problems of representation. How is data determined to be relevant in a representation? He tells us that low-level information will be quite irrelevant at the highest representational level. My question is: is the core of the conceptual sphere enough to represent the entire concept? The second problem of representation is organization. How do we organize information into a coherent structure? This problem seems intimately entwined with the question of what it is that perceives coherence. If we know something about the structure of whatever is doing the perceiving, we’ll know what sort of structure the data needs to be in. Both questions are hard to answer.
True AI will require a machine and program able to sort out the relevant data for themselves, to know that the colors of the flowers on Earth aren’t as relevant to calculating its gravitational pull as its mass is. Or a program would have to know that the delicious food bacon is irrelevant to its namesake, the man credited with the scientific method.
Wednesday, October 21, 2009
Preface to Chapter 4
Hofstadter describes in the preface to chapter 4 the ELIZA effect. Born out of a combination of anthropomorphism, imprecise language, and “hype,” it leads humans to see themselves in computer programs. Instead of understanding the brittleness of the AI, we tend to assume it is strong AI and that a computer that can generate a novel has sophistication similar to the human mind. We’re doing something computers barely can: creating an analogy by ascribing characteristics of our own mental processes to those metal bags of tricks. ELIZA doesn’t understand the depth of your sadness or your need to hear someone say, “tell me more about that” (but then again, a human might not, either). “She” is just a reflection of your statements and some human-crafted responses, as vacant as your reflection in the mirror.
Hofstadter tells us that working within microdomains helps counteract the grandiose claims made by AI researchers. It seems to me that these days most AI researchers have scaled back such claims, but the media has not; you still see headlines like “Scientists Create Robots That Can Replace Parents” and so forth. As he wrote in “The Architecture of Jumbo,” it is the micro-levels of perception meeting the semantic levels that are the “core mystery of intelligence,” as opposed to the macro-levels that are easily observed. Within the microdomain Hofstadter is able to strip off the glamour and get to the real problem of intelligence and analogy-making. Unlike French, creator of the program able to “write” a trashy novel, Hofstadter lets the program make more of the real decisions. However, I remember that Hofstadter did have to program some human knowledge into Jumbo, such as preferred letter combinations.
Wednesday, October 7, 2009
Numbo: Just Like Cognitive Science
In the section “Numbo: A Study in Cognition and Recognition,” the architecture of the program Numbo, designed to solve crypto-like arithmetic problems, models everyone’s favorite computing device: humans. Numbo has a store of knowledge, a pnet of arithmetic facts, that functions similarly to human declarative knowledge. Hofstadter compares Numbo to its human ideal: combinations are not strongly goal-driven, ideas are often abandoned before being fully explored, and the obvious things are noticed immediately. However, Numbo can’t do everything these purposeless, undetermined ideal creatures can: it pays no attention to the order of bricks, has an “impoverished” knowledge base, and doesn’t whine about the problem being too hard.
Numbo is a prime example of cognitive science: the creation of a computational model to better understand human reasoning. Even if Numbo does not perform in a strictly human way, it still allows us to create models of how a human might approach these problems. Hofstadter reports that many humans aren’t able to describe their own ways of solving. Elements of competition and association may all be present in human mathematical cognition. By programming these we can develop theories about how they work in the brain, and from these theories we can design experiments to test whether they are supported in actual human functioning. Unlike other computer models, Numbo doesn’t keep a record of objects and does not concentrate on one sub-goal at a time. Numbo is as fluid and chaotic as evolution, and from its primordial swamp solutions arise.
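The flavor of that fluid, non-goal-driven search can be sketched in a few lines. The toy loop below is my own illustration (the function name and the “promising” heuristic are hypothetical), not Numbo’s actual codelet-and-pnet architecture: it combines random pairs of values with random operators and keeps an intermediate result only when it lands closer to the target, otherwise abandoning that line of attack, much as Hofstadter describes ideas being dropped before they are fully explored.

```python
import random

def stochastic_numbo(bricks, target, steps=2000, seed=None):
    """Toy Numbo-flavored search: combine random pairs of values with a
    random operator, keep the result only if it looks 'promising' (closer
    to the target), otherwise abandon and restart. Uses +, -, * only,
    for brevity. NOT Numbo's real architecture."""
    rng = random.Random(seed)
    ops = [('+', lambda a, b: a + b),
           ('-', lambda a, b: a - b),
           ('*', lambda a, b: a * b)]
    for _ in range(steps):
        # fresh pool of (value, expression) pairs each attempt
        pool = [(b, str(b)) for b in bricks]
        while len(pool) >= 2:
            (a, ea), (b, eb) = rng.sample(pool, 2)
            sym, fn = rng.choice(ops)
            v, e = fn(a, b), f"({ea}{sym}{eb})"
            if v == target:
                return e
            # crude 'salience': keep the new value only if it moves closer
            if abs(v - target) < min(abs(a - target), abs(b - target)):
                pool.remove((a, ea))
                pool.remove((b, eb))
                pool.append((v, e))
            else:
                break  # abandon this line of attack, start over
    return None
```

Run on the book’s bricks 8, 10, 7 with target 87, it stumbles onto something like `(7+(8*10))` quickly, even though nothing in the loop is “trying” to build that expression.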
Saturday, October 3, 2009
Nimble Numbo
Nimble, Numbo: numbers mumble [like letters in Jumble] their little hints to bursting brains, “try me, I’ll take you to the target” and cross-ey'd we decrypt, accordingly.
Hofstadter describes the Numbo program in "Numbo: A Study in Cognition and Recognition." Here's an analogy: Jumbo is to letters as Numbo is to ______. [The answer is "numbers."]
Hofstadter describes how the context of a specific target number can change the salience of a brick number (the numbers used, along with the four basic arithmetic operators, to reach the target). The 8 and 10 become more salient when the target is 87, as does the 7. According to DH, the rapidity of solutions is determined by two types of information: a priori knowledge and syntax. Types of knowledge come into play, but as I found when trying this type of crypto problem, some solutions come more intuitively than others. Specifically, when no answer is immediately obvious, my first searches are more likely to involve addition or subtraction.
Right now it seems pretty likely that we will be programming a Numbo-like program to solve crypto problems in a human-like way. If this program were modeling my brain, I'd probably have it try some addition and subtraction combinations after assessing the approximate size of the numbers (a type of knowledge Hofstadter describes as used in playing Numble). Some small arithmetic would be rote, some would be procedural knowledge. I imagine that in deciding which would be rote and which would not, I would use my own rote knowledge: 12 x 12 is rote, 12 x 13 is not, 9 x 5 is, etc.
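For contrast with these human heuristics, the mechanical baseline is exhaustive search. The sketch below is my own illustration (the name `solve` is hypothetical, and this is nothing like Numbo's stochastic architecture): it tries every pairing of bricks with the four operators, allowing division only when it comes out even, and returns the first expression that reaches the target.

```python
from itertools import permutations

def solve(bricks, target):
    """Exhaustive search for an arithmetic expression over the bricks
    (not necessarily all of them) that evaluates to the target."""
    if target in bricks:
        return str(target)

    def helper(vals):
        # vals: list of (value, expression-string) pairs
        for (a, ea), (b, eb) in permutations(vals, 2):
            rest = list(vals)
            rest.remove((a, ea))
            rest.remove((b, eb))
            candidates = [(a + b, f"({ea}+{eb})"),
                          (a - b, f"({ea}-{eb})"),
                          (a * b, f"({ea}*{eb})")]
            if b != 0 and a % b == 0:  # only even division
                candidates.append((a // b, f"({ea}/{eb})"))
            for v, e in candidates:
                if v == target:
                    return e
                found = helper(rest + [(v, e)])
                if found:
                    return found
        return None

    return helper([(b, str(b)) for b in bricks])
```

Unlike a human (or Numbo), this tries addition, subtraction, multiplication, and division with equal enthusiasm and never finds anything "obvious"; the gap between this loop and how I actually attack a Numble is exactly the gap the human-like program would need to close.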
