Hofstadter describes the Eliza effect in the preface to chapter 4. Born of a combination of anthropomorphism, imprecise language, and "hype," humans see themselves in computer programs. Instead of recognizing the brittleness of the AI, we tend to assume it is strong AI, and that a computer that can generate a novel has sophistication similar to the human mind. We're doing something computers barely can: creating an analogy by ascribing characteristics of our own mental processes to those metal bags of tricks. Eliza doesn't understand the depth of your sadness or your need to hear someone say, "tell me more about that" (but then again, a human might not, either). "She" is just a reflection of your own statements plus some human-crafted responses, as vacant as your reflection in the mirror.
Hofstadter tells us that working within microdomains helps counteract the grandiose claims made by AI researchers. It seems to me that these days most AI researchers have scaled back such claims, but the media has not; you still see headlines like "Scientists Create Robots That Can Replace Parents" and so forth. As he wrote in "The Architecture of Jumbo," it is the micro-levels of perception meeting the semantic levels that are the "core mystery of intelligence," as opposed to the macro-levels that are easily observed. Within the microdomain, Hofstadter is able to strip off the glamour and get at the real problem of intelligence and analogy-making. Unlike French, creator of the program able to "write" a trashy novel, Hofstadter lets the program make more of the real decisions. Still, I remember that Hofstadter did have to build some human knowledge into Jumbo, such as certain preferred letter combinations.
