I got my first computer as a teenager, in 1979. I had to drive to Los Angeles to find a shop that had "personal computers", as it was a novel concept at the time. What I picked up was an Ohio Scientific Challenger 1P, an 8-bit computer that used the same processor as the Apple I – the 6502. You used a television for output of 25 by 25 characters. It didn't come with any software, there was none available, and of course there was no internet. It only ran BASIC programs. The only way to store the programs you wrote was to hook up a cassette tape drive.

Ohio Scientific Challenger 1P

I taught myself programming with a couple of books I found on BASIC, and had fun learning this new world. I created some graphics programs, a clock, a simple game, and a few other things I don't remember. One of the things I do remember very well was creating a program that had a profound impact on my thinking and the direction of my life: "DR. CHALLENGER": a simulation of a Rogerian non-directive psychotherapist. It was also an example of (very) primitive AI, and what would be called a "chatbot" in today's terminology.

It got me wondering how an AI could be created, what it would act like, and how the human mind actually worked. I eventually studied philosophy, psychology, cognitive science and neuroscience in academia (University of California at San Diego – UCSD), and got involved with computers and software development professionally.

The therapeutic situation is a good one to model because it’s one of the few real human situations in which a human being can reply to a statement with a question that requires very little specific knowledge of the topic under discussion. And you can acceptably ask a question in response to a question without being rude. You can say things like “Why do you ask?” or “Does the question interest you?”, or “Tell me more about X” (and fill in the X).
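To make that concrete, here is a minimal sketch in Python of the slot-filling reply described above. (The original DR. CHALLENGER was written in BASIC; this function and its templates are my own illustration, not the original code.)

```python
# A sketch of the non-directive trick: answer with a content-free
# counter-question, optionally filling the "X" slot with the user's topic.
# Function name and templates are illustrative, not from DR. CHALLENGER.

DEFLECTIONS = ["Why do you ask?", "Does the question interest you?"]

def counter_question(topic: str) -> str:
    """Fill the 'X' slot in a stock Rogerian reply."""
    return f"Tell me more about {topic}."

print(counter_question("your family"))  # → Tell me more about your family.
```

The point is that the reply needs no knowledge of what "your family" actually means; the template does all the work.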

I'd been very inspired by a demonstration I'd seen at school when a student from another class brought a Commodore PET (another early 8-bit "personal computer") to a physics class. One of the programs he ran was an implementation of ELIZA, a very early natural language processing program from 1966 that used pattern-matching. It was developed by Joseph Weizenbaum at MIT.

Since I didn't have access to ELIZA or its source code, I decided to write my own. I had to figure out how to have the computer ask questions, take a response, and generate its own appropriate response based on the pattern of the word or sentence input by the person. The program would parse (I didn't know that term at the time, but that's what I was doing) the sentence, find certain keywords, determine what part of the sentence they were, and, depending on whether each word was in the list of nouns or verbs, and on that particular word's "meaning" and so forth, put together a new sentence based on that, using the pattern of how sentences are constructed in English.

So in my program, I had to anticipate the range of possible responses people might come up with (which is often not that huge a list actually), code that into what to look for, and string it together into a new, appropriate-seeming sentence and spit that out, then get their response from that, and so forth.
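The pipeline described in the last two paragraphs – spot a keyword, capture the rest of the input, reflect the pronouns, and string together a templated reply – can be sketched in a few lines of Python. This is a modern illustration under my own assumptions (the rules, reflections, and fallback line are invented for the example), not a reconstruction of the BASIC original:

```python
import random
import re

# Minimal ELIZA-style responder: each rule pairs a keyword pattern with
# reply templates; "{0}" is filled with the text captured after the
# keyword, after swapping first- and second-person words.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?",
                                          "Tell me more about feeling {0}."]),
    (re.compile(r"\bi am (.+)", re.I),   ["How long have you been {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),     ["Tell me more about your {0}."]),
]

def reflect(fragment: str) -> str:
    """Swap person words so the echoed fragment reads naturally."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(sentence: str) -> str:
    """Return a templated reply for the first matching keyword rule."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I am worried about my exams"))
```

Anticipating the "range of possible responses" amounts to growing the `RULES` table; the fallback line covers everything the table misses, which is a big part of why the illusion held up as well as it did.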

Programming DR. CHALLENGER was a fascinating task that I had fun figuring out, and experimenting with the program – including letting people try it out and seeing what their reactions were. I knew, however, that it was "canned" and was essentially fooling people into believing the computer had some intelligence. That was a big part of the fun: it was like a magic trick! And indeed, just like with a magic trick, some people were taken in and wowed, and others saw right through it fairly quickly.

It's interesting to note that Dr. Terry Winograd, the author of the amazing AI program SHRDLU – a very early (1968–70) program from MIT and an absolutely incredible accomplishment, especially for its time – left the field of Artificial Intelligence because of what he felt were its ethical implications. SHRDLU was a system of software that seemed to understand human speech commands and moved objects around on a screen in response. It was written in LISP, a symbol- and list-processing programming language developed for AI research. What he said was (and I am hoping to find the exact quote somewhere) that humans are sensitive to language the way dogs are sensitive to smells, and he saw a danger in computers manipulating us that way. So he started working on the design of human-friendly computers. Many years later I wrote to him and asked him about this statement of his and his decision to leave the field. He didn't answer directly but instead directed me to an article he wrote which he claimed answered the question (I scanned the article – it's very long and technical – but didn't find the answer and then got busy with other things).