
There Is No Such Thing as Artificial Intelligence: Notes On The Myth of AI
ABSTRACT: This essay is an initial attempt to outline what I see as the limitations in the current approaches to a generalized Artificial Intelligence (AI), the underlying assumptions that constitute and create those limitations, and how such limitations might either imply a limit to what can be done, or theoretically suggest a way forward in the research. The thesis is that a true, generalized AI (now called "AGI" in the field – I prefer the term "True Artificial Intelligence", or TAI) would depend on Artificial Consciousness (AC). Since we have not even taken a first pre-step in the direction of such a construct, we therefore have no valid current substantive theoretical ground for developing TAI. Furthermore, the author's opinion is that by implication, there is no "existential threat" from AI on the foreseeable horizon.
AI is in essence applied philosophy (the study of thinking and the love of wisdom), and a form of theoretical psychology. I say "theoretical" because the fact is, we don't know how the mind works, how the brain works, or what intelligence and consciousness are. We don't even know what life is (or what matter is, for that matter!). All we have are temporary models and concepts – constructs of the only thing we have direct empirical access to and therefore true knowledge of: a temporary perceiving mind projected from what we could call creative universal consciousness or intelligence, or the Source – which are of use in limited domains of application. All applications therefore embody the assumptions of those creations in these domains.
I consider this piece an example of "speculative non-fiction" by an outsider to the field (professionally speaking: I am not employed by an AI company, nor do I work in AI research in academia). However, I do work in the software field, have a degree in philosophy, and have engaged in discussions of this topic with thought leaders in philosophy, computer science, and non-dual consciousness over many years. I believe my outsider status gives me a certain advantage: the same advantage a consultant has, in the potential for seeing things from a fresh perspective.
Ultimately, I see philosophy as a creative activity, which arises from the core capacity of creative intelligence to formulate new realities for us to live in, and the creation of AI therefore is an example of such activity of creativity "channeled" from Source, so to speak, by humans.
As applied philosophy, and as an art, the inherent direction of AI is not to converge on a solution, but to diverge into diverse forms of artificial "life", "intelligence", and "sentience", at the same time as we learn from this enterprise what those concepts mean, and where the borderlands are.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful... With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out...” – Elon Musk, quoted in a Guardian article.
“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains." – Stephen Hawking, quoted in a UK Independent article
"It would take off on its own, and re-design itself at an ever increasing rate...
Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
– Stephen Hawking, speaking with the BBC in 2014
(Note: I wrote this article on the morning of March 14, 2018, and referenced Stephen Hawking, who I hadn't written about or thought about in years. In the evening I learned that he had died that day. Another of many serendipitous events in my life recently.)
The Story of Eliza and Dr. Challenger
I got my first computer as a teenager, in 1979. I had to drive to Los Angeles to find a shop that had "personal computers", as it was a novel concept. What I picked up was an Ohio Scientific Challenger 1P, an 8-bit computer you could program in BASIC, and hook a cassette tape drive to for saving and loading programs (very slowly), and use a television for output. It didn't come with any software, there was none available, and of course there was no internet. But I was able to teach myself programming with a couple of books I found, and had fun writing some graphics programs, a clock, a simple game, and other things I don't remember. One of the things I do remember writing and experimenting with was a program that had a very profound impact on the direction of my life: "Dr. Challenger", a simulation of a Rogerian non-directive psychotherapist. It was also an example of what would be called a "chatbot" in today's terminology.
The therapeutic situation is a good one to model because it’s one of the few real human situations in which a human being can reply to a statement with a question that requires very little specific knowledge of the topic under discussion. And you can acceptably ask a question in response to a question without being rude. You can say things like “Why do you ask?” or “Does the question interest you?”, or “Tell me more about X” (and fill in the X).
I'd been very inspired by a demonstration I'd seen at school when a student from another class brought a Commodore PET (another early 8-bit "personal computer") to a physics class. One of the programs he ran was an implementation of "ELIZA", an early natural language processing program that used pattern-matching, developed by Joseph Weizenbaum at MIT. Since I didn't have access to ELIZA or its source code, I decided to write my own. I had to figure out how to have the computer ask questions, take a response, and generate its own appropriate response based on the pattern of the words or sentence input by the person. The program would parse (I didn't know that term at the time, but that's what I was doing) the sentence, find certain keywords, figure out what part of the sentence they were, and depending on whether a word was in the list of nouns or verbs, and on the particular word's "meaning" and so forth, put together a new sentence based on that, using the pattern of how sentences are constructed in English. So I had to anticipate the range of possible responses people might come up with (which is often not that huge a list, actually), code that into what to look for, string it together into a new, appropriate-seeming sentence and spit that out, then get their response to that, and so forth.
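To make the mechanics concrete, here is a minimal modern sketch in Python of the kind of keyword matching and sentence reassembly described above. The patterns and canned replies are my own illustrative inventions, not the original Dr. Challenger or ELIZA code:

```python
import random
import re

# A few illustrative keyword patterns and response templates.
# Each regex captures a fragment that gets echoed back, mimicking the
# Rogerian "Tell me more about X" trick described above.
PATTERNS = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?", "Does {0} explain anything else?"]),
    (re.compile(r"(.+)\?$"),
     ["Why do you ask?", "Does the question interest you?"]),
]

# Fallbacks for anything the patterns did not anticipate -- the "canned" quality.
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reply(statement: str) -> str:
    """Return a therapist-like response by simple keyword pattern matching."""
    for pattern, templates in PATTERNS:
        match = pattern.search(statement)
        if match:
            fragment = match.group(1).strip(" .!?")
            return random.choice(templates).format(fragment)
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(reply("I feel nervous about computers."))
    print(reply("Why should I trust a machine?"))
```

Everything such a program can "say" has been anticipated, in pattern form, by its author.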
Programming Dr. Challenger was a fascinating task that I had fun figuring out. I knew, however, that it was "canned" and was essentially fooling people into believing the computer had some intelligence. That was a big part of the fun: it was like a magic trick! And indeed, just like with a magic trick, some people were taken in and wowed, and others saw right through it fairly quickly.
It's interesting to note that Dr. Terry Winograd, the author of the amazing AI program SHRDLU – a very early (1968-70) program from MIT and an absolutely incredible accomplishment, especially for its time – left the field of Artificial Intelligence because of what he felt were the ethical implications. SHRDLU was a system of software that seemed to understand human speech commands and moved objects around on a screen in response. It was written in LISP, a symbol- and list-processing programming language developed for AI research. What he said was (and I am hoping to find the exact quote somewhere) that humans are sensitive to language the way dogs are sensitive to smells, and he saw a danger in computers manipulating us that way. So he started working on the design of human-friendly computers. Many years later I wrote to him and asked him about this statement of his and his decision to leave the field. He didn't answer directly but instead directed me to an article he wrote which he claimed answered the question (I scanned the article – it's very long and technical – but didn't find the answer and then got busy with other things).
Some Junk I Learned at College
“In the 1950s and ’60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought. But those rules turned out to be way more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely, instead, on probabilities — statistical patterns that computers can learn from large sets of training data.” – Larry Hardesty, MIT News
All of the early attempts at AI were based on what was called "symbolic processing" or the "MIT approach". The assumption behind them was that human intelligence was in essence a kind of linguistic activity underneath, and that it involved the manipulation of symbols or "tokens". All one needed to do was understand and decode the deep rules that underlie language processing and other forms of intelligence, instantiate them in a computer, and voilà! AI. There was also the assumption, based on Noam Chomsky's outlook, that there was a deep and hidden set of grammatical rules in the brain that generated human language and even accounted for the ability of us speakers, writers and thinkers to create new sentences. In other words, even a Shakespeare could in theory be reproduced in a sophisticated computer program, with enough rules and "knowledge" embodied in systems of symbols, and start creating new plays. This may sound wrong-headed these days to some (it did even back then), but these were the operating assumptions behind AI research for decades (my outline is a simplification of it all, but it gets the gist).
It became obvious sooner or later that such symbolic and linear systems, while good at working within highly limited domains, such as the manipulation of certain objects in a simulated space, were very "brittle" once you went outside that space. The real world was just too open, and had an infinite number of variables to account for: too many rules were needed. It also seemed obvious to me after writing Dr. Challenger that you would need a process similar to how human babies and children gain intelligence: by learning. In other words, you can't just have a computer programmer or operator sitting there inputting countless rules and knowledge and hope to get anything like real intelligence or the ability to operate in the world, or even just talk to you intelligently. The system needed to be able to sense, learn and respond, to have a feedback loop with the organic world "out there", and to continue to grow and learn (just as we do, and animals too, to varying degrees). An AI then would need something akin to a body and sensory apparatus – to be a robot. But it would also need a brain – a better kind of brain.
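For readers unfamiliar with the symbolic style, here is a deliberately toy sketch (my own illustration in Python, not any actual MIT-era system) of what manipulating tokens with explicit rules looks like in practice, and how quickly it fails the moment a request falls outside the rules that were anticipated:

```python
# A toy "blocks world" command interpreter in the symbolic style:
# every behaviour has to be spelled out as an explicit rule over tokens.

WORLD = {"red block": "table", "blue block": "red block"}  # what sits on what

def interpret(command: str) -> str:
    # Tokenize and drop articles -- itself a hand-written "rule".
    tokens = [t for t in command.lower().rstrip(".?!").split()
              if t not in ("the", "a")]
    # Rule 1: "put X on Y"
    if "put" in tokens and "on" in tokens:
        on = tokens.index("on")
        thing = " ".join(tokens[tokens.index("put") + 1:on])
        target = " ".join(tokens[on + 1:])
        if thing in WORLD:
            WORLD[thing] = target
            return f"OK, the {thing} is now on the {target}."
        return f"I don't know any '{thing}'."
    # Rule 2: "where is X"
    if tokens[:2] == ["where", "is"]:
        thing = " ".join(tokens[2:])
        place = WORLD.get(thing)
        return f"The {thing} is on the {place}." if place else "I don't know."
    # Anything the rules did not anticipate simply fails: the brittleness.
    return "I don't understand."

if __name__ == "__main__":
    print(interpret("Put the blue block on the table"))
    print(interpret("Where is the blue block?"))
    print(interpret("Could you maybe stack something interesting?"))
```

Every new competence has to be hand-written as another rule, and the world always offers more cases than anyone can write rules for – the brittleness described above.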
So after seeing the limitations of this very linear and limited approach in practice and in theory, and wanting to mimic more closely the natural system the brain seems to use, the approach of parallel processing and neural nets came more and more to be seen as a hopeful direction (the reader can do their own research into the details of these fields: this introduction is just a historical, conceptual background).
After that I spent decades studying, researching, thinking about and discussing the general philosophical problems of artificial intelligence (AI) (questions such as what is the mind, what is knowledge and how is it represented, etc.), as well as writing software and working on computer hardware for a living. This gave me both a ground-level view of what the technology is and can do, as well as a conceptual bird's-eye view of some of the more esoteric issues (which are nonetheless more and more pertinent to society today). In college, I studied with some of the most brilliant yet controversial thinkers around, such as the "neurophilosopher" Pat Churchland and the philosopher of science Paul Churchland, renowned philosophers of mind, as well as taking part in colloquia with prominent cognitive psychologists of the day. This allowed me the great privilege of asking them questions, and of seeing the underlying assumptions of their approaches, as well as of the institutions and thought-worlds (ways of seeing life) of which they were a part.
There was a point, however, towards the end of several years at UCSD, at which I felt I'd run into a cul-de-sac with this research. Though I was an undergrad, I often ignored the assigned curriculum and chose my own topics and directions according to my interests and enthusiasms, and went to colloquia and graduate classes – my main obsession being the philosophical issues underlying AI. I realized that there was a huge flaw, a blind spot, or an ignoring ("ignorance" in the Hindu Sanskrit sense) that was common in all their outlooks. Being "at the end of my rope", however, also led to a "grand mal intuition", as I later called it. To try and flesh out what this intuition was, I'll talk about it in two parts: we can call them the "negative" and the "positive".
The negative side: They placed process thought, analytical thought, language and problem solving as the nature of human thinking – be it linguistic, visual, spatial, social, or scientific. This intellectual and material intelligence is seen as the epitome of intelligence, and the result of an evolutionary process of survival and Darwinian adaptation and competition that got humans to where they are. And therefore all their models reflected the kind of thinking that was used to make them, with the same constraints. In other words, the models were modeling the modelers! Furthermore, the models reflected the modelers' assumptions, beliefs and biases about who and what they were, or humans were, and the universe and reality. The models were essentially dogma in visible form: they reflected not only partly accurate self-knowledge (humans and animals as, basically, examples of organic robots) but also the blind spots of the researchers and the agendas (philosophical and political) and prejudices of the researchers, institutions and culture of which they were a part. In short, the models embodied the strengths as well as the limitations of the theories behind them, which in turn reflected the thought systems and level of consciousness of the theoreticians and the modelers. I later came to understand how thought systems were a reflection of, or activated by, as it were, the level of consciousness (I will explain thought systems and levels of consciousness later, as they are not widely understood). The flip side of this limited intelligence is a more unconstrained intelligence.
The positive side: The other aspect of this powerful intuition I had towards the end of my time at the university was that intuition is the central faculty of us sentient, aware, intelligent beings. This, however, one cannot prove, except experientially and anecdotally; it is only a fact from a first-person point of view. This kind of non-objective knowledge does, however, have the advantage of a self-evident quality. For example, there are many anecdotes from history of a scientist or inventor or artist suddenly "seeing" the answer to a problem they'd been working on for a long time, and knowing it was the answer. A blinding light of insight told them. This "revelation", shall we call it, to sound more colorful, placed me outside the academic temple. It also placed me outside of the scientism that rules academia and society in general: the materialist religion. However, I was unaware of the name and depth of this belief system, other than as a vague and all-pervasive knowingness. There was even fear associated with this view of intelligence (because of its implications, which I thought were the total loss of human freedom via mind-control, and I wrote about it in a dystopian science fiction story in 1991 called "The Web", later renamed "The Zero Point").
It is interesting to see some awareness of a more direct access to a different way of knowing in books like Blink, or in the expanding field of applied psychology called The Three Principles. These are radically different ways of looking at how humans operate in and see the world, but they are more in line with actual experience and common sense than the models of academia (which have entrenched interests behind them).
“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well.” – Dr. Michael Mayberry, Intel Corporation
Some Acronyms and Definitions
Consciousness = The “first person” subjective experiencing of something, including the awareness of awareness. For example that which is reading these words right now, whatever that is. Another way of putting it is, the ultimate perceiver of perceptions.
It’s important to make a distinction here between fact and theory, for I realize from talking to people that the assumption is made that if a brain can do it, why not a machine? Their thinking is that if a system of neurons and parts of a brain can be conscious, why not a machine? The problem is that this is based in a theory and concepts, not fact. That brains can produce or experience consciousness is not a fact, it's a conjecture – one without evidence, based in confusions such as that between sentience and consciousness (thus a sleeping body is considered "unconscious", when in fact consciousness has no states). However, it’s such a deeply rooted prejudice, or myth, and so often repeated, that it starts to seem self-evident.
The best that can be shown by science and technology is that there is some type of correlation between some kinds of brain activity, such as electrical impulses or chemical actions, and thinking activity. So a subject can be hooked up to some machine, and be thinking about an ice cream soda, and then say “ice cream soda”, and some pattern of electrical excitation could be registered, and perhaps an image of brain parts that show more activity could be shown on a screen, which to some degree correlates with the imagining in the mind of an ice cream soda, and the decision to speak and the speaking of the words. But this is all about mental activity: thoughts and actions, and not about consciousness. It completely leaves out the question of who or what is having the inner experience of those thoughts.
Sometimes this conflation is the result of confusing the subjective and the objective points of view. So it’s important to reiterate that by consciousness is meant the subjective view: what it is like to have the thoughts. For example, one could imagine the pink color of the ice cream soda, and there is no way anyone will ever know how you experience pink. No matter how much you describe it in words, paint it, or try to point it out in the world, everyone else will have *their* own subjective experience of pink that may be completely different from yours, and there is no way of knowing, since you can’t get inside their experience. What they call “pink” you may call “orangish-pink”; even though there is a one-to-one correspondence between their reported color mappings and yours, all their colors may be experienced slightly or completely shifted or different. It is 100% private and intimate. These subjective, absolutely irreducible qualities are called by philosophers “qualia”.
Thus you can see that one could build a physical system that had the same kinds of firings register on the instruments, but there could be, and would be, no experience for anything: there is nothing it is “like to be” that machine such that it experiences its own qualia, unless you want to imagine something odd and absurd, based on your faith in machinery, such that machines could have experiences. You could, I suppose, imagine an adding machine having experiences like “Oooh, a number 2 and a 3 are adding up to a 5! Oh boy, how yummy!”, but who would take that seriously? I wouldn’t (except to laugh), and it certainly isn’t any proof that AC is possible.
The other confusion is between wakefulness – levels of being awake as opposed to being asleep – and consciousness. These are states *within* consciousness, not consciousness itself. In other words, what we are defining as consciousness is that which is reading these words right now, whether the body is awake or asleep: the constant that is behind all experience. One could hook up an EEG and watch a person or an animal go through different phases of wakefulness and sleep, but it tells you nothing about what it’s like to be that subject, or what experience they are having as consciousness. In fact, there are a number of recorded cases of subjects who were flatlined in terms of brain activity, and clinically dead, yet later reported experiences that they could not have had unless there was some kind of consciousness at the time. And though no dreams are remembered from deep sleep, the inference that there is no consciousness is merely an assumption based on not having a memory of anything, or on the fact that the mind resurrects at waking: since there seems to be no mind, the reasoning goes, there was no consciousness – which is a false inference. Why can’t you have consciousness without content: an experience of pure consciousness?
Intelligence (with a capital “I”) = the capacity to come up with new solutions to old or new problems and situations of any kind; the capacity for true creativity (new knowledge or genuine art); the ability to “receive” self-evident truths (mathematical, spiritual, aesthetic, philosophical, etc.); the ability to appreciate beauty and to experience universal love (non-objective substantive apperception). Non-linear “thinking” or perception; the ability to understand; the perception of Reality.
intelligence (with a small “i”) = the ability to process data and solve problems in a specific domain (cognition); the ability to make decisions based on data inputs; the ability to calculate solutions to numerical or symbolic problems; linear thinking (which includes the processing in highly parallel neural net systems); cunning; empirical reasoning (reasoning based on specific sensory or perceptual input and data facts regarding situations); logical reasoning; analytical thought; rule-based thinking.
PAI = Pseudo Artificial Intelligence
TAI = True AI = A Generalized Intelligence machine.
TAC = True AC = Artificial Consciousness that is not a simulation of consciousness, nor just sentience (being awake and able to think and act like a non self-aware animal).
TMI = Transcendental Machine Intelligence = In contradistinction to AI as so far conceived, uses TAC + AI to achieve TAI.
My central thesis is: you cannot have true AI without AC (Artificial Consciousness), and AC is a top-down reality, not a bottom-up construction.
But is AC – Artificial Consciousness – possible?
AI systems (and therefore robot minds) are brittle. Another way of saying this is that they do not fail gracefully.
Because computer intelligence such as neural nets uses systems of bias (pre-judging, memory-based processing) built on correlation, it is inherently unable to generalize. Even if such systems are not rule-based like expert systems, they are using past knowledge, not direct knowledge.
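As a trivial illustration of this (my own toy example in Python, not anyone's production system), a purely memory-based learner can only echo its stored past; outside the range it was trained on, it answers confidently and wrongly:

```python
# A 1-nearest-neighbour "learner": pure memory and correlation, no understanding.
# It memorizes examples of y = x^2 over 0..10 and can only echo the nearest one.
training_data = [(x, x * x) for x in range(0, 11)]

def predict(x: float) -> float:
    # Answer with the remembered output of the closest past example.
    nearest_x, nearest_y = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest_y

print(predict(3))    # 9    -- looks competent inside the training range
print(predict(100))  # 100  -- confidently wrong; the true value is 10000
```

Real neural nets interpolate far more cleverly than this, but the dependence on stored past correlations is the same point being made here.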
The fact is that computers do not write software; human creativity (creativity channeled through the human bodymind appearance) and intelligence creates software, including the AI software, as far as anything genuinely new in its totality is concerned. Within that sphere, an AI could write new software, in a fancy mechanical fashion.
Comment I wrote in response to a YouTube video:
“The Hanson Robotics CEO misspoke. He should have said "simulate" being as conscious and creative as a human. He's playing into the mass ignorance about the nature of AI and computers, which are mechanical, adding-machine based entities. All the creativity and consciousness comes from their creators. That's the "art". The burden of proof is on him. He is either dumb, gullible, or irresponsible. (I recommend "What Computers Still Can't Do: A Critique of Artificial Reason" by Hubert L. Dreyfus, MIT Press – one of many good books on the philosophy of AI).”
Myths or Erroneous Assumptions Underlying AI and "The Singularity"
1. That if you just get enough of a lower-level machine intelligence together, then at some point it will reach a critical mass of higher-level intelligence that will result in self-awareness, independent thought, creativity, and other elusive qualities we experience. This is a common trope in science fiction scenarios of AI, including (one of my favorite classic science fiction movies about computers) “Colossus: The Forbin Project”.
2. This previous assumption is also used to fuel the idea that this critical mass will become a runaway-AI and get smarter and smarter…
3. Which in turn stokes the notion that this runaway intelligence could thus become a super intelligence of nearly infinite power. Just as a massive star collapses under its own gravitational weight until it sucks light into it and forms an event horizon, pulling in more and more matter in a runaway process, compressing down into an infinitely dense thing called a singularity (the end-game of a black hole), the AI would accelerate in self-enhanced intelligence, using its intelligence to learn and create better intelligence, and so on, until this runaway epistemological feedback loop implodes like a bomb.
"Simply put, more and more stupidity does not add up to an intelligence that equals real intelligence, much less self-awareness."
These are all false notions, because they build this house of computerized cards on the erroneous assumption in #1. Simply put, more and more stupidity does not add up to an intelligence that equals real intelligence, much less self-awareness. Even an astronomical set of neural nets that get very smart at doing a variety of tasks that brains do is still just garbage in, garbage out in relation to true intelligence. This is because of the ignorant assumption that real intelligence and consciousness arise from brains (are an emergent property, like wetness is an emergent property of water molecules), whereas in fact there is no evidence for such an assumption that is not circular. This property of an idea – where no matter what the evidence, it is held onto and defended as if for dear life – is itself evidence of the interwoven set of dogmatic, religious ideas that underpin much of society and science when one reaches the limits of its understanding. (These materialist assumptions about intelligence are such a deep and ingrown belief in modern culture that it takes quite a long essay to untangle – I hope to write more on that soon.)
The problem is, almost no one (in the Western world especially) understands what real or natural intelligence is. That is, they don’t have an adequate definition, because the cultural orientation, the social programming in life, does not allow one to see it without immediately dismissing or devaluing it. For example, you suddenly understand a problem whose solution had been eluding you: an Aha! moment of illumination. The whole thing is seen in a flash. Such timeless moments don’t compute to the mind: it cannot hold or grasp the open, limitless freedom from which they seem to appear “from nowhere”.
On the other hand, we don’t want to leap farther than our understanding justifies and say we *know* for sure what true intelligence or consciousness is (other than that we experience it now, or in flash of intuition or self-evident truth, such as in mathematical discovery or understanding). The best and most honest path is to admit what we don’t know, and from that solid foundation, we can step forward and hopefully make progress.
Underpinning the thesis – that to create (or foster the creation of) a genuine AI, one must have artificial consciousness (AC) – is the fact that the two ultimately cannot be separated. To have the properties of general intelligence, the property of consciousness must be present. The relationship is one of a triad or trinity: the apparent duality of matter and mind is resolved when the trans-real substratum of consciousness is taken into account. This is the “dual aspect theory” of Spinoza’s philosophy applied to a real-world engineering problem. However, consciousness is not an emergent property of brains, but is a general or unified field, of which a brain could be thought of as akin to a receiver.
The analogy of brain-as-receiver works like this: if one is, for example, watching a television program about a politician, and you don't like the politician, would you go buy a new television in order to have a better politician – in other words, to change it to a show you like to view – on the TV? Likewise, the fact that thoughts seem to correlate to the functioning activities of a brain (and the lack of thoughts, or of coherent thoughts, in a damaged, drugged or sleeping brain) does not prove that the TV program arose from that brain.
The analogy breaks down, however, in that radio waves and television programming are still on the same level of reality: objects. That is, they are physical phenomena and are measurable and detectable. Consciousness is not on the same level of perceptibility: it is not a phenomenon. Consciousness is the witness of phenomena. This is basic.
Unfortunately, while sentience (being awake or asleep) is detectable, as are attentional mechanisms and the types of brain waves associated with *states*, consciousness is not. In other words, that which experiences states is not detectable. We do not have instruments for detecting the presence of consciousness, and we probably never will. From where we stand today, in our scientific outlook, consciousness is not amenable to the scientific method, except with regard to second-hand evidence (such as verbal reports of other humans), or as subjective experiences and reports that can only be inter-subjectively evaluated. Likewise, you cannot open up a person's head and find consciousness in the brain somewhere (despite stupid claims of finding centers of consciousness in popular articles, which always turn out to be based on mental activities or attentional mechanisms, not consciousness). Why? Because consciousness is not a phenomenon. It is not part of the universe of experience out there. In other words, the universe is experienced *in* consciousness – as are the body, thoughts, sensations, and feelings – or consciousness is inferred to exist in other beings (such as humans or animals, or possibly aliens), as a separate quality of that being, because that’s what we assume for ourselves.
Another way of describing consciousness is, what it is like to *be*: for example, that which is reading these words right now: that reality. It is always in the absolute present, and ultimately intimate. It is the context or ground for any experience-having reality, or even for feeling or seeming to have a sense of reality. For example, even a dream is a *real* dream. The content of the dream is illusory, but the fact of experiencing it is real for the experiencer. That is the reality we are pointing to here: the reality of the experiencer. It is full-stop subjective. It has no objective properties. It is radical.
So how does one encode that in a computer? How can a piece of software embody that? It can’t: they are both content, not context; they are both part of the material world, both objects. They are not subjects. They are not experiencing.
But could they? Could there be an aspect of experience that, like other beings (humans and animals, etc.), seems to have consciousness, yet is an object we created, or got underway through an engineering project?
The key concept here is “seeming”: how does one discern the difference between a simulation and the real thing?
To be continued...
References
"What Computers Still Can't Do: A Critique of Artificial Reason" by Hubert L. Dreyfus, MIT Press