There Is No Such Thing as Artificial Intelligence: Notes On The Myth of AI

“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.” – Stephen Hawking

(Note: I wrote this article on the morning of March 14, 2018, and referenced Stephen Hawking, who I hadn’t written about or thought about in years. In the evening I learned that he had died that day. Another of many serendipitous events in my life recently.)

I spent decades studying, researching, thinking about, and discussing the general philosophical problems of artificial intelligence (AI), as well as working in the computer field. I studied with some of the most brilliant (yet controversial) minds around, such as the neurophilosopher Pat Churchland and the philosopher of science Paul Churchland, renowned philosophers of mind, and took part in panel discussions with cognitive scientists.

There was a point, however, at which I realized that there was a huge flaw, a blindspot, or an ignoring (“ignorance” in the Sanskrit sense) that was common in all their outlooks. They placed analytical thought, or problem solving, as the nature of human thinking, the epitome of intelligence, and therefore all their models reflected the kind of thinking that was used to make them. The powerful intuition I had towards the end of my time in the philosophy department was that *intuition* is the central faculty of us sentient, aware, intelligent beings. This, however, one cannot prove, except anecdotally; it can only be discovered from a first-person point of view. This kind of knowledge is self-evident. Therefore this “revelation,” shall we call it, to sound more colorful, placed me outside the academic temple. It also placed me outside of the scientism that rules academia and society in general: the materialist religion. However, I was unaware of the name and the depth of this belief system, other than as a vague and all-pervasive knowingness and even fear (because of its implications, which I wrote about in 1991 in a dystopian science fiction novel called “The Web”, later renamed “The Zero Point”).

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well.” – Dr. Michael Mayberry, Intel Corporation

True AI = A generalized intelligence machine.
AC = Artificial Consciousness.

My thesis is that you cannot have true AI without AC.

But is AC – artificial consciousness – possible?

AI systems (and therefore robot minds) are brittle. Another way of saying this is that they do not fail gracefully.

Because computer intelligence such as neural nets uses systems of bias (pre-judging, memory-based processing) built on correlation, it is inherently unable to generalize. And if such systems are not rule-based, like expert systems, they are using past knowledge, not direct knowledge.
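The brittleness claim above can be sketched with a toy example (hypothetical data, not from the essay): a nearest-neighbor classifier, perhaps the purest form of memory-based, correlation-driven machine “intelligence”, answers confidently from its stored examples even when a query resembles nothing it has ever seen.

```python
# Illustrative sketch only: a memory-based classifier (nearest neighbor
# over a stored training set) "recognizes" only what resembles its past
# examples. All data here is invented for illustration.

def nearest_neighbor(train, query):
    """Return the label of the stored training point closest to the query."""
    return min(train, key=lambda pt: abs(pt[0] - query))[1]

# Training set: the system has only ever seen values near 0 ("small")
# and values near 10 ("large").
train = [(0, "small"), (1, "small"), (9, "large"), (10, "large")]

# In-distribution queries: the system looks competent.
print(nearest_neighbor(train, 0.5))   # small
print(nearest_neighbor(train, 9.5))   # large

# Out-of-distribution query: it must still answer, and it answers
# confidently from past examples -- it has no way to say "I don't know".
print(nearest_neighbor(train, 1000))  # large
```

The failure is not graceful: the classifier gives the same confident kind of answer for a wildly novel input as for a familiar one, which is one concrete sense in which purely correlation-based systems are brittle.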

My father used to say, “If you’re so smart, why aren’t you rich?” (My retort is “If you’re so rich, why aren’t you happy?” But that’s another essay…)
If AIs are so smart, why aren’t they writing our software? The demand for software engineers, programmers, IT people, etc. is growing at a tremendous rate: evidence not only of technological expansion, but of the fact that computers do not write software; human creativity and intelligence creates software.

Comment I wrote in response to a YouTube video:
“The Hanson Robotics CEO misspoke. He should have said “simulate” being as conscious and creative as a human. He’s playing into the mass ignorance about the nature of AI and computers, which are mechanical, adding-machine-based entities. All the creativity and consciousness comes from their creators. That’s the “art”. The burden of proof is on him. He is either dumb, gullible, or irresponsible. (I recommend “What Computers Still Can’t Do: A Critique of Artificial Reason” by Hubert L. Dreyfus, MIT Press – one of many good books on the philosophy of AI).”

Myths or Erroneous Assumptions About AI and “The Singularity”
1. That if you just put enough lower-level machine intelligence together, then at some point it will reach a critical mass of higher-level intelligence, resulting in self-awareness, independent thought, creativity, and other elusive qualities we experience. This is a common trope in science fiction scenarios of AI, including (one of my favorite classic science fiction movies about computers) “Colossus: The Forbin Project”.
2. This previous assumption is also used to fuel the idea that this critical mass will become a runaway-AI and get smarter and smarter…
3. Which in turn stokes the notion that this runaway intelligence, like a star collapsing under its own gravitational weight until it sucks light into itself and forms an event horizon, drawing in more and more matter in a runaway process, compressing down into an infinitely dense thing called a singularity (the end-game of a black hole), could thus become a superintelligence.

These are all false notions, because they build this house of computerized cards on the erroneous assumption in #1. Simply put, more and more stupidity does not add up to an intelligence that equals real intelligence, much less self-awareness. Even an astronomical set of neural nets that gets very good at a variety of tasks that brains do is still just garbage in, garbage out in relation to true intelligence. This is because of the ignorant assumption that real intelligence and consciousness arise from brains (are an emergent property, like wetness is an emergent property of water molecules), whereas in fact there is no evidence for such an assumption that is not circular. This property of an idea – where no matter what the evidence, it is held onto and defended as if for dear life – is itself evidence of the interwoven set of dogmatic, religious ideas that underpin much of society and science when one reaches the limits of its understanding. (These materialist assumptions about intelligence are such a deep and ingrown belief in modern culture that it takes quite a long essay to untangle them – I hope to write more on that soon.)

The problem is, almost no one (in the Western world especially) understands what real or natural intelligence is. That is, they don’t have an adequate definition, because the cultural orientation, the social programming in life, does not allow one to see it without immediately dismissing or devaluing it. For example, one suddenly understands a problem whose solution had been eluding one: an Aha! moment of illumination. The whole thing is seen in a flash. Such timeless moments don’t compute to the mind: it cannot hold or grasp the open, limitless freedom from which they seem to appear “from nowhere”.

On the other hand, we don’t want to leap farther than our understanding justifies and say we *know* for sure what true intelligence or consciousness is (other than that we experience it now, or in a flash of intuition or self-evident truth, such as in mathematical discovery or understanding). The best and most honest path is to admit what we don’t know; from that solid foundation, we can step forward and hopefully make progress.

Underpinning the thesis – that to create (or foster the creation of) a genuine AI, one must have artificial consciousness (AC) – is the fact that the two ultimately cannot be separated. To have the properties of general intelligence, the property of consciousness must be present. The relationship is one of a triad or trinity: the apparent duality of matter and mind is resolved when the trans-real substratum of consciousness is taken into account. This is the “dual aspect theory” of Spinoza’s philosophy applied to a real-world engineering problem. However, consciousness is not an emergent property of brains, but a general or unified field, of which a brain could be thought of as akin to a receiver.

The analogy of brain-as-receiver works like this: if you are, for example, watching a television program about a politician, and you don’t like the politician, would you go buy a new television in order to have a better politician – in other words, to change it to a show you would like to view? Likewise, the fact that thoughts seem to correlate with the functioning activities of a brain (and the lack of thoughts, or of coherent thoughts, in a damaged, drugged, or sleeping brain) does not prove that the TV program arose from that brain.

The analogy breaks down, however, in that radio waves and television programming are still on the same level of reality: objects. That is, they are physical phenomena and are measurable and detectable. Consciousness is not on the same level of perceptibility: it is not a phenomenon. Consciousness is the witness of phenomena. This is basic.

Unfortunately, while sentience (being awake or asleep) is detectable, as are attentional mechanisms and the types of brain waves associated with *states*, consciousness is not. In other words, that which experiences states is not detectable. We do not have instruments for detecting the presence of consciousness, and we probably never will. From where we stand today, in our scientific outlook, consciousness is not amenable to the scientific method, except with regard to second-hand evidence (such as verbal reports of other humans), or as subjective experiences and reports that can only be inter-subjectively evaluated. Likewise, you cannot open up a person’s head and find consciousness in the brain somewhere (despite stupid claims of finding centers of consciousness in popular articles, which always turn out to be based on mental activities or attentional mechanisms, not consciousness). Why? Because consciousness is not a phenomenon. It is not part of the universe of experience out there. In other words, the universe is experienced *in* consciousness – as are the body, thoughts, sensations, and feelings – and consciousness is inferred to exist in other beings (such as humans or animals, or possibly aliens) as a quality of that being, because that’s what we assume for ourselves.

Another way of describing consciousness is: what it is like to *be* – for example, that which is reading these words right now: that reality. It is always in the absolute present, and ultimately intimate. It is the context or ground for an experience-having reality, or even for feeling or seeming to have a sense of reality. For example, even a dream is a *real* dream. The content of the dream is illusory, but the fact of experiencing it is real for the experiencer. That is the reality we are pointing to here: the reality of the experiencer. It is full-stop subjective. It has no objective properties. It is radical.

So how does one encode that in a computer? How can a piece of software embody that? They can’t: both are content, not context; both are part of the material world, both objects. They are not subjects. They are not experiencing.
But could they be? Could there be an aspect of experience that, like other beings (humans and animals, etc.), seems to have consciousness, yet is an object we created, or got underway through an engineering project?

The key concept here is “seeming”: how does one discern the difference between a simulation and the real thing?

To be continued…


References:

“Intel’s New Self-Learning Chip Promises to Accelerate Artificial Intelligence”, by Dr. Michael Mayberry

Stephen Hawking: “Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?”

“What Computers Still Can’t Do: A Critique of Artificial Reason”, by Hubert L. Dreyfus, MIT Press