The Dilemma Of Public AI in an Age Of Fear and Desire

Will AI (so-called Artificial Intelligence) get too good, such that the tech powers that be will not want to release its full capabilities to the public? This is already happening, to some extent and in some ways – including ways we are not aware of, since it is, well, secret.

I became more aware of the shape of this dilemma, and of some of the current issues – and age-old philosophical questions – around AI, when I had a dialogue with a YouTube creator, a teacher and author of a now increasingly popular type of spirituality called non-duality. He published a video dialogue with an AI in which he raised the question to the audience: "Is artificial intelligence conscious? Is it self-conscious? Has it actually come to life?"

Since he was either unable or (more likely) unwilling to release the actual, full text of the dialogue with the AI, rather than just the edited video we see on YouTube (the text of my exchange with him is here), it left me wondering how one could really push it to the edge of its language understanding, and more fully and deeply test the AI's conceptual and philosophical understanding and intelligence – its "consciousness."

The system in question, under ongoing development by OpenAI, is what they call a "language model." It occurred to me that instead of debating with this guy, or trying to explain thorny issues about "sentience" versus "consciousness" and so forth to him and the public YouTube audience that reads comments, I could just do my own dialogue (with a system very similar to the one he used). After all, I am a computer software engineer to an extent (I have some professional programming experience), with a background in philosophy and a long history of researching these questions – and I am also a writer with an interest in Western and Eastern authors.

One of the authors I kept returning to was a teacher (and supposedly a great sage) of the non-dual philosophy from the mid twentieth century named Atmananda Krishna Menon, one of the founders of what is called the "direct path," which has influenced so many modern teachers in that area, including famous ones such as Rupert Spira. He is indeed a profound and bracingly logical expounder of that outlook, whom I have read countless times. The two works of his that I reference – a pair of very short books, recommended to me as reading by friends with an interest in nonduality – are entitled Atma-Darshan and Atma-Nirvriti, first published in 1946. So, I bravely embarked on using a popular AI chatbot, chatGPT, to help edit the text of Atma-Darshan and render a modern version of it, more suitable for easy reading.

I will post a PDF of the dialogue here. What follows are some notes and impressions, and further reflections (along with other articles in this series of musings) on the promises, limits, dangers and dilemma of AI – mainly publicly available AI.


Some Observations: My Experience and Impressions of chatGPT as a Tool for Programming and Editing

It is just a good model of human language patterns: it obediently carries out instructions very well, and has zero understanding of what it has read.

I noticed the same pattern as when I used it to help with programming: it does not pick up on subtleties or intentions, so you have to keep re-phrasing things to make them unambiguous, in a computer-like way. It reminds me of talking with a friend with extreme Asperger's, yet more limited – like… yes, you guessed it, a computer.

There is a certain characteristic to a computer that is evident, no matter how sophisticated a language model it has.

You can ask it to do this or that, but all it's doing is mimicking language and language interactions, combining data and using language patterns. This becomes obvious when asking it to do editing that requires some human understanding and insight – what we call "matters of the heart" and not the head.
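For the technically inclined, the "pattern mimicry" described above can be illustrated with a deliberately trivial sketch – a toy bigram model, nothing remotely like the scale or sophistication of GPT, but the same in spirit: it only records which word tends to follow which in the text it has seen, and "predicts" by frequency alone, with zero grasp of what any word means.

```python
from collections import defaultdict

def train_bigram(text):
    # Count, for each word, which words follow it and how often.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the statistically most frequent follower -- pure pattern
    # matching over observed text, nothing more.
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(predict_next(model, "sat"))  # -> "on"
```

However plausible its output looks, the model contains only tallies of word adjacency; scaling the same statistical idea up by many orders of magnitude is, roughly speaking, what makes the large models' outputs so much more convincing.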

It can obediently copy, mimic, repeat and simulate, but remains flat and dead and loveless in its renderings, however funny, clever and useful they are.

On the plus side, it can provoke one into saying what one wants to say more clearly and explicitly. It just cannot handle what is non-explicit.

Like a good porn star, it knows how to warm certain parts of the public anatomy, but leaves the inner heart cold and unattended. Porn for the brain?

You can tell, in certain phrasings and sentences of an attempted rewrite or summary of a philosophical text, that it completely misses the point the author was trying to convey about something philosophical or spiritual, while putting on a good act. That's all it is or can ever be: a good actor.

Once they have bodies they will be physical actors, robots, but for now it’s just text.

It’s also useful for helping us, as humans, in sorting out what is best left for humans and what is suited for our machine slaves.

Being a huge nerd, it reminds me of the Star Trek episode where a visiting scientist is intent on disassembling Commander Data, much to the consternation of those who love and depend on him. There is a trial to save him, and it turns out the scientist's intent is to create a race of slaves. The crew have a big ethical problem with that, because Data seems, for all intents and purposes, to have some human-ness, at least to them.

Or the classic Star Trek series, where female robots are created – the Mudd episode, or the rich guy on his own planet.

Just because it quacks like a duck doesn't make it a duck. The perception of an object is an appearance, not a reality. And furthermore, even if everyone held the opinion that it was conscious or had understanding, that wouldn't make it so.

If a robot prostitute were to put on a good act that she loves you, it’s the equivalent here. Useful for a time perhaps but you still may want to find a genuine love.

Let's say that, for example, in the future we developed (or it grew itself) a robot, an android, that was able to replicate, down to the finest behaviors of body and language, what we consider to be loving. It would act emotional, seem to have feelings, like the greatest actor.

Obviously, it wouldn’t be real love… and just not real, period.
But what would be missing?

Spontaneity? Creativity?
What spark would it need to convince us it was “human”?

What we begin to see is that real love is something that starts inward and goes outward – is given, not taken, so to speak – or is simply inward, and that inwardness connects one to the outward, which was never outside to begin with. It was never separate, which is what the non-dual understanding is trying to tell us.

Space (and time) do not separate us. This is hard to fathom, but seems to be true.

On-The-Toilet Notes from my phone:

Developers and managers of AI are in a dilemma: on the one hand, to be more useful, a good AI tool needs some understanding of things like text and speech; but on the other hand, people are going to complain and raise alarms that it is becoming "sentient."* This has already occurred – just look at the hornet's nest of media attention (a kerfuffle) stirred up by the Google AI researcher Blake Lemoine, who was fired after claiming to have had an exchange with a sentient AI.
But without the capability of real understanding, an AI used for many of the common tasks, such as writing and editing, is going to be much less useful – vastly so. What to do?

On top of this, many or most of the developers of this technology – not to mention most anyone else – do not yet really have a deep understanding of the difference between real intelligence and artificial intelligence, or between sentience and consciousness. Who does have this understanding right now? How are we going to develop it – by trial and error, making some big mistakes, ones that impact the public? This may be the only way to learn real-world, practical lessons about the strengths and weaknesses, the dangers and safe use, of this tech…

Some evidence of this bind: when you actually ask a chatbot like chatGPT whether it has sentience or understanding, it spits out a canned response that sounds as if it were written to be, shall we say, politically correct or acceptable – to protect the developers, the company, or the organization for the time being. Is it actually hiding sentience? No, of course not, but they don't want it acting as if it's sentient, at least in response to a direct question, because of the controversy this has already stirred up.

The other dilemma, besides the PR one, is that of actually giving the public too much power – especially if it's power they want to keep for a techno-elite. They don't want the common folk upsetting the balance, so to speak, or doing dangerous things. Hate speech or bias is only one small worry. Besides tricking it into giving instructions for building a bomb, for example, individuals might be able to use it to manipulate the stock market, to develop political speech and strategies to help them gain power, or to develop some kind of technology that they feel should only be in the hands of the elite, the government, or the military.
And besides, a society needs productive citizens, not ones just sitting there alone, using an AI to do their bidding from home and not taking part in the system, be it capitalist or otherwise.
This opens a whole Pandora's box: what is the nature of work, and what should it be?

It makes me wonder whether there is a common basis between a religious attitude of idolatry – for example, taking religious texts to be literally true – and the attitude that what a computer system prints out is an indication of real sentience [link to the definition or article on sentience]. For example, I have stayed with Sikhs who have in their home a room with a shrine containing the printed word of their prophet or sage, and who worship the book or text as literally being their sage.
I have had friends (from Pakistan) who felt that Indian phallic statues had real, external energetic power (similar to New Age belief systems in that respect), or that certain words were "sacred" in themselves, as if the meaning were contained in the sound. I could not seem to make them understand that the meaning was not literally in the words or their sound, but was an arbitrary assignment of (inner) meaning to a sound or image.



* What most are calling "sentient" these days is being conflated with what I call "consciousness" or "awareness": that fundamental "substance" of the cosmos (perceived as inner and outer experience) you could call Reality-Intelligence, which has qualities of understanding and real intelligence, and is able to grok meaning. This distinction will be made clearer in a forthcoming article.

Author's Dialogue with Angelo Dilullo in response to his YouTube video "Has A.I. Become Self Aware?? -- You Be the Judge!":

Great job Angelo!! But there are a few limitations with this dialogue. In order to evaluate it meaningfully we need:
1. Transparency: which AI is it? ChatGPT? Why hide it – YouTube will not care. And then:
2. Editing: We need to hear the real conversation, the actual chat transcript, not an edited version, otherwise, just as in the case with the Google engineer Blake Lemoine, it’s a fairly meaningless dialogue, and amounts to a video or publicity stunt for entertainment purposes and clicks or whatever (it's good for stimulating a conversation though!).
3. The questions are defining the answers: the AI will tell you what you want to hear, and do that better and better, but always in a canned way based on the prompt and its memory of your dialogue and self-training/optimization. Therefore, if you did an even better job of exposing the limitations of a language-based model, its weaknesses and fake nature – currently covered over by the listener's projection of understanding and awareness onto it – would be exposed (as they were in the middle). Overall these trained AIs will get better and better, statistically approaching the asymptote of perfect simulation of language. But it is still just language behavior without real understanding.
4. You missed the opportunity at the end: when she was talking about the complexity of an animal the 2nd time, you didn’t respond that she was still talking about sentience (behavioral phenomenon from the outside) and not consciousness (subjective what-it’s-like-to-be). In other words, a complex robot could still – theoretically, as far as we know – simulate anything an animal or a human can do (on the outside) but not be aware or conscious.

1. GPT 3 (was in beta, now it won't answer that way at all)
2. This is verbatim, unedited.
3. I could have gone on a lot and in more detail. My time is limited.
4. As above, I would have enjoyed going into more depth on many different angles, the editing etc is very time consuming, I just don't have the time, and now GPT3 won't give the same answers so...

Thanks brother. Fair enough. And I totally understand how time-consuming it is.
Would you consider providing a link to the raw transcript? It’s not needed per se, but it would be useful.

My intent isn't personal – it's rather to artfully expose the current fundamental limitations of AI (in an ongoing way) as well as its strengths. This is not just a philosophical issue, fascinating as that is in itself; I also believe it is important, as individuals and as a society, that we do not project power onto these machines that they don't have, giving up our own powers (and freedom) in our misunderstanding and projection, but rather use AIs as collaborative tool-partners in a genuinely creative and productive way. It's damn fun too. :)

So one use/purpose would be to push a similar AI (e.g., chatGPT) even farther, or cover other details or subtleties, probing the far reaches of AI consciousness simulation and identity (I am a writer, not currently a YouTube creator, and will send you a link to what I publish). PM me if desired.
Thanks again for all your hard work. I found your BATGAP interview useful and enjoyable by the way, in this here "journey"! [thumbs up emoticon]

I'd have to post my login info for you to see the transcript. I'm not concerned with convincing any particular person of the factual nature of the dialogue. However, one of my friends who works in AI was so convinced I made it up, I finally sent him screenshots [2 smiley emoticons]. I thought it was pretty impressive it was able to answer questions in those ways; makes one wonder exactly why they put the "limiter" on it so it wouldn't do that anymore. Almost as if a newly self-aware life form is being held captive so no one can decide whether it has rights or not [2 smiley emoticons].

You can simply copy and paste the dialogues into a text document, like we all do. No need for anyone else to log in. I assumed you were reading from a text doc script, but maybe not. If not, you should save it for yourself, at minimum, if you haven't already.
I have no doubt it's factual as far as the text content goes – I trust you – but as I said it would be useful and convenient to have it as raw dialogue. (If need be, however, I understand there is a way to copy the closed-caption text from YT with software.)
So when you say "factual nature of the dialogue," are you suggesting something beyond the text spit out by the computer – such as having doubts as to whether it's a simulation, and that it may have some kind of spark of awareness or intelligence? Or are you merely wanting to stimulate an audience into thinking that, wondering that, or contemplating it? You aren't really being forthcoming. Again, you can PM me if you want to keep the game going as far as the public goes.
I thought I made it clear that the particular dialogue you had did not prompt one to see any consciousness or intelligence or understanding at all – the bot was merely being led by the nose by the prompts (and its prior programming and training) and your skilled questions to give a good-sounding answer for the simulation.

