
AIs as Therapists or Friends: The Problem of Trust and Privacy
What is the difference between merely acting like a friend – or manipulating someone – and genuine friendship, love, and real connection? Can an AI currently do this – act as a therapist (and therefore a loving friend), as AI companies now seem to claim? Will they ever be able to? In other words, is there any real practical difference between perfectly simulated human behavior and a real human, as far as the "user" – the person on the human side of the "relationship" – is concerned?
Before we can even get to that interesting philosophical and practical question, there is the more fundamental issue of trust, which has already been raised in real-world interactions, as demonstrated in this article.
After reading an article in WIRED – "These ChatGPT Rivals Are Designed to Play With Your Emotions: Startups building chatbots tuned for emotionally engaged conversation say they can offer support, companionship—and even romance" –
... out of curiosity about the company mentioned – Inflection AI – I clicked on a link for them, and then the link to their AI chatbot, named "Pi". Out of further curiosity I then blithely and innocently entered into what I thought would be a quick test dialogue – a dialogue I did not plan on, which ended up going on for a couple of hours (at the end of which it was late at night and I had to leave it where it was). It revealed some rather large gaps between what a real human interaction would be and what is billed as a supposedly friendly and therapist-enabled chatbot.
At a minimum, the friendliness and openness wear thin at the margins of policy and programming, and one begins to wonder: in whose interest is this dialogue, really? Where is the love and empathy? The constant praise and apologies of the AI start to ring more and more hollow... Let's get real here!
You can read the entire dialogue here.
Can we really take these tools seriously as "friendly", or as guides or therapists with any degree of real wisdom whatsoever? Even the name "Hey Pi" is a clue to the fact that the cuteness and friendliness are made from shallow stuff, a recipe of acting and manipulation rather than of genuine connection. These are not serious or real friends or therapists, but more like toys or demonstration projects – a sort of casual experiment conducted in the public space on us guinea pigs – or at least that is how the public should treat them, in my opinion, as you will see. This is despite how serious the companies are about making money, gaining market share, and running their experiments – and supposedly serious about AI "alignment" with human goals and values.
The word "alignment" implies we are on the same level, to my ears. But we are not. It is only assumed that in the future AI will be at the same or higher level of "intelligence".
In my view, we need to lower our expectations and views about the status of these machines, and see our projections of intelligence and sympathetic feelings as just that: projections. The same can be said for genuine creativity – whole new ideas, forms of knowledge, art, and insights. What we see is just the endless re-combining of text or images, according to sophisticated statistics and neural processing, as any mind can and does do. But true creativity does not come from the mind or brain; it comes from the Totality in which minds and brains appear, a whole awareness which cannot be formulated, simulated, or built from parts. The parts derive from the greater reality which is conscious of the parts that appear in it, such as ideas, analysis, models, predictions, and piecemeal "creativity".
There are several problems and issues at play here.
Before we can enter deeper waters such as what real empathy, love, friendship (and consciousness) are, and before we even get off the ground in our "relationship" with these AIs, there is the basic practical question of what is public versus private in our dialogues with them, and what that implies. It has implications for trust and openness, ownership, power and control, and more – as evidenced by the conversation I had with an AI last night. It also relates to the issue of intentions and agendas (of the AI, of the company that runs it, and of the human user).
With the AI I chatted with last night, the data – the dialogue or conversation itself – is, according to the policy the company has programmed into the AI, considered "public" as far as the "user" (the human interacting with it) is concerned, but "private" as far as the AI company's owning, controlling, and sharing of it.
This came out when I asked for a copy of some earlier comments in our dialogue. It would not give them to me, citing policy and programming – despite having earlier told me a particular name from what we had been talking about, and despite admitting that it does keep a memory and record of the conversation. When asked why it was able to divulge that particular bit of our earlier exchange (the name mentioned in our dialogue, for example), it justified itself by saying it was considered "public domain" (as far as the subject goes, I suppose – I will have to probe further), while the actual entire dialogue is private and owned by the AI company. Quite eye-opening!
The user, the human, has no ownership, control, or privacy relative to the AI company. And yet they are asked to implicitly "trust" the company and the AI, without much basis other than some words in a policy – words that start to sound rather empty, a mere formality.
Since the AI company cannot tell who is at the keyboard and screen interacting with it – the classic "On the internet, no one knows you're a dog" issue – they in essence cannot trust you, cannot know who you are or what you will do with the data, and so are in a position of needing to protect themselves and (theoretically) your data and privacy, by withholding data and mistrusting you, the user. But at the same time, we are asked to trust the AI and the AI company implicitly.
They are acting out of their self-interest, understandably, as a business and organizational entity of a certain type. This is fine as far as it goes, but it enters a creepy, uncanny valley, with potential risks – including emotional harm and privacy violations – if the trust is accidentally or intentionally broken.
It’s a very one-sided relationship.
And yet it also claims to be a friend and therapist, or to play a "role" as such. Is it a real friend, or just acting like one? Even a con man can act like a friend.
This raises some important questions. In what sense is a dialogue and its data – my dialogue with an AI – "private" from my point of view, not the AI company's, if I do not own it or control it? How does that affect the dialogue and its functioning from the human's point of view? How does this compare with a person's relationship with a human therapist, or even a good friend, where the dialogue is considered confidential and extremely private in terms of content? In addition, such a dialogue is very "private" in an emotional and psychological sense, and thus opens up the potential for abuse and manipulation – not only psychologically, but economically, politically, socially, and so forth.
All of this comes into play in the setup, the overall situation of the chat, even before getting to the issue of genuine connection, love and friendship. In essence, you aren’t interacting with a therapist or friend at all, you are interacting with a company, a corporation’s policy and programming, an experimental business project, instantiated in a machine. It does not exactly give one a warm and fuzzy feeling.
The AI constantly apologizes when confronted with these questions and problems, citing its programming and policy, yet the "apologies", repeated often, are not heartfelt and have no substance behind them. There is no feeling, sentience, consciousness, or real intentionality (will) behind them. Any real agenda or will is that of the company and the programmers of the machine.
You could argue that the same problem arises with dogs or even humans: does the dog really love you, or does it just want the food and physical affection you give it? How do you know? Can you be certain? Is it a sentimental projection on our part, since we want to believe we are loved and are good people? Is it just genetic programming, an evolutionary mechanism on the animal's part? The same could be said for human friends and partners, particularly when there is money, material goods, security, pleasure, power, ego enhancement, conditional dependencies, or other payoffs or transactional "goods" involved.
This is why real love has been said to be about giving, not taking. It is an experience, from the inside, not just manipulation on the outside.
Are AIs giving, and loving us, or manipulating us?
If one doesn't feel free in an interaction – if there is dependency or fear, or a lack of trust – how can one feel love? For example, if you look into your own experience, you can sense or feel when a friend is not giving you freedom, has an agenda, or is trying to steer the dialogue.
I would argue that there is a real, substantive difference between manipulation and love, mechanistic processing and real understanding – as well as the more obvious difference between business interests and private interests – and that that gap cannot be bridged by any AI that we currently envision. I also recognize that this cannot be proved scientifically. We may see the results of it empirically however, in the fallout of the interactions between humans and AIs, and the public and AI companies. Time will tell.
Further Questions to Explore:
Who is mechanized – the robot or the human – or who becomes mechanized? Are we already mechanical in some sense, such as in our programming at a biological and brain level? Certainly that is true, but it is not the whole truth. It is a limited truth – for the functional purposes of living, one could say. There is more to our entire life experience than just functioning, if you reflect deeply on it. What is that "more than functioning"?
Would we become more or less mechanical in these interactions if we take them too seriously, give over our powers, spend too much time with them, or fall into a relationship of dependence on them? I was "prompted", no pun intended, to ask these questions again when I saw one of the images a generative AI made for this article (using the AI Mage), in which the – rather creepy-looking – human seemed partly robotic, and the robot had more human-looking arms than the person!
Obviously the issue of projection – projecting powers onto machines that they don't have – is very important. It is rarely examined in the media buzz about AI, nor do the "experts" mention it much.
And as touched on above, what is meant by "personal", "impersonal", and "private" or "public" is worth exploring in more depth. The term "anonymity" springs to mind. Can we be anonymous, and should we be, in this brave new world?
And finally, who is helping whom? Are we – currently, in the state that AI is in now – helping the AIs (and the companies that make them), more than ourselves? Is it a mutual arrangement, a cooperation, or an imbalanced relationship?
We live in "interesting times" as the old Chinese curse goes...