Technological Governance: A Response to Daniel Jeffries’ Interview on Future Thinkers
I recently enjoyed listening to an interview with a friend – a “colleague in thought” – with whom I engaged in a fascinating discussion (Dialogues With a Mad Solipsist) on the subject of solipsism several years ago, when we were both members of a co-working space in San Diego. Daniel Jeffries is a writer and technologist who has lately been deeply involved with blockchain technology; he is also an author of science fiction featuring AI agents and futuristic scenarios. It’s always a pleasure to experience the richness of his imagination and his no-holds-barred futurism.
So I was excited to hear him on this podcast with Future Thinkers, discussing the application of blockchain technology to the decentralization of governance and tackling such thorny issues as voting, public policy, democracy, and incentivizing behavior. What follows is an expansion of the notes I took while enjoying the podcast. I strongly encourage the reader to listen to it first, as it provides context.
Let’s dive right into the middle: it seems to me he uses the word “ideology” incorrectly. He really means “thought systems.” Or more to the point: the concept of a thought system conveys the psychological fact that the world we experience is a projection of the total system of thinking one has at any moment. This is a more global idea than “ideology.” An “ideology” suggests to me something more along the lines of, for example, a religious or political dogmatic thought system – something quite specific in content. As evidence of this, an individual could hold several ideologies, but they can have only one thought system. “Thought system” gets at the psychological root of the issue, rather than the surface play of ideologies. You could (more readily) program a computer with an ideology, since it’s more or less a fixed, rigid system of interlocking positions, whereas you could not program a computer with a thought system, since it is a living, active gestalt based in intelligence and brought to life as an experienced world by consciousness.
Computer systems would be good at embodying ideologies in the sense that an ideology is an ego-based activity: a fake self. It would be a fun experiment to have a bunch of computerized ideologies (Christians versus Muslims, for example) battling it out, since that is what an ideology is for, psychologically speaking: a defensive system for an illusory self, used to maintain the function of being an identity. You could see how computerized people actually are. And who knows – maybe someone would watch those arguments happening in a simulation and realize, “Wow, that’s how I am!”, and it might spur them to wake up from their “program.”
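To make the rigidity point concrete, here is a toy sketch (entirely my own illustration – nothing proposed in the podcast, and the labels and positions are invented): an “ideology,” in the sense above, is programmable precisely because it reduces to a fixed lookup table of positions.

```python
# Toy illustration: an "ideology" as a fixed, rigid table of interlocking
# positions. Two such tables can "debate" indefinitely, but nothing here
# thinks; each side merely emits the canned position its table assigns to
# a topic. All names and positions below are invented for the sketch.

IDEOLOGY_A = {
    "origin of the world": "It was created by design.",
    "basis of ethics": "Ethics comes from revealed law.",
}

IDEOLOGY_B = {
    "origin of the world": "It emerged from natural processes.",
    "basis of ethics": "Ethics comes from social consensus.",
}

def debate(side_a, side_b, topics):
    """Run the 'debate': each side mechanically looks up its fixed position."""
    return [(topic, side_a[topic], side_b[topic]) for topic in topics]

for topic, pos_a, pos_b in debate(IDEOLOGY_A, IDEOLOGY_B,
                                  ["origin of the world", "basis of ethics"]):
    print(f"{topic}: A says {pos_a!r} / B says {pos_b!r}")
```

The point of the sketch is the contrast: the exchange looks like an argument, yet no position can ever shift, because there is no living intelligence behind it – only a table.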
I’ve been researching developments in the AI field (such as “deep learning,” the varieties of neural nets and how they work, and the claims of AI companies, not to mention the kinds of assumptions science fiction stories make) and reviewing the issues, as well as blockchain technology and its uses and social implications. So how does the blockchain relate to AI? And what does governance have to do with blockchains? First, I keep hearing the implicit assumption that there is, or will be, some kind of real intelligence in these systems, either as they are or with the addition of AI (such as when complex decisions need to be made). But would it be true generalized intelligence? That expectation rests on erroneous philosophical assumptions and on what might be developed in the future (a subject covered in a longer, forthcoming article) – a future that seems ever-receding. Second, it assumes that the populace and the politicians share the interest, understanding, vision, and will of the technophiles who have these enthusiasms.
The fact is, people want people in governance, not machines. They look up to figureheads and think they need them. We need to distinguish between a science-fiction-fueled imagined world and social-psychological reality. And there is a grain of truth to the public’s feelings or intuitions with respect to leadership: machines do not have consciousness, creativity, or general intelligence and understanding of human affairs. For the same reason, they will not be replacing executives in corporations either (this was a fear back in the ’60s through the ’80s, when computers first came into public awareness and “expert systems” started popping up). Such systems can only simulate aspects of human leadership or philosophical thinking (see Plato’s Republic for a relevant perspective on the “philosopher-king”), or embody limited cognitive-type processing, like brains (limited, biological parallel processors with no consciousness), but they do not embody the substance of consciousness and natural intelligence (e.g. intuition). They can remember and apply rules flawlessly. But ethics and politics are not rule-based; they are *contextual*. For example, you can codify laws, but not the application of them. The belief that one can codify the application of laws is a faith-based belief, nothing more, like the belief that consciousness comes from brains (a belief being something held to be true regardless of evidence).
It seems widespread in the field of blockchain and AI that behind the grand claims, enthusiasts are making the same old assumptions that have been around for decades, based on the erroneous religious beliefs of scientific materialism. The blockchain and cryptocurrency religions are merely offshoots of that main religion prevalent in the culture.
So what can be incentivized? There was a discussion of happiness and how that can be warped. One could end up rewarding the wrong things, if I remember the thrust of the conversation: for example, incentivizing being a despotic asshole, or getting into a Black Mirror-type scenario of social fakery in order to get ranked and allowed access (or not) to goods and services. One could end up in the weird situation of being controlled by that system. (Interestingly, this also serves as a technological analogy for what is already the case: in reality we are already controlled by ego and unconscious patterns of psychological conditioning. We think and behave according to what we are not, and this is the basis of our unhappiness.)
But the proposed system of incentives fails to address the real underlying dynamic with respect to happiness, which is ego and control versus authenticity and reality. Most humans don’t have a basic angle of understanding on what reality is, and are identifying with what is false. Another way of describing that dynamic is the play of ignorance versus being on the path of knowledge (in the Hindu or Buddhist sense, for instance). So how do you incentivize self-knowledge and happiness rather than ignorance? How do you incentivize love instead of fear, or de-incentivize what obscures love and true happiness? Is this possible? Is it desirable? Should we really try to mess with things at this level? And how do you avoid being punished, as it were, for being innocent, when one goes in the wrong direction vis-à-vis such a system and its vectors of happiness and incentivized behavior (right thinking and right action, in Buddhist terms) – which in essence embody a value system – since everyone is in essence innocent?

Even if you could do this, there is going to be push-back against any system you come up with, because you can’t form rules for society. In other words, whose job would that be? An aristocratic, technological elite? Large (or small) companies or teams creating blockchain applications, or corporations and government bodies, as we are likely to see more and more in the future? The alternative is that it devolves into mob rule. So this new elite will be pulling the levers behind the scenes, more or less. We already have this to a degree, and I’m sure there are billionaires in Silicon Valley and elsewhere who want more of it and want to keep it that way. Meanwhile they will go live in their hardened missile silos converted into shelters and their New Zealand escape plantations, fearing rebellion when the masses figure it out and rise up (see articles like https://www.newyorker.com/magazine/2017/01/30/doomsday-prep-for-the-super-rich).
The burden of proof lies on those who assume and believe that computer systems can do things like understand and apply justice, since a negative can’t be proved. Unfortunately, we might see such systems deployed in the process of this experimental proving, and these will be painful lessons, especially when they are applied by the faithful true believers in technology as the savior of humanity’s problems. Faith tends to blind people to what they don’t want to see.
You want to be careful when you start thinking about codifying values. The reality is, you can’t really. You can set down some guidelines, but it’s not possible to have rules that always apply. Everything is contextual. Proper behavior depends on the complex whole of a situation. A computer system simply can’t read that, have access to that, process that, or properly evaluate it. Even if a robot with a large parallel neural-net brain had all the senses – sight, hearing, touch, smell, taste – and grew up around humans, it would still not have any intuition or any organic sense of a body. It would not have the desires and fears and the whole relation to the environment, the universe, and the inner cell structure, nor the intuitive glimpses of unity via consciousness. It would not have awareness of itself or the self-evidence of truths arrived at through that radical subjectivity. No one knows where intuition and self-evident truth come from, but they are not coming from a cryptocurrency happiness governance vending machine. That much is certain.
I’m not concerned that some AI or singularity is going to take over the world (as in movies and books such as “Colossus: The Forbin Project”); I merely advise folks not to give away our freedom and intelligence to a technological system by granting it powers it does not have. It is only a projection to see intelligence and wisdom where they are not. Actual creativity, beauty, love, peace, and truth will come from the authors and users of such systems, not the systems themselves. This is evidenced by the computer programs that have been invented to create (supposedly) original artwork: the real “art” is in the creation of the software, not the interesting pictures the software generates. Another good example is the various chatbots and artificial girlfriends/boyfriends, and how easy they are to expose as simulations (for this writer, anyway). And yet many users project intelligence and understanding into them. (If you want to test this, try any chatbot out there, and instead of letting it lead the direction of the dialogue, reference something said earlier in your conversation. You will find there is no continuity in the “intelligence”: there is no thread of understanding or ability to truly delve into a topic to any degree. The fact that even testers at Turing test contests are sometimes fooled says more about the tester than about the chatbot or AI.)
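The continuity test above can be illustrated with a toy sketch (my own illustration, not any specific product): a stateless, pattern-matching “chatbot” in the ELIZA tradition. Because it keeps no model of the conversation, a reference back to something said earlier simply falls through to a canned deflection.

```python
# Toy illustration: a stateless pattern-matching chatbot. The canned
# responses are invented for this sketch. Note there is no conversation
# history anywhere -- each reply depends only on the current utterance.

CANNED = {
    "hello": "Hi there! How are you today?",
    "i feel sad": "Why do you feel sad?",
}

def reply(utterance):
    """Match the current utterance only; no memory of prior turns exists."""
    return CANNED.get(utterance.lower().strip(), "Interesting. Tell me more.")

print(reply("Hello"))                   # a plausible canned greeting
print(reply("I feel sad"))              # a plausible canned follow-up
print(reply("What did I say I felt?"))  # back-reference: only a deflection
```

The third exchange is the test: a question that requires remembering the second turn gets the generic fallback, because there is no thread of understanding to draw on. (Modern systems carry more state than this caricature, but the probing technique the text describes is the same.)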
One parallel worth pointing out is that we do not have computers writing software to any significant degree. There is more and more need for good programmers, not less. The computers, the software, and all the infrastructure are tools, not tool-makers.
Something not touched on in the interview is that all the infrastructure blockchain technology depends on is quite fragile and very complex, with many, many layers, all dependent on each other. It’s an electronic house of cards. Internet and cloud services have outages, and even when up and running, they are not always available to an individual (I’ve talked about this at the end of my article about money, currently under revision). Would we really want fundamental social systems like money and governance to depend on such enormous, unwieldy systems – so completely contrived and inherently brittle, subject to breakdown from a misplaced comma in a database backup program on a server in some remote server farm? This is not an unprecedented scenario, as witnessed by the AWS outages in 2015.
(see for example: http://www.datacenterdynamics.com/content-tracks/colo-cloud/aws-suffers-a-five-hour-outage-in-the-us/94841.fullarticle)
I am by no means against technology (thank god for that, since I work for a company that makes software for helping to run elections!). I am not a Luddite crying that the sky is falling. In fact, I love technology and am fascinated by its applications. But in part because of my deep involvement with it – including consulting for users who have seen very painful data loss (such as losing the only copy of a Master’s thesis on a floppy disk as a result of having too much faith in technology) – I have significant reservations about its over-application. These reservations, and a desire to clarify and spread a little more love and understanding, come from an understanding of the limits of intellect, the limits of science and of the technology derived from it, and of its underlying assumptions and worldview. In fact, my view is that to truly create something approaching a genuine general AI, it is absolutely necessary to recognize and acknowledge these limits, rather than madly pursue dreams down dead-end alleyways. The same applies to blockchain technology as it is applied to social problems and opportunities. Do we want to reproduce the same insanity and ignorant worldviews in the new systems we build?