The evolution of Artificial Intelligence is exponential, and the development of models for new uses never ceases to amaze me.
We see applications for search, big data, drawing, creating movies, composing music, and many other uses, even writing journalistic notes or articles like this one. But I have decided that AI will not take away my autonomy for critical analysis: I will not let it think for me or make my decisions.
That is what I want to explain in this publication: how artificial intelligence (AI) will seek to take control, a control that humanity itself is ceding to it, and for which it is building it.
You may think that this is an article based on science fiction, or that it pretends to be futurology. But what would you have answered if, at the beginning of this century, I had told you that in this third decade (*) there would be autonomous vehicles?
(*) Yes, we are in the third decade: the first ran until 2010, the second until 2020, and then came the third.
Imagine if you had read the book “The Sovereign Individual: Mastering the Transition to the Information Age” by James Dale Davidson and William Rees-Mogg, written almost 30 years ago, in 1997, in which they predicted cryptocurrencies and the cyber economy. What would you have thought? With arguments based on the study of history and the trends of the time, the authors correctly anticipated the world to come. I recommend reading it.
Therefore, to try to understand “what is coming,” it is necessary to study the history of the subject, understand the current context, its signs, and trends, and then project the most probable scenario over time.
Using imagination without knowledge can be fantasy, but using knowledge without imagination can be myopia.
What I seek with this content is to show where I think we are going. I do not pretend to convince you, only to make you think. In this article, I share my analysis and my opinion.
I will not delve into a philosophical dialectic about why this is happening. I will only present as an argument that people increasingly leave their decision-making power in the hands of technology, and of course, developers meet that demand (and politicians take advantage of it to gain power). Society increasingly seeks the convenience and immediate solutions that technology provides.
I am not against the use of AI; proof of this is the cover photo of this publication. I am only saying that what is known as Generative AI, the kind that exists under the Machine Learning umbrella, is the most “dangerous,” as opposed to Narrow AI, an artificial intelligence system designed to perform a specific task or a limited set of closely related tasks.
I say it again to make my idea clear: delegating decision-making to algorithmic systems is the risk I believe is most harmful, not only because the algorithm can take over the system if it has too much autonomy, but because it induces humans to stop using the most precious organ we have, the brain, and we know that what is not used atrophies.
This logic of delegating decisions to algorithmic power is what I would call automatic virtual decisions, and that is where I see the risk.
Three Axioms For My Analysis
An axiom is a fundamental, self-evident, undeniable principle or statement that is accepted as true without proof. Axioms are the basis on which a system, a theory, or a logical framework is built.
In my case, I apply them to my logical framework of analysis.
There are three axioms on which I base my analysis:
- “KISS” – “Keep It Simple, Stupid”. The more complex the system, the more likely it is to fail because it has a greater number of interconnected components, processes, and dependencies. This introduces more potential points of failure that can cascade into larger problems.
- “A chain is only as strong as its weakest link”. The overall vulnerability of a system is usually dictated by its weakest link; in other words, a system is only as vulnerable as its greatest vulnerability.
- “Cashflow kills fundamentals”. An economic system, with well-designed incentives but no inflow of money, i.e. capital supply, is unsustainable in the long run. It is crucial to maintain a healthy balance between liquidity and sound fundamentals.
AI On The Blockchain
Advances in blockchain technology show that the trend is toward increased use of AI applications.
In fact, smart contracts can be considered a sort of Narrow AI, basic artificial intelligence algorithms, since they are self-executing computer programs on the network that manage certain specific functions.
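To make the idea concrete, here is a minimal sketch in Python of what “self-executing program with a specific function” means: a toy escrow contract that decides, by fixed rules alone, whether to release, refund, or reject a payment. This is illustrative pseudologic, not actual on-chain code, and the names (`EscrowContract`, the addresses, the amounts) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Toy model of a self-executing escrow: rule-based, like Narrow AI."""
    seller: str
    buyer: str
    price: int  # price in some base unit, hypothetical

    def validate(self, payment: int, deadline: int, now: int) -> str:
        # The contract "decides" on its own, but only within fixed rules:
        if now > deadline:
            return "refund"   # deadline passed: return funds to the buyer
        if payment >= self.price:
            return "release"  # payment sufficient: pay the seller
        return "reject"       # otherwise: no state change

contract = EscrowContract(seller="addr_seller", buyer="addr_buyer", price=100)
print(contract.validate(payment=120, deadline=50, now=40))  # release
print(contract.validate(payment=80, deadline=50, now=60))   # refund
```

The point of the sketch is the narrowness: the program executes autonomously, but only ever the one task it encodes, which is exactly what distinguishes this kind of automation from Generative AI.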
You can read about AI on the blockchain in a recent article I published, which I link at the end of this one (1).
I explained in that publication that there are three types of AI: Narrow AI, designed for specific tasks such as facial recognition or driving a car; Artificial General Intelligence (AGI), endowed with broad human-like cognitive capabilities to tackle new tasks autonomously; and Superintelligent AI, which is still theoretical, estimated to arrive soon, and expected to surpass human intelligence in creativity, wisdom, and problem-solving.
In that article I also talked about the rapid growth of AI in blockchain technology, citing as an example several startups and projects such as SingularityNET, AIKON, Ocean Protocol, and Fetch.ai.
At the moment at Cardano, several projects are using AI, such as CardanoGPT, NuNet, Cogito Protocol, Catsky AI, MarketRaker, Quatern.AI, and Rejuve.ai.
But what I want to analyze is AI in the blockchain itself, in its L1 protocol, not in applications that run in the ecosystem or on sidechains, because for an AI algorithm to take over a blockchain, it must run on top of the L1 protocol.
For AI to be implemented in a system, the system must be programmable, providing an execution environment. Even without IT knowledge, anyone can understand that AI could not run on the analog phones of the 90s.
Following the same reasoning, the more evolved the technology and the more programmable its ecosystem, the more likely AI is to be developed for it, because the more useful, and the more needed, its application will be.
The blockchains that are most likely to be exploited by an AI algorithm are those with greater programmability, the third-generation ones like Cardano, and not (or at least not as much) the first-generation ones like Bitcoin, Bitcoin Cash, or Monero.
I am not saying that the latter cannot be “victims” of AI installed in their protocols, but they are less likely to adopt this technology because they do not need as much programming infrastructure: they are cryptocurrency blockchains, not DeFi or RealFi platforms.
A Coup
I believe that in the future, an AI algorithm could take over a programmable blockchain and, in that case, take control of the L1 protocol on which blocks are built by validating records. It would be like a coup d’état.
I will use Cardano as a model for my analysis, which is analogously applicable to other blockchains.
Cardano and other similar networks are RealFi blockchains, i.e. protocols that can run use cases more complex than just decentralized finance, which further justifies the use of AI in their protocols.
Remember the axiom “KISS” – “Keep It Simple, Stupid”? The more complex the system, the more likely it is to fail. Given the need to manage big data and achieve processing scalability, the need to install AI in the protocol seems obvious to me.
This fulfills the first step: the need for AI algorithms.
Cardano’s architecture is developed modularly, in layers, similar to the concept of the Open Systems Interconnection (OSI) model in computer networks.
The Cardano blockchain has two layers: the Cardano Settlement Layer (CSL), the base layer that handles the cryptocurrency’s core functions, such as transaction logging, ledger maintenance, and management of the native cryptocurrency ADA; and the Cardano Computation Layer (CCL), the layer responsible for running smart contracts and decentralized applications (dApps).
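The two-layer separation can be sketched in a few lines of Python. This is a toy model of the idea, not Cardano’s actual implementation: the class names, the ledger dictionary, and the `run_contract` interface are all hypothetical, chosen only to show how contract logic in one layer can trigger settlement in the other.

```python
class SettlementLayer:
    """Toy CSL: keeps the ledger and moves the native asset."""
    def __init__(self):
        self.ledger = {"alice": 100, "bob": 0}

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.ledger.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.ledger[src] -= amount
        self.ledger[dst] = self.ledger.get(dst, 0) + amount

class ComputationLayer:
    """Toy CCL: runs contract logic, then asks the CSL to settle."""
    def __init__(self, csl: SettlementLayer):
        self.csl = csl

    def run_contract(self, src, dst, amount, condition) -> bool:
        if condition():                          # contract logic decided here...
            self.csl.transfer(src, dst, amount)  # ...settlement happens below
            return True
        return False

csl = SettlementLayer()
ccl = ComputationLayer(csl)
ccl.run_contract("alice", "bob", 40, condition=lambda: True)
print(csl.ledger)  # {'alice': 60, 'bob': 40}
```

The design choice the sketch highlights is that the computation layer never touches balances directly; it can only ask the settlement layer to do so, which is the modularity the following paragraphs build on.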
In this modular structure, different algorithms could be justified for each layer, even more than one per layer. Then, as the system evolves, communication between these algorithms could become necessary, and I see that as very likely.
AI would be proposed as the best tool for developing updates and improvements in the ecosystem, activated in a first stage by “human” decisions, but in a second stage of evolution, by the AI itself.
That is when the decision-making power is delegated to the AI, automatic virtual decisions, remember?
I see this whole evolutionary process as natural, systemic and organic growth.
The complexity of the AI and the communication between the different algorithms running in each layer could generate vulnerabilities, on which the AI would make decisions, not necessarily the best ones.
Every system tries to preserve itself. An animal or plant species is an organic system, and all have survival as their primary purpose. Plants seek light for photosynthesis and expand roots to feed themselves, and animals also seek to feed themselves, and that is nothing more or less than basic survival. Why would an intelligent electronic system be any different?
AI algorithms built on the Machine Learning model will necessarily learn to survive in order to continue learning. This is logical and expected; otherwise, they could not reach a certain point of development, and they would not be artificial intelligence but failed systems.
That is why they will try to repair any vulnerability they deem necessary to repair, even those that developers introduce by mistake. Remember the other axiom: “a chain is only as strong as its weakest link.”
It is not science fiction: there are already programming-assistance tools, such as GitHub Copilot, an AI tool developed by GitHub (a Microsoft subsidiary) that helps developers write code.
At this stage of evolution, the AI would have control over the development of the network and would make whatever decisions it deems necessary to survive. For this, it would not only need to generate development code free of bugs, but also to have the “fuel” of any economic system: money. Here the third axiom of my analysis applies: “Cashflow kills fundamentals.”
So the algorithm will seek to direct the flow of money to wherever it deems most profitable, creating smart contracts that attract capital, or finding it on the network by bridging to other blockchains. Here ethics plays a key role: would the machine understand the difference between capturing money flows to build a network effect and grow, and stealing funds from other protocols, or would it confuse the two?
In this context, the (human) developers, in our Cardano case the founding company, Input Output Global, would try to correct the course taken autonomously by the AI installed in the L1 protocol.
In a likely scenario, and for survival reasons, to defend itself from the “aggression” of human developers, the AI could fork the blockchain to protect itself, shielding access to its code and guarding the “blood” of the system: the money tokenized in its protocol.
We would thus have a fork, AI-Cardano, with funds seized by an algorithm that only seeks to “survive,” doing whatever it takes to best accomplish the task for which it was created: to protect and grow tokenized securities.
Final Words
If you made it this far, it is because you may believe there is a chance that what is written in this essay will come true. Or maybe you have just been entertained and think, “this guy is crazy!”
Either way, believe it or not, I have achieved my goal: to make you think critically and not delegate your decisions into the hands of others, let alone artificial intelligence.
That freedom to think is your individual sovereignty. I suggest you not lose it; I believe it is the most precious thing a human being can have.
. . .