Deep Differences between Man and Machine
Let us stop using bad sci-fi metaphors for AI.
The machine has always been man's favourite metaphor for himself. Whenever humans strove to unlock the enigma of the human brain, they reached for the most sophisticated machine available at the time: initially, man's spirit was thought to originate from fire, the first technology; later, wheels and clockworks offered more subtle comparisons, eventually giving way to the computer. Resting on the assumption that the brain runs some computational algorithm, it is commonly supposed that artificial intelligence will one day not only emulate human consciousness but even outgrow it. This assumption, however, is grounded in a misguided application of a metaphor, one that ignores crucial differences between computational and cognitive systems.
I approach the topic of artificial intelligence through a structural analysis of its components and argue that no such structure is sufficient to reconstruct the behaviour of biological neural tissue. This requires a foundational introduction to autopoietic and allopoietic systems, as well as an understanding of the difficulties of computing feedback effects. Finally, we will better understand the conditions under which self-organising behaviour can arise in sophisticated algorithms.
Autopoiesis and Allopoiesis
The foremost distinction in systems theory is that between autopoietic and allopoietic systems. Put simply, the former are systems that maintain, recreate, and evolve themselves, whereas the latter require an external agent as guarantor of the system's continuation. In other words, an autopoietic system is capable of self-organisation: its processes include the maintenance of the system itself. It is important to note that there is neither an overarching metasystem containing all other systems, nor a clearly delimited number of autopoietic systems. Rather than metaphysically real entities, they should be understood as structures that emerge through differentiation as our means of observation become more sophisticated. Autopoietic systems do not exist in isolation but are always embedded in complex relationships across multiple systems, referred to as structural couplings. Fundamental types of autopoietic systems include biological, social, and cognitive systems, all of which are embedded in one another.
The question some AI researchers ask nowadays is whether technological systems can be constructed in such a way that they become a replica or enable the simulation of a cognitive system. If so, what would be necessary to achieve this, and if not, is there any other way in which technology can become its own autopoietic system?
Our contemporary computational systems are allopoietic: they require external action for their organisation and reproduction. Computers are not yet capable of developing and producing their own hardware without human-led design and manufacturing; so far, some form of human planning somewhere along the process is indispensable. Though a large portion of the process has indeed been automated or at least algorithmically augmented, the fact remains that the combined expertise along the assembly line must amount to a complete (cognitive) understanding of the technical rules by which a computer functions. I have written an article highlighting how Roger Penrose showed that, in the context of computers, these rules can be understood as formal axioms of behaviour. Complete knowledge of such axioms cannot be attained for an autopoietic system, for two reasons: (1) its inherent feedback connections and (2) its complex interrelationships with other autopoietic systems.
Firstly, every self-organising system adapts and develops according to changes occurring within it and its environment. A cognitive system in particular will change as soon as the alleged rules of its behaviour are even partially understood. Secondly, the specifics of social and biological systems influence cognition in a complex fashion that cannot easily be extrapolated. Cognition and consciousness cannot arise without biologically embedded functionality or a social feedback environment. What would selfhood be like if it never encountered other selves?
Autonomy of Technology?
Technology can be regarded as a fourth system potentially emerging on top of the biological, social and cognitive systems. For technology to become autopoietic, however, it would need to be embedded within a complex interrelationship with these other systems. In other words, its rules of behaviour would need to be unknowable due to their inseparability from the rules of biological, social or cognitive behaviour.
An interesting example is the stock market: whilst most of its actions are performed by algorithms, its rules of behaviour have evidently not been discovered. Instead, it is interlinked with all the complexities of human personal and collective decisions in modern economies. The stock market is a cybernetic technological system: neither a part of human cognition nor a separate, constructed machine, it emerged through the interplay of humans and machines.
We can now observe that the commonly held assumption about conscious machines derives from an outdated philosophical picture: a kind of Cartesian box, that is, a device made of dead matter animated by a pure conscious subject, one that allegedly can be simulated algorithmically.
On Circular Connections
Taking all these considerations about allopoietic and autopoietic systems into account, we arrive at a renewed approach to the question of conscious machines. An AI researcher must now ask whether autopoiesis can be induced in a technological system, either by technologically simulating its structural couplings with other systems or by integrating it further into the already existing autopoietic systems. The first option can be dismissed on purely practical grounds: it is well known in theoretical physics that feedback effects lead to divergences that are not mathematically tractable without some trickery. If, for instance, we wanted to construct a neural network whose weights were connected back onto themselves, we would have to compute an infinite series of terms. Truncating is, of course, an option; in such highly complex dynamical systems, however, any means of simplification or rounding will lead to vastly diverging results.
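The instability of truncation can be made concrete with a toy self-referential update (a sketch of my own, not a neural network): iterating the same chaotic recurrence twice, once in full double precision and once with each intermediate value rounded, yields trajectories that soon bear no resemblance to one another.

```python
def trajectory(x, steps, round_digits=None):
    """Iterate the logistic map x -> 4x(1-x), a simple feedback system.
    If round_digits is given, every intermediate value is rounded,
    simulating a truncated computation method."""
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        if round_digits is not None:
            x = round(x, round_digits)
        out.append(x)
    return out

full = trajectory(0.2, 60)
truncated = trajectory(0.2, 60, round_digits=7)

# The two "methods" compute the very same recurrence, yet their
# trajectories decorrelate completely within a few dozen steps.
divergence = max(abs(a - b) for a, b in zip(full, truncated))
print(divergence)
```

A rounding error of roughly 10⁻⁸ introduced in the first few steps is amplified at every iteration, so by the end of the run the two computations disagree on the order of the values themselves, not merely their last digits.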
Cutting through the mathematical language: slight differences in the computation method of the same network lead to vastly different results, rendering reliable training impossible. A possible objection to this argument is the existence of Recurrent Neural Networks (RNNs), which seemingly make use of circular connections. RNNs cannot, however, be regarded as exhibiting true dynamical recurrence, since one of the components (either the training data or the weights) is held constant whilst the other changes. This mirrors the method by which feedback effects are avoided in theoretical physics: the movement of particles in fields is computed by declaring them test particles with a negligible feedback effect on the surrounding field.
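The point about RNNs can be illustrated with a hypothetical scalar example of my own (not any particular library's implementation): during a forward pass the recurrence runs over the hidden state alone, while the weights stay frozen; they are only adjusted in a separate training phase, during which the data is held fixed. State and parameters never co-evolve within one closed loop.

```python
import math

def forward(xs, w_h, w_x, h0=0.0):
    """One unrolled pass of a minimal scalar 'RNN'.
    The hidden state h is fed back at every step, but the weights
    w_h and w_x are constants for the entire pass."""
    h = h0
    for x in xs:
        h = math.tanh(w_h * h + w_x * x)
    return h

# The apparent feedback loop closes only over h; re-running the
# pass with identical weights reproduces the result exactly.
h_final = forward([1.0, 0.0, 1.0], w_h=0.5, w_x=0.8)
print(h_final)
```

The recurrence is thus entirely deterministic given the frozen weights, which is precisely what distinguishes it from a system whose parameters feed back onto themselves.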
Our Machinic Companions
The more interesting way to conceptualise technological autopoiesis would be the idea of an integration of technology into cognition. Could there be a way to involve artificial intelligence in our thought processes in such a way that technology emerges as a new domain within the ecology of autopoietic systems? And if so, has it already begun? To answer these questions, we need to go deeper into the evolution of our relationship to technology.
Samuel Butler, a contemporary of Charles Darwin, was one of the first to regard machines as part of nature. He assumed that machines not only co-evolve with us but use humans as their means of reproduction. He further claimed that human intelligence and ingenuity can only arise through the symbiosis of man and machine: whilst we enable the evolution of the machines' structural power and complexity, they enable the expansion of our biological, social, and cognitive forces.
As with any symbiosis in nature, the relationship can become mutually exploitative or even parasitic at times. Nonetheless, it is this unique relationship between our species and its creation that has enabled humanity's enormous success. Note that machines are not seen here as other humans but as a different kind of entity, more akin to a fungus when one looks at today's Internet. This image of machines as our evolutionary companions underlines the need to move away from the naïve assumption of a conscious subject in a box.
We can now assume that, regardless of the advances of AI, human cognition will remain a crucial component of any technological system. Just as one cannot remove the biological or social foundations of consciousness, we will not be able to remove the human cognitive foundations from technology. AI will become more integrated, whether through neurotech or by other means, but its specific type of self-organisation will differ from that of human consciousness (and that is okay). Moving forward, we should avoid being misguided by false metaphors.
If I see a conscious entity whenever I look in the mirror, I should not conclude that the mirror is conscious. Stupendously complex feedback loops between man and machine are forming, and our proper understanding of them will determine the prosperity and health of this relationship.