Between Data and Power: Towards a Critical Epistemology of the
Algorithmic Society
Clara Lamberdina Erben Vázquez
Üskudar University – Universidad de Santiago de Compostela
16.06.2025
Abstract
The aim of this paper is to provide theoretical foundations for the widely discussed topic of artificial intelligence, with the intention of demystifying and clarifying its nature.
The new algorithmic era will be analysed from a technopolitical perspective, in search of a critical technorealism. The point is neither to reject technology completely nor to accept it naively (techno-optimism), but rather to understand its logic and consequences.
Technorealism, as an intermediate position, recognises technical potential while insisting on
the need for ethical, epistemological and political frameworks, and proposes transparency in
algorithmic processes, critical literacy and social control over technical systems.
The challenge is not technological but epistemic, political and cultural.
In the first section, we will try to introduce the basics of the epistemological approach and, in
turn, describe the current technological framework in which we will apply them.
In the second section, we will reflect on Eurocentric mono-technological culture, analysing
how knowledge of a generative model is constructed, what its epistemic validity is and what
consequences it has for modern society.
In the third section, we will analyse in greater depth the consequences of scientism and the
way in which knowledge of generative models is constructed in a social framework, starting
from the awareness of a reality in which the cultural-technological monopoly is held in the
form of big data by commercial elites.
Finally, we will draw conclusions from the analysis.
Keywords: Technopolitical perspective, epistemology, critical literacy, scientism, generative model, Eurocentric mono-technological culture.
Introduction to Epistemology
Epistemology is the branch of philosophy that deals with the study of knowledge itself. This
may sound strangely redundant: ‘studying knowledge’ is like ‘studying study’, ‘studying
learning’... and yes, epistemology analyses not only how we acquire and process knowledge,
but also explores its limits and forms of validity.
Since classical philosophy, knowledge has been divided into episteme and doxa. Both are beliefs that, like all knowledge, ultimately reside in the mind: they are held and constructed rather than universally known from birth. They differ in that episteme is a well-founded belief, true in the strict sense of resting on rational justification. Episteme requires a conscious subject capable of critical reflection, employing methods that support its validity.
Doxa, on the other hand, is defined as an unjustified opinion or belief. Examples would be the belief that God exists, or that the soul is located in the pineal gland. We cannot say whether such beliefs are true or false in religious terms, but we can say that they are not epistemic beliefs, because they do not rest on justified knowledge.
Applied to this study, the function of epistemology is to determine the type of knowledge that
AI “possesses”, questioning whether it is capable of possessing any kind of knowledge and
how that knowledge is constructed. Our task is to try to decipher the ways in which AI
generates information in order to discuss its epistemic validity.
Contextualisation of the algorithmic era:
To gain a deeper understanding of a generative model, it is necessary to provide a brief
introduction to the socio-technological dynamics that prevail in digital society.
It all starts with Big Data. This is the foundation on which algorithms and artificial
intelligence are developed. Big data refers to massive amounts of information produced by
and about people, things and their interactions. Although Tencent and Baidu in China hold large amounts of data, possession of Big Data is dominated by large US technology corporations such as Google, Meta (Facebook), Amazon and Microsoft.
This phenomenon should not be understood as a ‘malicious theft’ of information and
individual privacy, but rather as what it really is: a widely shared consensus towards
surveillance capitalism. With every click, every cookie, we give our consent to the storage,
use and possession of our data.
Hand in hand with Big Data goes the development of Artificial Intelligence. In 2023, the United States led the race, far ahead of the second world leader, with an investment of 62.5 billion compared to China's 7.3 billion. Other countries, such as Switzerland, Sweden and South Korea, stood out for innovation in this field in 2024.
These dynamics reflect a significant concentration of resources and capabilities in the hands
of a few powers and corporations. xAI, the company founded by Elon Musk in 2023 after his 2018 split from OpenAI, received a £4 billion investment to develop AI products and
build advanced infrastructure. According to Musk, the new company aims to ‘understand the
true nature of the universe’.
Returning to the philosophical matter, and to round off this contextualisation with a little
perspective, Kant spoke of transcendental illusion as the phenomenon of human reason
falsely believing that it can know or comprehend everything, even that which is beyond
possible experience.
Reason, in seeking unity and totality, tries to apply categories of empirical knowledge to what
is beyond experience: God, the soul, the universe... beyond our physical and material reality.
Yuk Hui applies this concept to calculation:
‘The transcendental illusion of calculation is the mistaken belief that the world can be completely understood, represented and controlled through calculation, mathematical models or algorithms.’
Between mind and symbolic processing.
With the aim of demystifying the ‘magical-realist’ narrative of the rebellion of machines, a
comparison will be made between human and artificial intelligence, drawing parallels
between the mind and the capabilities of AI. The purpose is not only to point out their
similarities, but also to highlight their differences, constructing a reference point from which
to assess the legitimacy of artificial ‘knowledge.’
‘Anthropomorphic marketing’ is a strategy that consists of attributing characteristics such as
human voices and forms to products, brands, animals, inanimate objects or even artificial
intelligences, in order to generate an emotional connection between the product and
consumers. The influence of anthropomorphic marketing leads us to ask questions such as:
- Can a statistical model develop consciousness?
- Can a statistical model think?
First, it is necessary to define what we mean by consciousness from a philosophical
perspective. This usually involves three fundamental characteristics:
- Subjective experience (qualia): the capacity for internal, first-person sensations, such as
feeling pain or perceiving the colour red.
- Self-awareness: the ability to recognise oneself as a distinct subject separate from the
environment.
- Intentionality: the ability to have directed thoughts about something, that is, the awareness
that our thoughts refer to objects, ideas or states of the world.
It seems obvious, although it is not always treated as such, that artificial intelligence does not
possess any of these qualities. ChatGPT cannot be considered a conscious subject. It lacks the sensory apparatus required for sensory experience, and without subjective experience we cannot say that it has self-awareness. If asked, an AI can ‘say’ who it is and seems to ‘know’ its nature. But to have self-awareness as such, it would have to reflect on its own thoughts, emotions and intentions, of which it is completely devoid.
To answer the second question about AI's ability to think, we can turn to a complementary
critique by Pasquinelli and Joler (2021), who argue that technologies should be understood
not as true intelligences in the pure sense, but as forms of artificial perception.
What the authors highlight is that the conceptual and engineering basis of these technologies
does not lie in high-level cognitive processes, but rather in pattern recognition through
machine learning, which is refined through the use of more sophisticated architectures, such
as Generative Adversarial Networks (GANs). GANs are networks where a generator attempts
to create realistic fake data to deceive a discriminator, which in turn tries to detect these
fakes. Both improve over time until the generator produces data that is almost
indistinguishable from the real thing.
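To make this generator/discriminator game concrete, the following is a minimal sketch in Python using PyTorch (the framework is my assumption; Pasquinelli and Joler do not name one). A tiny generator learns to imitate a one-dimensional Gaussian distribution while a discriminator tries to tell its samples from the real ones; all architectures and numbers are illustrative only.

import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))         # fakes produced from random noise

    # Discriminator step: learn to label real data 1 and fakes 0
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator call its fakes "real"
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The mean of the generated samples should drift towards 3.0 as training progresses
print(generator(torch.randn(1000, 8)).mean().item())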
A neural network does not compute a specific pattern, but rather the statistical distribution
associated with that pattern. In other words, the network learns a kind of ‘average’ or ‘general
map’ of what the data usually looks like, rather than memorising an exact copy of a particular
case. This allows it to recognise, generalise and respond even when the data it receives is not
identical to what it has encountered before.
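The idea of a learned ‘average’ or ‘general map’ can be sketched in a few lines of Python with NumPy. Instead of memorising individual examples, the system keeps only the mean and spread of the feature vectors it has seen; the vectors and features below are invented for illustration.

import numpy as np

dog_examples = np.array([          # rows: feature vectors of individual dogs
    [4.0, 1.0, 2.0, 30.0],         # columns: paws, tails, ears, weight in kg
    [4.0, 1.0, 2.0, 8.0],
    [4.0, 1.0, 2.0, 22.0],
])

prototype = dog_examples.mean(axis=0)   # the learned 'average' dog
spread = dog_examples.std(axis=0)       # how much the examples vary around it

new_case = np.array([4.0, 1.0, 2.0, 15.0])      # a dog never seen before
distance = np.linalg.norm(new_case - prototype)
print(prototype, distance)   # close to the prototype, so recognised as 'dog'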
That is why we say that a generative model, by reproducing information according to a logical structure, can produce valid and true data, but its output lacks meaning in the epistemic sense. The
information reproduced by AI can be understood as ‘creative,’ but it does not constitute
knowledge in itself. Without a conscious subject, there is no knowledge; symbols are not
constructed until a human subject gives them meaning.
When a neural network processes information, it does not look for an exact example or a
unique pattern, but rather works with the probability and frequency with which that pattern
appears in a large data set. The data (such as image, text, audio) is converted into vectors of
numbers (mathematical representations), so that a word such as ‘cat’ is represented as a
vector in a multidimensional space (embedding). The epistemic vulnerability of this way of
constructing knowledge lies in the fact that algorithms prioritise statistical relevance, not
truth.
I asked an AI to explain how embeddings (vector representations) work, and this was the answer:
An embedding is a way of converting symbolic elements, such as words or phrases,
into numerical vectors that capture their meaning within a mathematical space. They
are based on tokens.
A token is a minimal unit of text in natural language processing (NLP), used by
language models to understand and process text. What counts as a token depends on
the tokenisation system used.
Tokenisation can be by words, subwords or characters, dividing the text into terms,
letters or syllables, separated by spaces or punctuation.
Text: ‘The dog barks’
Tokens by words: [‘The’, ‘dog’, ‘barks’]
Tokens by subwords (when a whole word is not in the vocabulary): [‘d’, ‘##og’]
Tokens by characters: [‘T’, ‘h’, ‘e’, ‘d’, ‘o’, ‘g’, ‘b’, ‘a’, ‘r’, ‘k’, ‘s’]
AI models do not process text directly, but rather numbers. So each token is converted
into a number (ID) through a vocabulary. For example:
‘dog’ → token ID: 9487
‘##og’ → token ID: 3065
These IDs are then converted into vectors (embedding), which are what the model actually
uses.
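As a concrete illustration of the pipeline the quoted answer describes (text to tokens, tokens to IDs, IDs to vectors), here is a minimal sketch in Python. The vocabulary, the IDs and the four-dimensional vectors are invented for illustration; real models use vocabularies of tens of thousands of tokens and vectors with hundreds of dimensions.

import numpy as np

rng = np.random.default_rng(42)

vocab = {"the": 0, "dog": 1, "barks": 2, "[UNK]": 3}     # token -> ID
embedding_matrix = rng.normal(size=(len(vocab), 4))      # one vector (row) per token ID

def tokenize(text: str) -> list[str]:
    """Word-level tokenisation: lowercase and split on whitespace."""
    return text.lower().split()

def embed(text: str) -> np.ndarray:
    """Map each token to its ID, then look up the corresponding vector."""
    ids = [vocab.get(tok, vocab["[UNK]"]) for tok in tokenize(text)]
    return embedding_matrix[ids]

vectors = embed("The dog barks")
print(vectors.shape)   # (3, 4): three tokens, each represented by a 4-dimensional vector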
In other words, if the network is shown numerous photos of dogs, it does not learn a specific image, but rather the common characteristics that dogs usually share, translated into numerical IDs: that they have four elements called ‘paws’, one element called a ‘tail’, two ears... and so on, across indefinitely many other features.
The general map of the concept of a dog that AI constructs differs from the human mental
symbol in that when we think of a dog, we imagine a general idea of what a dog is, giving
real meaning to the word. A computer manipulates symbols without understanding their real
meaning.
Epistemic and semantic limits of artificial intelligence
To strengthen our argument that the type of knowledge reproduced by a generative model is
nothing more than structured numerical information, I will supplement Pasquinelli and Joler's critique with those of five other authors: John Searle, Edmund Gettier, Alvin Goldman, Franz Brentano, and Celis Bueno.
1. John Searle's critique: The Chinese room argument
John Searle (1932–), philosopher of mind and language, developed the Chinese room thought experiment in his famous article Minds, Brains, and Programs (1980), with which he criticised computational functionalism.
The thought experiment consists of imagining a subject locked in a room with no knowledge of the Chinese language, but with a manual in English that tells them how to manipulate Chinese symbols according to certain rules. Through a slit, they receive questions written in Chinese, which they answer perfectly by following the manual, but without understanding what they are answering. Although from the outside it appears that they understand Chinese, in reality they are only following syntactic rules, without access to the meaning of what they are answering.
With the Chinese room, Searle shows that processing information does not imply real understanding of its meaning, and the same holds for artificial information processors. These models succeed in producing an answer not through knowledge but through a mathematical calculation that follows a predetermined structure simulating human knowledge.
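The point can be made painfully literal in a few lines of Python: a dictionary plays the role of Searle's rule manual, and the ‘person in the room’ simply matches incoming symbols against it. The entries are invented; nothing in the code has any access to what the Chinese sentences mean.

# A lookup table standing in for the rule manual: question symbols -> answer symbols
rule_manual = {
    "你好吗?": "我很好, 谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我没有名字。",     # "What is your name?" -> "I have no name."
}

def person_in_the_room(question: str) -> str:
    """Follow the manual: match the incoming symbols, copy out the listed answer."""
    return rule_manual.get(question, "对不起。")   # default symbol: "Sorry."

print(person_in_the_room("你好吗?"))   # a correct-looking answer, produced without understanding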
2. Edmund Gettier and the inadequacy of the classical definition of episteme.
In the previous example, the truth of the answer given seems to be achieved accidentally,
which is reminiscent of the famous cases raised by Edmund Gettier (1927–2021), an
American philosopher and professor at Wayne State University and the University of
Massachusetts Amherst.
In his short but influential 1963 article, Is Justified True Belief Knowledge?, Gettier
questioned the traditional definition of knowledge—known as episteme—understood as
‘justified true belief.’ Through ingenious counterexamples, he demonstrated that a person can
have a belief that is true and justified, yet still not possess knowledge in the full sense.
Until then, the definition of episteme rested on three conditions: belief, truth, and justification.
- Belief: The person believes something.
- Truth: What they believe is true.
- Justification: They have good reasons or evidence to believe it.
Gettier demonstrates that these three conditions can be met without there being true
knowledge. The scenario proposed is as follows:
Two people, Smith and Jones, apply for the same job.
The boss tells Smith that Jones will get the job. This leads Smith to believe that Jones will get
the job. Smith knows that Jones has 10 coins, so he states that the person who will get the job
has 10 coins in their pocket. Ultimately, it is Smith who gets the job, and it turns out that,
unbeknownst to him and by coincidence, Smith also had 10 coins in his pocket.
Smith's statement turns out to be true, it is justified, and he believes it, but he is right by
chance. This type of situation shows that the classical definition of knowledge is insufficient,
as it allows a true and justified belief to occur without real understanding or direct connection
to the truth — as in the Chinese room.
3. Alvin Goldman: New theories for defining knowledge
Gettier's approach led to the need to rethink the classical definitions of true knowledge or
episteme. Among other philosophical alternatives, Alvin Goldman (b. 1938), an American
philosopher and prominent epistemologist, developed two theories that will be key to our
analysis: the causal theory of knowledge and the reliable process theory.
The first theory introduces the need for a direct causal connection between reality and the
subject's belief. In other words, for knowledge to be considered as such, belief ‘P’ must be
caused and motivated by fact ‘P’. The statement “P” could be, for example, ‘there is a tree in
front of me’. His central thesis argues that:
A subject knows/is aware that ‘P’ (belief: there is a tree in front of me) if:
- ‘They believe that P’ (traditional principle of belief)
- ‘P is true’ (traditional principle of truth)
- ‘And the belief that P is appropriately caused by the fact that P’ (new principle of causal
connection)
This argument can be translated and reformulated as follows:
Mary sees a tree in front of her window and, as a result of that direct visual perception of the
tree, of the real fact, she constructs the belief that ‘there is a tree in front of me.’ The
statement is believed by a subject, it is true because the tree is there, it is justified because she
is seeing it, and most importantly, the tree is the cause of the belief.
In this case, and in accordance with the four principles, we are talking about the new
definition of episteme.
This theory is complemented by the theory of the reliable process of knowledge or
Reliabilism. In it, Goldman proposes that what matters is not so much the internal
justification, but the method or process by which the belief is formed. The central thesis
argues that:
A person knows that ‘P’ if:
- ‘They believe that P’,
- ‘P is true’,
- ‘And the belief was produced by a reliable cognitive process’.
By introducing the criterion of reliability of the cognitive process, Goldman rules out cases in
which the belief, although caused by the fact (causal connection), may be false due to a
sensory mistake. For example:
Mary might say ‘there is a tree in front of me’, and although her belief seems true, justified
and based on perception, her senses could be failing her by showing her an illusion or
reflection different from the real tree.
Goldman defines a reliable cognitive process as one that leads to the truth regularly, not by
chance. Reliable examples are: unaltered memory, visual perception under normal conditions,
valid logical reasoning, correctly calibrated scientific instruments...
In this way, Goldman manages to refute Gettier's cases. When Smith believes that ‘the person
who will get the job has 10 coins in their pocket’, there is no causal connection between the
fact and the belief:
Having 10 coins in one's pocket is not the cause of the belief that the person will get the job.
There is no cause-and-effect relationship between having 10 coins and getting a job.
In reality, the belief is caused by another fact: the boss's statement that Jones was going to get the job. Nor does the case satisfy the second theory. A subjective statement of this kind is a contingent judgement, which may or may not turn out to be true, and therefore does not arise from a reliable cognitive process for constructing knowledge either.
When Goldman introduces the theory of causal connection and reliability, he provides us with
epistemic bases for analysing the relationship between an AI-generated statement and reality,
concluding that they lack a real connection, however reliable their mathematical procedure
may be. For example, if I ask ChatGPT (GPT-4 model, OpenAI, 2025) how many legs a cat has, it responds as follows:
A cat has four legs.
These are divided into:
Two front legs
Two back legs
Each leg generally has:
Five toes on the front legs
Four toes on the back legs
(although some cats have more toes, a condition called polydactyly)
Would you like to know more about how they use their legs or any specific curiosities?
The artificial belief that ‘a cat has four legs’ is caused, as in the Chinese room, by the
incentive of a question. But there is no stimulus from material reality that generates the belief
that ‘a cat has four legs’. The generative model has learned that this is so by definition; it has
never ‘seen’ a real cat with which to construct the belief that it has four legs. It ‘knows’ this because the information contained in the symbol ‘cat’ is inseparably linked to other symbolic information such as ‘being a mammal’, ‘having four legs’, ‘having five toes on its front legs’...
By defining knowledge (episteme) as a deliberate and intentional process, knowledge ceases
to be merely a state of mind or an accumulation of data, and becomes a conscious, reflective,
and active activity. This implies that knowledge requires intention and consciousness, and as
we concluded in the previous section, a generative model lacks both.
Therefore, there can be no direct causal connection between its result and the truth. Although
the phrase generated by an AI may be true, there is no real connection to reality, since an
artificial system, by its ‘nature,’ does not have the ability to perceive or interact with the
outside world.
4. Intentionality and consciousness in knowledge
The previous point connects directly to the fourth criticism, the idea of intentionality
developed by Franz Brentano (1838–1917), an Austrian philosopher and psychologist
considered one of the founders of modern phenomenology. Brentano argues that all thought
“is about something”, that is, it has intentionality, an essential characteristic of mental acts.
“All consciousness is consciousness of something”.
The author, distinguishing between mental and physical phenomena, defines intentionality as
the essential property of mental phenomena of being directed towards an object, which allows
them to be distinguished from physical phenomena, which lack internal reference. For
example:
A mental phenomenon such as thinking about a dog has intentionality. It is a thought that
focuses on an object, “dog”.
However, in a physical phenomenon such as a stone falling to the ground, there is no
underlying meaning: the action occurs through physical causality, without any symbolic,
affective or conceptual reference to anything else.
This distinction suggests that AI, by manipulating symbols without consciousness or
intentional direction, cannot be considered a cognisant subject, but simply a processing
system based on algorithmic rules such as Bayes' theorem.
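To show what ‘following an algorithmic rule such as Bayes' theorem’ amounts to in practice, here is a toy calculation in Python. The priors and likelihoods are invented; the point is that the output is produced by arithmetic on stored numbers, with no perception of, or intention towards, any object.

# Bayes' rule: P(class | word) = P(word | class) * P(class) / P(word)
p_class = {"animal": 0.5, "vehicle": 0.5}      # prior probabilities (invented)
p_word_given_class = {                         # likelihoods (invented)
    ("barks", "animal"): 0.30,
    ("barks", "vehicle"): 0.01,
}

def posterior(word: str) -> dict[str, float]:
    """Apply Bayes' rule mechanically: no perception, no intention, only arithmetic."""
    joint = {c: p_word_given_class[(word, c)] * p_class[c] for c in p_class}
    evidence = sum(joint.values())             # P(word)
    return {c: joint[c] / evidence for c in joint}

print(posterior("barks"))   # {'animal': ~0.968, 'vehicle': ~0.032}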
Reformulating the question, if I ask ChatGPT what the vector for ‘dog’ is, it gives me the following answer:
In AI (especially in NLP), a vector is a list of numbers that represents a concept. For
example:
‘dog’ → [0.12, -0.33, 0.87, ..., 0.03] ← a vector in a high-dimensional space.
These numbers are coordinates in a semantic space. Words with similar meanings have
vectors that are close to each other. The vector for ‘dog’ is obtained from a pretrained embedding matrix. Each word/token is converted into a vector before being processed. The
vector changes depending on the context (contextual embedding).
Next, to complete this section, I asked an AI to represent several semantic vectors in two dimensions. In the resulting image, the dog and cat vectors lie closer to each other, owing to their similar characteristics (numerical IDs), than, for example, the wolf vector and the car vector.
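Since the image itself cannot be reproduced here, the following Python sketch reproduces the comparison it illustrates, with hand-made two-dimensional vectors standing in for real high-dimensional embeddings (all coordinates are invented): ‘dog’ and ‘cat’ point in almost the same direction, while ‘car’ does not.

import numpy as np

vectors = {
    "dog":  np.array([0.9, 0.8]),
    "cat":  np.array([0.85, 0.75]),
    "wolf": np.array([0.7, 0.95]),
    "car":  np.array([-0.8, 0.1]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means 'pointing the same way'; values near 0 or below mean unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["dog"], vectors["cat"]))   # close to 1
print(cosine_similarity(vectors["dog"], vectors["car"]))   # much lower (negative here)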
From this perspective, a generative model cannot be considered a mental phenomenon, but
rather a physical phenomenon, since the impulse that drives the model to formalise the word
‘dog’ is produced by a mechanical cause, without intentionality, like when a stone falls to the
ground. An AI's “belief” that ‘that is a dog’ does not come from the dog object but from a
mathematical vector.
5. Celis Bueno: objectivity as a social agreement and the ideology implicit in data.
A fifth and final criticism, which draws on ideas from Richard Rorty, is raised by Celis
Bueno, who points out that machine learning algorithms, by relying on past patterns, tend to
reify objectivity as a simple social agreement. Instead of understanding objectivity as a rigorous correspondence with the real world—which would meet a stricter criterion of realism—these systems merely process a pre-existing set of data.
Big Data implies a concrete truth, conditioned and biased by the nature and quality of the
data entered. Therefore, the principle of objectivity assumed by a generative model is based
on partial and algorithmically structured information, which calls into question its fully
objective nature.
What the author points out with this argument can be illustrated with an example:
Suppose an AI system claims that ‘Everest is the highest mountain in the world.’ This
statement is true and justified, based on reliable geographical data and records. However, the
AI ‘believes’ this statement solely because the logical and statistical structure of the algorithm
determines it to be so. In other words, there is no real understanding of the concept of
“mountain” or ‘height.’
Based on these criticisms, it can be concluded that AI is not capable of producing objective
judgements about the world. The model itself answers the question ‘Can AI produce objective
judgements about the world?’ as follows:
Not completely. AI can make objective statements, but it does not generate objective
judgements on its own. Its apparent objectivity depends on the data it was trained with, the
design of the model, and the decisions of those who develop it. In short, it only reflects the
objectivity that has been incorporated into it.
Although they may be true, these judgements reproduce and amplify a specific ideology
because they depend on inherited beliefs and lack critical capacity. This hegemonic ideology is crystallised in the data sets with which models are trained and in the ‘domestication’ operations through which AI responses are brought closer to user expectations.
We can go further and argue that such judgements reinforce or amplify aspects of the prevailing ideology of an era, even those that are less explicit and less analysed by cultural criticism. It is therefore
essential that we, as recipients, become aware of this informational determinism: if we do not
question the information we receive, AI, by its very nature, lacks the critical capacity to do
so. This limits us to accepting statistical correlations without a deep understanding.
Epistemologies of bias: towards noodiversity in artificial intelligence
As AI is a ‘symbolic machine’ (which processes and produces symbols, language,
knowledge), noodiversity implies that there are multiple ways of thinking, representing and
using these machines, respecting different cultures, epistemologies and worldviews in order
to overcome different types of bias.
In The Nooscope Manifested: AI as Instrument of Knowledge Extractivism, Pasquinelli and
Joler distinguish three major forms of bias in algorithmic systems:
1. Historical bias:
This type of bias originates outside the algorithm, in the historical social structures that have
produced systemic inequalities (racism, sexism, colonialism, economic inequality, etc.). The
data that feeds artificial intelligence systems is not neutral: it reflects the values, omissions,
and hierarchies of the social world from which it comes.
For example, if an AI is trained with data from the US judicial system, where there has
historically been over-criminalisation of African American communities, the algorithm will
tend to replicate that pattern of discrimination even if it is not explicitly instructed to do so.
Thus, historical bias perpetuates past injustices under the guise of technical objectivity.
2. Dataset bias:
This bias occurs when the dataset chosen to train the model is not representative of the
phenomenon being modelled. It can happen due to:
- Incomplete or biased sampling (e.g., training a facial recognition model with mostly white faces).
- Lack of geographical, gender or class diversity.
- Data generated in specific contexts that are not adequately generalised.
This type of bias leads to exclusion: underrepresented groups are not ‘seen’ by the model or
are misclassified. Instead of correcting inequalities, the model renders them invisible or
exacerbates them.
For example, a machine translation system that has only been trained with formal texts will
not be able to adequately interpret dialects or popular slang.
3. Algorithmic bias:
This bias is generated during model training, even if the data was well selected. It arises from
the mathematical and statistical rules used by the algorithm: probability, popularity,
frequency, etc.
Models tend to conform to majority patterns and ignore or penalise less frequent behaviours
or cases (outliers). This leads to a forced generalisation of the data set, which can amplify
dominant behaviours and filter out minority, diverse or marginal information. In many cases,
this logic reinforces stereotypes and eliminates nuances.
Furthermore, this type of bias is intensified by algorithmic feedback mechanisms: if a model
predicts more clicks for certain content, that content is displayed more, consumed more, and
therefore the model believes it is even more relevant. This produces a circular amplification
of certain patterns, while silencing others that could be valuable but are less statistically
visible.
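A toy, fully deterministic simulation in Python makes this circular amplification visible: two items with identical real appeal compete for exposure, only the top-ranked item is ever shown, and clicks follow exposure and feed back into the ranking score. The numbers and the update rule are invented for illustration.

true_appeal = {"A": 0.5, "B": 0.5}        # the two items are equally appealing
score = {"A": 1.05, "B": 1.00}            # but the model starts slightly biased

history = []
for round_ in range(50):
    shown = max(score, key=score.get)     # only the top-ranked item is shown
    clicks = 1000 * true_appeal[shown]    # clicks follow exposure
    score[shown] += 0.001 * clicks        # ...and feed back into the ranking score
    history.append(shown)

print(score)            # {'A': ~26.05, 'B': 1.00}: the initial small gap has exploded
print(set(history))     # {'A'}: item B was never shown, so it was never re-evaluated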
From bias to system: how AI reproduces the geopolitics of knowledge
The second bias, dataset bias, goes beyond historical reality and occurs when selecting
sources, data types, and other information for the learning process. Less than 4% of public
data used to train artificial intelligence models originates in Africa, and similar or lower
percentages correspond to other regions such as Latin America and South Asia. Meanwhile,
more than 90% of this data comes from Europe and North America, producing and
reproducing culturally and racially biased information.
It can be argued that the fact that the most technologically developed regions concentrate the
largest proportion of the data that make up the large training corpora for artificial intelligence
systems is a structural and, in a sense, inevitable consequence of the global imbalance in
digital infrastructure. However, this phenomenon is far from neutral. From the theoretical
framework of World-Systems Theory, it is possible to interpret this asymmetry as an
expression of a historically unequal distribution of economic, technical and cognitive power.
As a result, AI models tend to reproduce and reinforce a partial view of the world: that
represented in the data from the dominant regions. This leads to the systematic exclusion of
other cultures, languages, knowledge and ways of life, not because of their irrelevance, but
because of their structural underrepresentation, which is the result of a lack of resources,
infrastructure and technological access in many areas of the Global South.
Another critical aspect related to bias in the datasets used to train artificial intelligence
models is the human infrastructure that underpins these processes. Far from being fully
automated, AI training relies heavily on a vast network of workers tasked with manually
labelling and filtering data. This work, which is essential for models to learn to “imitate”
linguistic, visual or behavioural patterns, is carried out in conditions marked by structural
precariousness that borders on, and even crosses, the thresholds of labour exploitation.
A paradigmatic case is that of the company Sama, subcontracted by OpenAI in 2021, which
employed workers in Kenya to process highly sensitive and traumatic content — including
graphic descriptions of rape, torture, sexual crimes and extreme violence — with the aim of
training models such as ChatGPT to recognise and avoid this type of language. These
workers, many without psychological training, worked exhausting hours for wages ranging
from £1 to £2 per hour, without adequate access to emotional support or guarantees of
contractual stability.
This case highlights not only the systematic outsourcing of the human costs of technological
development to the Global South, but also a phenomenon that has been termed
‘fauxtomatisation’: processes that appear to be automatic but in reality depend on intensive—
and often invisible—human labour. At the epistemic level, this challenges the supposed
neutrality of machine learning, as the data that feeds AI is mediated both by human decisions
and by structures of global inequality.
This analysis allows us to understand that bias in data sets is not simply a technical problem,
but a manifestation of deeper structural dynamics, where artificial intelligence functions as a
node within a global network of cognitive and labour value extraction. The so-called
‘automation’ that characterises these systems hides processes of massive outsourcing of tasks,
where small units of work are distributed to thousands of people, many of them in extremely
precarious conditions.
This phenomenon contributes to reinforcing the illusion of AI with autonomous, almost
magical capabilities, when in reality it is an infrastructure that redistributes human labour in
an unequal, opaque and unregulated manner.
‘By relying directly on databases, past forms (bias) will always affect future predictions. Existing patterns of race, gender and class asymmetry will thus be repeated and intensified by algorithms through a process of positive feedback’ (Celis Bueno, 2022, p. 445).
Artificial intelligence does not just automate tasks: it reorganises and redefines global work
under new forms of digital exploitation, deepening existing asymmetries between the Global
North and South, and between those who produce knowledge and those who filter, anonymise
or ‘clean’ it in conditions of invisibility.
Questioning the validity of biased knowledge from a posthumanist
critique.
To address the need to question the informational determinism of AI, I will offer a critique
from a posthumanist perspective:
The term posthumanism emerged from debates following postmodern, postcolonial, feminist,
and decolonial thinking, challenging the epistemological frameworks of classical humanism,
which upheld a universal conception of the rational, autonomous, Western subject. As Rosi
Braidotti (2015) points out, posthumanism is not a closed or normative concept, but rather ‘an
index for describing our moment’: an era marked by the hybridisation of the human, the
technological, the animal and the environmental. Thus, the posthumanist question is not
‘what are we?’, but ‘what are we becoming as a species?’ and, above all, ‘who is left out of
this transformation?’
In its critical (non-transhumanist) variant, posthumanism opposes the technoliberal ideal of
overcoming the human body through artificial intelligence, genetic engineering and
cybernetics, as such proposals tend to reinforce exclusionary, mercantilist and extractivist
logics. Unlike transhumanism, which dreams of a technologically enhanced humanity, posthumanist criticism warns of the risks of reproducing and automating structural inequalities
through algorithmic systems.
In the algorithmic society, knowledge is increasingly mediated by artificial intelligence
systems that produce ‘automated truths’ from large volumes of data. These systems, such as
the COMPAS judicial software in the US, illustrate how algorithms are not neutral: they
calculate a ‘risk of recidivism’ based on historical patterns that reflect racial and social biases.
Thus, AI does not predict the future: it normalises it, imposing it from a statistical reading of
the past. In this way, algorithmic models become technologies of governance that produce
subjectivities, marginalisations and forms of control.
This logic rests on the mechanistic model and the prejudice of perfect modelling, according
to which every phenomenon — even human behaviour — can be represented, predicted and
quantified. Yuk Hui has criticised this reductionist view, warning that the world cannot be
captured by computational structures without losing complexity, meaning or context. This is a
new computational rationalism, which revives the old modern anthropocentrism, displacing
the enlightened subject with a technocratic system that makes decisions for us, under the
illusion of neutrality and efficiency.
In this sense, algorithmic society represents a radicalised continuation of the modern project,
in which instrumental reason seeks to optimise everything: from production to subjectivity.
AI thus becomes the culmination of extractivist and techno-positivist thinking, which exploits
not only natural resources, but also information, emotions, habits and behaviours. Posthumanist epistemology denounces how this technocratic vision deepens the imbalance
between humans, technologies and ecosystems, generating destructive environmental, social
and epistemic consequences.
Furthermore, feminist and decolonial posthumanism highlights how certain bodies, voices
and territories are systematically underrepresented or eliminated in algorithmic decision-
making systems. While overrepresented actors tend to be young, white, upper-middle-class
men from the northern hemisphere, populations from the global south, women, indigenous
peoples and sexual dissidents are transformed into ‘statistical noise’ or ‘problematic data’.
Therefore, a posthumanist epistemology does not seek to return to the traditional ‘human
subject,’ but rather to construct forms of knowledge that recognise the interdependence
between humans, non-humans and technologies, based on an ethic of relational responsibility.
In the face of informational determinism and false algorithmic objectivity, posthumanist
thinking proposes a radical critique of the systems of power that hide under the guise of
‘intelligence’ in order to imagine other futures where technology is not an instrument of
domination but rather of care, inclusion and plural coexistence. This opens onto the idea of noodiversity, which refers to the diversity of symbolic, intellectual and cultural forms involved in the creation and functioning of technologies such as AI.