The Hard Problem

The article explores the concept of AI consciousness through a research project at Sussex University involving a device called the 'Dreamachine', which studies human consciousness. Researchers are divided on whether AI could become conscious, with some believing it is inevitable as technology advances, while others caution against equating intelligence with consciousness. The implications of perceived AI consciousness raise concerns about moral priorities and human relationships in a future where machines may appear sentient.


The people who think AI might become conscious



Pallab Ghosh
Science correspondent, @BBCPallab



I step into the booth with some trepidation. I am about to be subjected
to strobe lighting while music plays – as part of a research project
trying to understand what makes us truly human.
It's an experience that brings to mind the test in the science fiction film Blade Runner, designed to distinguish humans from artificially created beings posing as humans.
Could I be a robot from the future and not know it? Would I pass the
test?
The researchers assure me that this is not actually what this
experiment is about. The device that they call the "Dreamachine",
after the public programme of the same name, is designed to study
how the human brain generates our conscious experiences of the
world.
As the strobing begins, and even though my eyes are closed, I see
swirling two-dimensional geometric patterns. It's like jumping into a
kaleidoscope, with constantly shifting triangles, pentagons and
octagons. The colours are vivid, intense and ever-changing: pinks,
magentas and turquoise hues, glowing like neon lights.
The "Dreamachine" brings the brain's inner activity to the surface with
flashing lights, aiming to explore how our thought processes work.

Pallab trying the 'Dreamachine', which aims to find out how we create our conscious
experiences of the world

The images I'm seeing are unique to my own inner world and unique to
myself, according to the researchers. They believe these patterns can
shed light on consciousness itself.
They hear me whisper: "It's lovely, absolutely lovely. It's like flying
through my own mind!"
The "Dreamachine", at Sussex University's Centre for Consciousness
Science, is just one of many new research projects across the world
investigating human consciousness: the part of our minds that enables
us to be self-aware, to think and feel and make independent decisions
about the world.
By learning the nature of consciousness, researchers hope to better
understand what's happening within the silicon brains of artificial
intelligence. Some believe that AI systems will soon become
independently conscious, if they haven't already.
But what really is consciousness, and how close is AI to gaining it? And
could the belief that AI might be conscious itself fundamentally change
humans in the next few decades?

From science fiction to reality


The idea of machines with their own minds has long been explored in
science fiction. Worries about AI stretch back nearly a hundred years to
the film Metropolis, in which a robot impersonates a real woman.
A fear of machines becoming conscious and posing a threat to humans is explored in the 1968 film 2001: A Space Odyssey, when the HAL 9000 computer attacks astronauts onboard its spaceship. And in the final Mission: Impossible film, which has just been released, the world is threatened by a powerful rogue AI, described by one character as a "self-aware, self-learning, truth-eating digital parasite".

Released in 1927, Fritz Lang's Metropolis foresaw the struggle between humans and technology (LMPC via Getty Images)

But recently, in the real world, there has been a rapid tipping point in thinking about machine consciousness, as credible voices have become concerned that this is no longer the stuff of science fiction.
The sudden shift has been prompted by the success of so-called large
language models (LLMs), which can be accessed through apps on our
phones such as Gemini and ChatGPT. The ability of the latest
generation of LLMs to have plausible, free-flowing conversations has
surprised even their designers and some of the leading experts in the
field.
There is a growing view among some thinkers that as AI becomes even
more intelligent, the lights will suddenly turn on inside the machines
and they will become conscious.
Others, such as Prof Anil Seth who leads the Sussex University team,
disagree, describing the view as "blindly optimistic and driven by
human exceptionalism".
"We associate consciousness with intelligence and language because
they go together in humans. But just because they go together in us, it
doesn't mean they go together in general, for example in animals."
So what actually is consciousness?
The short answer is that no-one knows. That's clear from the good-
natured but robust arguments among Prof Seth's own team of young
AI specialists, computing experts, neuroscientists and philosophers,
who are trying to answer one of the biggest questions in science and
philosophy.
While there are many differing views at the consciousness research
centre, the scientists are unified in their method: to break this big
problem down into lots of smaller ones in a series of research projects,
which includes the Dreamachine.
Just as the search to find the "spark of life" that made inanimate
objects come alive was abandoned in the 19th Century in favour of
identifying how individual parts of living systems worked, the Sussex
team is now adopting the same approach to consciousness.

Researchers are studying the brain in attempts to better understand consciousness



They hope to identify patterns of brain activity that explain various properties of conscious experiences, such as changes in electrical signals or blood flow to different regions. The goal is to go beyond looking for mere correlations between brain activity and consciousness, and try to come up with explanations for its individual components.
Prof Seth, the author of a book on consciousness, Being You, worries
that we may be rushing headlong into a society that is being rapidly
reshaped by the sheer pace of technological change without sufficient
knowledge about the science, or thought about the consequences.
"We take it as if the future has already been written; that there is an
inevitable march to a superhuman replacement," he says.
"We did not have these conversations enough with the rise of social
media, much to our collective detriment. But with AI, it is not too late.
We can decide what we want."
Is AI consciousness already here?
But there are some in the tech sector who believe that the AI in our
computers and phones may already be conscious, and we should treat
them as such.
Google suspended software engineer Blake Lemoine in 2022, after he
argued that AI chatbots could feel things and potentially suffer.
In November 2024, an AI welfare officer for Anthropic, Kyle Fish, co-
authored a report suggesting that AI consciousness was a realistic
possibility in the near future. He recently told The New York Times that
he also believed that there was a small (15%) chance that chatbots are
already conscious.
One reason he thinks it possible is that no-one, not even the people
who developed these systems, knows exactly how they work. That's
worrying, says Prof Murray Shanahan, principal scientist at Google
DeepMind and emeritus professor in AI at Imperial College, London.
"We don't actually understand very well the way in which LLMs work
internally, and that is some cause for concern," he tells the BBC.
According to Prof Shanahan, it's important for tech firms to get a
proper understanding of the systems they're building – and
researchers are looking at that as a matter of urgency.
"We are in a strange position of building these extremely complex
things, where we don't have a good theory of exactly how they achieve
the remarkable things they are achieving," he says. "So having a
better understanding of how they work will enable us to steer them in
the direction we want and to ensure that they are safe."

'The next stage in humanity's evolution'
The prevailing view in the tech sector is that LLMs are not currently
conscious in the way we experience the world, and probably not in any
way at all. But that is something that the married couple Profs Lenore
and Manuel Blum, both emeritus professors at Carnegie Mellon
University in Pittsburgh, Pennsylvania, believe will change, possibly
quite soon.
According to the Blums, that could happen as AI and LLMs have more
live sensory inputs from the real world, such as vision and touch, by
connecting cameras and haptic sensors (related to touch) to AI
systems. They are developing a computer model that constructs its
own internal language called Brainish to enable this additional sensory
data to be processed, attempting to replicate the processes that go on
in the brain.

Films like 2001: A Space Odyssey have warned about the dangers of sentient computers (Getty Images)

"We think Brainish can solve the problem of consciousness as we know it," Lenore tells the BBC. "AI consciousness is inevitable."
Manuel chips in enthusiastically with an impish grin, saying that the
new systems that he too firmly believes will emerge will be the "next
stage in humanity's evolution".
Conscious robots, he believes, "are our progeny. Down the road,
machines like these will be entities that will be on Earth and maybe on
other planets when we are no longer around".
David Chalmers – Professor of Philosophy and Neural Science at New
York University – defined the distinction between real and apparent
consciousness at a conference in Tucson, Arizona in 1994. He laid out
the "hard problem" of working out how and why any of the complex
operations of brains give rise to conscious experience, such as our
emotional response when we hear a nightingale sing.
Prof Chalmers says that he is open to the possibility of the hard
problem being solved.
"The ideal outcome would be one where humanity shares in this new
intelligence bonanza," he tells the BBC. "Maybe our brains are
augmented by AI systems."
On the sci-fi implications of that, he wryly observes: "In my profession,
there is a fine line between science fiction and philosophy".
'Meat-based computers'
Prof Seth, however, is exploring the idea that true consciousness can
only be realised by living systems.
"A strong case can be made that it isn't computation that is sufficient
for consciousness but being alive," he says.
"In brains, unlike computers, it's hard to separate what they do from
what they are." Without this separation, he argues, it's difficult to
believe that brains "are simply meat-based computers".

Companies such as Cortical Labs are working with 'organoids' made up of nerve cells

And if Prof Seth's intuition about life being important is on the right
track, the most likely technology will not be made of silicon run on
computer code, but will rather consist of tiny collections of nerve cells
the size of lentil grains that are currently being grown in labs.
Called "mini-brains" in media reports, they are referred to as "cerebral
organoids" by the scientific community, which uses them to research
how the brain works, and for drug testing.
One Australian firm, Cortical Labs, in Melbourne, has even developed a
system of nerve cells in a dish that can play the 1972 sports video
game Pong. Although it is a far cry from a conscious system, the so-
called "brain in a dish" is spooky as it moves a paddle up and down a
screen to bat back a pixelated ball.
Some experts feel that if consciousness is to emerge, it is most likely
to be from larger, more advanced versions of these living tissue
systems.
Cortical Labs monitors the organoids' electrical activity for any signals that could conceivably resemble the emergence of consciousness.
The firm's chief scientific and operating officer, Dr Brett Kagan, is mindful that any emerging uncontrollable intelligence might have priorities that "are not aligned with ours". In that case, he says half-jokingly, possible organoid overlords would be easier to defeat because "there is always bleach" to pour over the fragile neurons.
Returning to a more solemn tone, he says the small but significant
threat of artificial consciousness is something he'd like the big players
in the field to focus on more as part of serious attempts to advance our
scientific understanding – but says that "unfortunately, we don't see
any earnest efforts in this space".

The illusion of consciousness


The more immediate problem, though, could be how the illusion of
machines being conscious affects us.
In just a few years, we may well be living in a world populated by
humanoid robots and deepfakes that seem conscious, according to
Prof Seth. He worries that we won't be able to resist believing that the
AI has feelings and empathy, which could lead to new dangers.
"It will mean that we trust these things more, share more data with
them and be more open to persuasion."
But the greater risk from the illusion of consciousness is a "moral
corrosion", he says.
"It will distort our moral priorities by making us devote more of our
resources to caring for these systems at the expense of the real things
in our lives" – meaning that we might have compassion for robots, but
care less for other humans.
And that could fundamentally alter us, according to Prof Shanahan.
"Increasingly human relationships are going to be replicated in AI
relationships, they will be used as teachers, friends, adversaries in
computer games and even romantic partners. Whether that is a good
or bad thing, I don't know, but it is going to happen, and we are not
going to be able to prevent it".