The documents cover various aspects of artificial intelligence (AI), including its history, applications in fields like healthcare, finance, transportation, education, and robotics, as well as ethical concerns and programming tools. They discuss the potential of artificial general intelligence (AGI) and its implications, including emerging trends and challenges. Each document highlights the benefits and obstacles associated with AI implementation across different sectors.

PDF 1: History of AI

1. Early Concepts
AI ideas date back to ancient myths of intelligent machines and automata.
2. 20th Century

 1950s: Alan Turing and the Turing Test.
 1956: The term 'Artificial Intelligence' coined at the Dartmouth Conference.
 1960s-70s: Early AI programs for games and problem solving.

3. Modern AI
Deep learning and big data have led to breakthroughs in vision, language, and
robotics.

PDF 2: AI in Healthcare

1. Applications

 Disease diagnosis using image analysis.
 Personalized medicine and treatment plans.
 Drug discovery and clinical trials optimization.

2. Benefits

 Early detection of diseases.
 Faster research and development.
 Reducing human error in diagnosis.

3. Challenges

 Data privacy concerns.
 High cost of AI systems.
 Need for explainable AI.

PDF 3: AI in Finance

1. Applications

 Fraud detection using transaction patterns.
 Algorithmic trading and predictive analytics.
 Customer support via chatbots.

2. Benefits

 Faster and more accurate financial decisions.
 Reducing operational risk.
 Personalizing customer experience.

3. Challenges

 Model biases and errors.
 Regulatory compliance.
 Cybersecurity risks.

PDF 4: AI in Transportation

1. Applications

 Autonomous vehicles and trucks.
 Traffic flow prediction.
 Drone delivery systems.

2. Benefits

 Safer roads.
 Efficient logistics.
 Reduced fuel consumption.

3. Challenges

 Safety regulations.
 Technology adoption.
 Infrastructure requirements.

PDF 5: AI in Education

1. Applications

 Personalized learning platforms.
 Intelligent tutoring systems.
 Automated grading and assessments.

2. Benefits

 Tailored learning experiences.
 Supporting teachers with data insights.
 Engaging interactive content.

3. Challenges

 Accessibility and digital divide.
 Privacy of student data.
 Teacher training and adoption.

PDF 6: AI and Ethics

1. Ethical Concerns

 Bias and fairness in AI decisions.
 Transparency and accountability.
 Privacy and surveillance.

2. Solutions

 Explainable AI.
 Ethical guidelines for AI development.
 Inclusive data sets.

3. Importance
Responsible AI ensures technology benefits society without harm.

PDF 7: AI and Robotics

1. Applications

 Industrial robots in manufacturing.
 Service robots in hospitality and healthcare.
 Humanoid robots for research and interaction.

2. Benefits

 Automation of dangerous tasks.
 Increased productivity.
 Novel human-robot interactions.

3. Challenges

 Cost and maintenance.
 Safety and human coexistence.
 Ethical use of robots.

PDF 8: AI Programming & Tools

1. Programming Languages

 Python, R, Java.
 Popular for machine learning and AI projects.

2. Tools & Frameworks

 TensorFlow, PyTorch for deep learning.
 Scikit-learn for machine learning.
 OpenCV for computer vision.

3. Practical Tips

 Use pre-built libraries for efficiency.
 Experiment with datasets.
 Focus on model evaluation and optimization (see the short sketch below).
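
To make these notes concrete, here is a minimal sketch of the train/evaluate workflow using scikit-learn (one of the libraries listed above); the Iris dataset and random-forest model are illustrative stand-ins, not part of the original notes.

# Minimal scikit-learn example: load data, train a model, evaluate it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                 # stand-in for any tabular dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                       # train on the training split
y_pred = model.predict(X_test)                    # predict on held-out data
print("Test accuracy:", accuracy_score(y_test, y_pred))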

PDF 9: AI Myths vs Reality

1. Common Myths

 AI will replace all human jobs.
 AI is sentient and conscious.
 AI always makes perfect decisions.
2. Reality

 AI complements human work.
 AI lacks consciousness.
 AI decisions depend on data quality.

3. Conclusion
Understanding AI correctly prevents misconceptions and fear.

PDF 10: The Future of AI

1. Emerging Trends

 AI in space exploration.
 Smart cities and IoT integration.
 AI-powered creative tools.

2. Potential Impact

 Transforming industries.
 Enhancing human capabilities.
 Addressing global challenges.

3. Considerations

 Ethical AI development.
 Sustainable and inclusive AI.
 Preparing society for AI-driven changes.

Artificial general intelligence (AGI)—sometimes called human-level intelligence AI—is a type of artificial
intelligence that would match or surpass human capabilities across virtually all cognitive tasks. [1][2]

Some researchers argue that state-of-the-art large language models (LLMs) already exhibit signs of AGI-level
capability, while others maintain that genuine AGI has not yet been achieved. [3] Beyond AGI, artificial
superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. [4]

Unlike artificial narrow intelligence (ANI), whose competence is confined to well-defined tasks, an AGI system can
generalise knowledge, transfer skills between domains, and solve novel problems without task-specific
reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model
—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long
as human-level breadth and proficiency are achieved.[5]

Creating AGI is a primary goal of AI research and of companies such as OpenAI,[6] Google,[7] xAI,[8] and Meta.[9] A
2020 survey identified 72 active AGI research and development projects across 37 countries.[10]

The timeline for achieving human-level intelligence AI remains deeply contested. Recent surveys of AI
researchers give median forecasts ranging from the late 2020s to mid-century, while still recording significant
numbers who expect arrival much sooner—or never at all.[11][12][13] There is debate on the exact definition of AGI and
regarding whether modern LLMs such as GPT-4 are early forms of emerging AGI.[3] AGI is a common topic
in science fiction and futures studies.[14][15]

Contention exists over whether AGI represents an existential risk.[16][17][18] Many AI experts have stated that
mitigating the risk of human extinction posed by AGI should be a global priority. [19][20] Others find the development
of AGI to be in too remote a stage to present such a risk. [21][22]

Terminology
AGI is also known as strong AI,[23][24] full AI,[25] human-level AI,[26] human-level intelligent AI, or general intelligent
action.[27]

Some academic sources reserve the term "strong AI" for computer programs that will
experience sentience or consciousness.[a] In contrast, weak AI (or narrow AI) is able to solve one specific problem
but lacks general cognitive abilities.[28][24] Some academic sources use "weak AI" to refer more broadly to any
programs that neither experience consciousness nor have a mind in the same sense as humans. [a]

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a
hypothetical type of AGI that is much more generally intelligent than humans, [29] while the notion of transformative
AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution. [30]

A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google
DeepMind researchers.[31] They define five performance levels of AGI: emerging, competent, expert, virtuoso, and
superhuman.[31] For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide
range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with
a threshold of 100%.[31] They consider large language models like ChatGPT or LLaMA 2 to be instances of
emerging AGI (comparable to unskilled humans).[31] Regarding the autonomy of AGI and associated risks, they
define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous). [32]
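
As an illustrative sketch of this framework (only the "competent" and "superhuman" thresholds are quoted in the text above; the level names are as listed, and the helper function below is purely hypothetical):

# Sketch of the DeepMind performance/autonomy levels described above.
PERFORMANCE_LEVELS = ["emerging", "competent", "expert", "virtuoso", "superhuman"]
AUTONOMY_LEVELS = ["tool", "consultant", "collaborator", "expert", "agent"]

def classify_performance(pct_of_skilled_adults_outperformed: float) -> str:
    # Only the "competent" (50%) and "superhuman" (100%) cut-offs are quoted in the text;
    # intermediate thresholds are not reproduced here.
    if pct_of_skilled_adults_outperformed >= 100:
        return "superhuman"
    if pct_of_skilled_adults_outperformed >= 50:
        return "competent (or higher)"
    return "emerging"

print(classify_performance(55))  # -> competent (or higher)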

Characteristics
Main article: Artificial intelligence
Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test.
However, there are other well-known definitions, and some researchers disagree with the more popular
approaches.[b]

Intelligence traits
Researchers generally hold that a system is required to do all of the following to be regarded as an AGI: [34]

 reason, use strategy, solve puzzles, and make judgments under uncertainty
 represent knowledge, including common sense knowledge
 plan
 learn
 communicate in natural language
 if necessary, integrate these skills in completion of any given goal
Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making)
consider additional traits such as imagination (the ability to form novel mental images and concepts)[35] and autonomy.[36]

Computer-based systems that exhibit many of these capabilities exist (e.g. see computational
creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent).
There is debate about whether modern AI systems possess them to an adequate degree. [37]

Physical traits
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its
expression. These include:[38]

 the ability to sense (e.g. see, hear, etc.), and
 the ability to act (e.g. move and manipulate objects, change location to explore, etc.)
This includes the ability to detect and respond to hazards.[39]

Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change
location to explore, etc.) can be desirable for some intelligent systems, [38] these physical capabilities are not strictly
required for an entity to qualify as AGI—particularly under the thesis that large language models (LLMs) may
already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an
AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process
input (language) from the external world in place of human senses. This interpretation aligns with the
understanding that AGI has never been prescribed a particular physical embodiment and thus does not demand a
capacity for locomotion or traditional "eyes and ears".[39] It can be regarded as sufficient for an intelligent computer
to interact with other systems, to invoke or regulate them, to achieve specific goals, including altering a physical
environment, as the fictional HAL 9000 in the motion picture 2001: A Space Odyssey was both programmed and
tasked to.[40]

Tests for human-level AGI


Several tests meant to confirm human-level AGI have been considered, including: [41][42]

The Turing Test (Turing)
The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior and may incentivize artificial stupidity.[43]
Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence", this test involves a
human judge engaging in natural language conversations with both a human and a machine designed to
generate human-like responses. The machine passes the test if it can convince the judge it is human a
significant fraction of the time. Turing proposed this as a practical measure of machine intelligence,
focusing on the ability to produce human-like responses rather than on the internal workings of the
machine.[44]
Turing described the test as follows:
The idea of the test is that the machine has to try and pretend to be a man, by answering questions put
to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who
should not be expert about machines, must be taken in by the pretence. [45]

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy,
reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this
claim was met with significant skepticism from the AI research community, who questioned the test's
implementation and its relevance to AGI.[46][47]
In 2023, it was claimed that "AI is closer than ever" to passing the Turing test, though the article's authors
stressed that imitation (which the large language models edging ever closer to passing the test are built upon) is not
synonymous with "intelligence". Further, as AI intelligence and human intelligence may differ, "passing
the Turing test is good evidence a system is intelligent, failing it is not good evidence a system is not
intelligent."[48]
A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized,
controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind
actual humans (67%).[49]
A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen
showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations—surpassing
the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed
the test.[50][51]
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and
obtaining a degree. LLMs can now pass university degree-level exams without even attending the
classes.[52]
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. AIs are
now replacing humans in many roles as varied as fast food and marketing. [53]
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack
product, then controls a robot to assemble the furniture correctly. [54]
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the
coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper
buttons.[55] Robots developed by Figure AI and other robotics companies can perform tasks like this.
The Modern Turing Test (Suleyman)
An AI model is given $100,000 and has to obtain $1 million.[56][57]
AI-complete problems
Main article: AI-complete
A problem is informally called "AI-complete" or "AI-hard" if it
is believed that in order to solve it, one would need to
implement AGI, because the solution is beyond the
capabilities of a purpose-specific algorithm.[58]

There are many problems that have been conjectured to require general intelligence to solve as well as humans.
Examples include computer vision, natural language
understanding, and dealing with unexpected circumstances
while solving any real-world problem.[59] Even a specific task
like translation requires a machine to read and write in both
languages, follow the author's argument (reason),
understand the context (knowledge), and faithfully reproduce
the author's original intent (social intelligence). All of these
problems need to be solved simultaneously in order to reach
human-level machine performance.

However, many of these tasks can now be performed by modern large language models. According to Stanford
University's 2024 AI index, AI has reached human-level
performance on many benchmarks for reading
comprehension and visual reasoning.[60]

History
Classical AI
Main articles: History of artificial intelligence and Symbolic
artificial intelligence
Modern AI research began in the mid-1950s.[61] The first
generation of AI researchers were convinced that artificial
general intelligence was possible and that it would exist in
just a few decades.[62] AI pioneer Herbert A. Simon wrote in
1965: "machines will be capable, within twenty years, of
doing any work a man can do."[63]

Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's fictional character HAL 9000,
who embodied what AI researchers believed they could
create by the year 2001. AI pioneer Marvin Minsky was a
consultant[64] on the project of making HAL 9000 as realistic
as possible according to the consensus predictions of the
time. He said in 1967, "Within a generation... the problem of
creating 'artificial intelligence' will substantially be solved".[65]

Several classical AI projects, such as Doug Lenat's Cyc project (that began in 1984), and Allen
Newell's Soar project, were directed at AGI.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the
project. Funding agencies became skeptical of AGI and put
researchers under increasing pressure to produce useful
"applied AI".[c] In the early 1980s, Japan's Fifth Generation
Computer Project revived interest in AGI, setting out a ten-
year timeline that included AGI goals like "carry on a casual
conversation".[69] In response to this and the success
of expert systems, both industry and government pumped
money into the field.[67][70] However, confidence in AI
spectacularly collapsed in the late 1980s, and the goals of
the Fifth Generation Computer Project were never fulfilled.[71]
For the second time in 20 years, AI researchers who
predicted the imminent achievement of AGI had been
mistaken. By the 1990s, AI researchers had a reputation for
making vain promises. They became reluctant to make
predictions at all[d] and avoided mention of "human level"
artificial intelligence for fear of being labeled "wild-eyed
dreamer[s]".[73]

Narrow AI research
Main article: Artificial intelligence
In the 1990s and early 21st century, mainstream AI achieved
commercial success and academic respectability by
focusing on specific sub-problems where AI can produce
verifiable results and commercial applications, such
as speech recognition and recommendation algorithms.[74]
These "applied AI" systems are now used extensively
throughout the technology industry, and research in this vein
is heavily funded in both academia and industry. As of 2018,
development in this field was considered an emerging trend,
and a mature stage was expected to be reached in more
than 10 years.[75]

At the turn of the century, many mainstream AI researchers[76] hoped that strong AI could be developed by
combining programs that solve various sub-problems. Hans
Moravec wrote in 1988:
I am confident that this bottom-up route to artificial
intelligence will one day meet the traditional top-down route
more than half way, ready to provide the real-world
competence and the commonsense knowledge that has
been so frustratingly elusive in reasoning programs. Fully
intelligent machines will result when the metaphorical golden
spike is driven uniting the two efforts.[76]

However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded
his 1990 paper on the symbol grounding hypothesis by
stating:

The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow
meet "bottom-up" (sensory) approaches somewhere in
between. If the grounding considerations in this paper are
valid, then this expectation is hopelessly modular and there
is really only one viable route from sense to symbols: from
the ground up. A free-floating symbolic level like the
software level of a computer will never be reached by this
route (or vice versa) – nor is it clear why we should even try
to reach such a level, since it looks as if getting there would
just amount to uprooting our symbols from their intrinsic
meanings (thereby merely reducing ourselves to the
functional equivalent of a programmable computer).[77]

Modern artificial general intelligence research
The term "artificial general intelligence" was used as early as
1997, by Mark Gubrud[78] in a discussion of the implications
of fully automated military production and operations. A
mathematical formalism of AGI was proposed by Marcus
Hutter in 2000. Named AIXI, the proposed AGI agent
maximises "the ability to satisfy goals in a wide range of
environments".[79] This type of AGI, characterized by the
ability to maximise a mathematical definition of intelligence
rather than exhibit human-like behaviour,[80] was also called
universal artificial intelligence.[81]
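
A hedged sketch of the underlying idea (the formula below is the Legg and Hutter universal intelligence measure usually associated with this line of work, stated here for illustration rather than quoted from the cited source): an agent's intelligence is its expected performance across all computable environments, weighted towards simpler environments.

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

Here E is the set of computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments receive more weight), and V_{\mu}^{\pi} is the expected cumulative reward that policy \pi obtains in \mu. AIXI is, informally, the agent that maximises expected reward under such a simplicity-weighted mixture of environments.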

The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002.[82] AGI research activity
in 2006 was described by Pei Wang and Ben Goertzel[83] as
"producing publications and preliminary results". The first
summer school on AGI was organized in Xiamen, China in
2009[84] by the Xiamen university's Artificial Brain Laboratory
and OpenCog. The first university course was given in
2010[85] and 2011[86] at Plovdiv University, Bulgaria by Todor
Arnaudov. The Massachusetts Institute of Technology (MIT)
presented a course on AGI in 2018, organized by Lex
Fridman and featuring a number of guest lecturers.

As of 2023, a small number of computer scientists are active in AGI research, and many contribute to a series of AGI
conferences. However, increasingly more researchers are
interested in open-ended learning,[87][3] which is the idea of
allowing AI to continuously learn and innovate like humans
do.

Feasibility
Surveys about when experts expect artificial general intelligence[26]

As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI
community. While traditional consensus held that AGI was a
distant goal, recent advancements have led some
researchers and industry figures to claim that early forms of
AGI may already exist.[88] AI pioneer Herbert A.
Simon speculated in 1965 that "machines will be capable,
within twenty years, of doing any work a man can do". This
prediction failed to come true. Microsoft co-founder Paul
Allen believed that such intelligence is unlikely in the 21st
century because it would require "unforeseeable and
fundamentally unpredictable breakthroughs" and a
"scientifically deep understanding of cognition".[89] Writing
in The Guardian, roboticist Alan Winfield claimed in 2014
that the gulf between modern computing and human-level
artificial intelligence is as wide as the gulf between current
space flight and practical faster-than-light spaceflight.[90]

A further challenge is the lack of clarity in defining what intelligence entails. Does it require consciousness?
Must it display the ability to set goals as well as pursue
them? Is it purely a matter of scale such that if model sizes
increase sufficiently, intelligence will emerge? Are faculties
such as planning, reasoning, and causal understanding
required? Does intelligence require explicitly replicating the
brain and its specific faculties? Does it require emotions?[91]

Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger
Penrose, deny the possibility of achieving strong AI.[92][93]
John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of
progress is such that a date cannot accurately be predicted.[94]
AI experts' views on the feasibility of AGI wax and wane.
Four polls conducted in 2012 and 2013 suggested that the
median estimate among experts for when they would be
50% confident AGI would arrive was 2040 to 2050,
depending on the poll, with the mean being 2081. Of the
experts, 16.5% answered with "never" when asked the same
question but with a 90% confidence instead.[95][96] Further
current AGI progress considerations can be found
above under Tests for human-level AGI.

A report by Stuart Armstrong and Kaj Sotala of the Machine Intelligence Research Institute found that "over [a] 60-year
time frame there is a strong bias towards predicting the
arrival of human-level AI as between 15 and 25 years from
the time the prediction was made". They analyzed 95
predictions made between 1950 and 2012 on when human-
level AI will come about.[97]

In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth
and depth of GPT-4’s capabilities, we believe that it could
reasonably be viewed as an early (yet still incomplete)
version of an artificial general intelligence (AGI)
system."[98] Another study in 2023 reported that GPT-4
outperforms 99% of humans on the Torrance tests of
creative thinking.[99][100]

In 2023, Blaise Agüera y Arcas and Peter Norvig wrote the article "Artificial General Intelligence Is Already Here",
arguing that frontier models had already achieved a
significant level of general intelligence. They wrote that
reluctance to this view comes from four main reasons: a
"healthy skepticism about metrics for AGI", an "ideological
commitment to alternative AI theories or techniques", a
"devotion to human (or biological) exceptionalism", or a
"concern about the economic implications of AGI".[101]

2023 also marked the emergence of large multimodal models (large language models capable of processing or
generating multiple modalities such as text, audio, and
images).[102] As of 2025, large language models (LLMs) have
been adapted to generate both music and images.
Voice-synthesis systems built on transformer LLMs—such
as Suno AI’s Bark model—can sing, and several
music-generation platforms (e.g. Suno and Udio) build their
services on modified LLM backbones.[103][104]

The same year, OpenAI released GPT-4o image generation, integrating native image synthesis directly
into ChatGPT rather than relying on a
separate diffusion-based art model, as with DALL-E.[105]

LLM-style foundation models are likewise being repurposed for robotics. Nvidia’s open-source Isaac GR00T N1 and
Google DeepMind’s Robotic Transformer 2 (RT-2) are first
trained with language-model objectives and then fine-tuned
to handle vision-language-action control for embodied
robots.[106][107][108]

In 2024, OpenAI released o1-preview, the first of a series of models that "spend more time thinking before they respond".
According to Mira Murati, this ability to think before
responding represents a new, additional paradigm. It
improves model outputs by spending more computing power
when generating the answer, whereas the model scaling
paradigm improves outputs by increasing the model size,
training data and training compute power.[109][110]

An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we
have already achieved AGI and it's even more clear
with O1." Kazemi clarified that while the AI is not yet "better
than any human at any task", it is "better than most humans
at most tasks." He also addressed criticisms that large
language models (LLMs) merely follow predefined patterns,
comparing their learning process to the scientific method of
observing, hypothesizing, and verifying. These statements
have sparked debate, as they rely on a broad and
unconventional definition of AGI—traditionally understood as
AI that matches human intelligence across all domains.
Critics argue that, while OpenAI's models demonstrate
remarkable versatility, they may not fully meet this standard.
Notably, Kazemi's comments came shortly after OpenAI
removed "AGI" from the terms of its partnership with
Microsoft, prompting speculation about the company's
strategic intentions.[111]

Timescales

AI has surpassed humans on a variety of language understanding and visual understanding benchmarks.[112] As of
2023, foundation models still lack advanced reasoning and planning capabilities, but rapid progress is expected.[113]

Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods
when progress appeared to stop.[92] Ending each hiatus were
fundamental advances in hardware, software or both to
create space for further progress.[92][114][115] For example, the
computer hardware available in the twentieth century was
not sufficient to implement deep learning, which requires
large numbers of GPU-enabled CPUs.[116]

In the introduction to his 2006 book,[117] Goertzel says that estimates of the time needed before a truly flexible AGI is
built vary from 10 years to over a century. As of 2007, the
consensus in the AGI research community seemed to be
that the timeline discussed by Ray Kurzweil in 2005 in The
Singularity is Near[118] (i.e. between 2015 and 2045) was
plausible.[119] Mainstream AI researchers have given a wide
range of opinions on whether progress will be this rapid. A
2012 meta-analysis of 95 such opinions found a bias
towards predicting that the onset of AGI would occur within
16–26 years for modern and historical predictions alike. That
paper has been criticized for how it categorized opinions as
expert or non-expert.[120]

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which
won the ImageNet competition with a top-5 test error rate of
15.3%, significantly better than the second-best entry's rate
of 26.3% (the traditional approach used a weighted sum of
scores from different pre-defined classifiers).[121] AlexNet was
regarded as the initial ground-breaker of the current deep
learning wave.[121]

In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely
accessible weak AI such as Google AI, Apple's Siri, and
others. At the maximum, these AIs reached an IQ value of
about 47, which corresponds approximately to a six-year-old
child in first grade. An adult comes to about 100 on average.
Similar tests were carried out in 2014, with the IQ score
reaching a maximum value of 27.[122][123]

In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific
training. According to Gary Grossman in
a VentureBeat article, while there is consensus that GPT-3
is not an example of AGI, it is considered by some to be too
advanced to be classified as a narrow AI system.[124]

In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing
platform called "Project December". OpenAI asked for
changes to the chatbot to comply with their safety
guidelines; Rohrer disconnected Project December from the
GPT-3 API.[125]

In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.[126]

In 2023, Microsoft Research published a study on an early version of OpenAI's GPT-4, contending that it exhibited
more general intelligence than previous AI models and
demonstrated human-level performance in tasks spanning
multiple domains, such as mathematics, coding, and law.
This research sparked a debate on whether GPT-4 could be
considered an early, incomplete version of artificial general
intelligence, emphasizing the need for further exploration
and evaluation of such systems.[3]

In 2023, AI researcher Geoffrey Hinton stated that:[127]

The idea that this stuff could actually get smarter than
people – a few people believed that, [...]. But most people
thought it was way off. And I thought it was way off. I thought
it was 30 to 50 years or even longer away. Obviously, I no
longer think that.

He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and
stressed the attendant existential risks.[128]

In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible",
and that he sees no reason why it would slow, expecting
AGI within a decade or even a few years.[129] In March
2024, Nvidia's Chief Executive Officer (CEO), Jensen
Huang, stated his expectation that within five years, AI would
be capable of passing any test at least as well as humans.[130]
In June 2024, the AI researcher Leopold Aschenbrenner,
a former OpenAI employee, estimated AGI by 2027 to be
"strikingly plausible".[131]

Whole brain emulation
Main articles: Whole brain emulation and Brain simulation
While the development of transformer models like in ChatGPT is considered the most promising path to AGI,[132][133]
whole brain emulation can serve as an alternative
approach. With whole brain simulation, a brain model is built
by scanning and mapping a biological brain in detail, and
then copying and simulating it on a computer system or
another computational device. The simulation model must
be sufficiently faithful to the original, so that it behaves in
practically the same way as the original brain.[134] Whole brain
emulation is a type of brain simulation that is discussed
in computational neuroscience and neuroinformatics, and for
medical research purposes. It has been discussed
in artificial intelligence research[119] as an approach to strong
AI. Neuroimaging technologies that could deliver the
necessary detailed understanding are improving rapidly,
and futurist Ray Kurzweil in the book The Singularity Is
Near[118] predicts that a map of sufficient quality will become
available on a similar timescale to the computing power
required to emulate it.

Early estimates

Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil,
Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the
logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.2 years.
Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is
less certain about where consciousness arises.[135]

For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous
quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000
synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1
quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14
to 5×10^14 synapses (100 to 500 trillion).[136] An estimate of the brain's processing power, based on a simple switch
model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).[137]

In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a
figure of 10^16 computations per second.[e] (For comparison, if a "computation" was equivalent to one "floating-point
operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10
petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict the necessary
hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the
time of writing continued.
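
As a rough cross-check of the figures above (a sketch using only the approximate counts quoted in the text; all values are order-of-magnitude estimates, not measurements):

# Order-of-magnitude arithmetic behind the brain-emulation estimates above.
neurons = 1e11                 # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000    # average synaptic connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"Estimated synapses: {total_synapses:.0e}")  # ~7e14, the same order as the 10^14 to 5x10^14 adult range

kurzweil_cps = 1e16            # computations per second (Kurzweil, 1997)
print(f"Equivalent to ~{kurzweil_cps / 1e15:.0f} petaFLOPS")  # ~10 PFLOPS, reached by supercomputers in 2011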

Current research
The Human Brain Project, an EU-funded initiative active
from 2013 to 2023, developed a particularly detailed and
publicly accessible atlas of the human brain.[140] In 2023,
researchers from Duke University performed a high-
resolution scan of a mouse brain.

Criticisms of simulation-based
approaches
The artificial neuron model assumed by Kurzweil and used
in many current artificial neural network implementations is
simple compared with biological neurons. A brain simulation
would likely have to capture the detailed cellular behaviour
of biological neurons, presently understood only in broad
outline. The overhead introduced by full modeling of the
biological, chemical, and physical details of neural behaviour
(especially on a molecular scale) would require
computational powers several orders of magnitude larger
than Kurzweil's estimate. In addition, the estimates do not
account for glial cells, which are known to play a role in
cognitive processes.[141]

A fundamental criticism of the simulated brain approach derives from embodied cognition theory which asserts that
human embodiment is an essential aspect of human
intelligence and is necessary to ground meaning.[142][143] If this
theory is correct, any fully functional brain model will need to
encompass more than just the neurons (e.g., a robotic
body). Goertzel[119] proposes virtual embodiment (like
in metaverses like Second Life) as an option, but it is
unknown whether this would be sufficient.

Philosophical perspective
See also: Philosophy of artificial intelligence and Turing test

"Strong AI" as defined in philosophy
In 1980, philosopher John Searle coined the term "strong AI"
as part of his Chinese room argument.[144] He proposed a
distinction between two hypotheses about artificial
intelligence:[f]

 Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".
 Weak AI hypothesis: An artificial intelligence system
can (only) act like it thinks and has a mind and
consciousness.
The first one he called "strong" because it makes
a stronger statement: it assumes something special has
happened to the machine that goes beyond those abilities
that we can test. The behaviour of a "weak AI" machine
would be identical to a "strong AI" machine, but the latter
would also have subjective conscious experience. This
usage is also common in academic AI research and
textbooks.[145]

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human
level artificial general intelligence".[118] This is not the same as
Searle's strong AI, unless it is assumed
that consciousness is necessary for human-level AGI.
Academic philosophers such as Searle do not believe that is
the case, and to most artificial intelligence researchers the
question is out-of-scope.[146]

Mainstream AI is most interested in how a program behaves.[147]
According to Russell and Norvig, "as long as the program
works, they don't care if you call it real or a simulation."[146] If
the program can behave as if it has a mind, then there is no
need to know if it actually has a mind – indeed, there would be
no way to tell. For AI research, Searle's "weak AI
hypothesis" is equivalent to the statement "artificial general
intelligence is possible". Thus, according to Russell and
Norvig, "most AI researchers take the weak AI hypothesis for
granted, and don't care about the strong AI
hypothesis."[146] Thus, for academic AI research, "Strong AI"
and "AGI" are two different things.

Consciousness
Main article: Artificial consciousness
Consciousness can have various meanings, and some
aspects play significant roles in science fiction and the ethics
of artificial intelligence:

 Sentience (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as
opposed to the ability to reason about perceptions.
Some philosophers, such as David Chalmers, use the
term "consciousness" to refer exclusively to
phenomenal consciousness, which is roughly
equivalent to sentience.[148] Determining why and how
subjective experience arises is known as the hard
problem of consciousness.[149] Thomas Nagel explained
in 1974 that it "feels like" something to be conscious. If
we are not conscious, then it doesn't feel like anything.
Nagel uses the example of a bat: we can sensibly ask
"what does it feel like to be a bat?" However, we are
unlikely to ask "what does it feel like to be a toaster?"
Nagel concludes that a bat appears to be conscious
(i.e., has consciousness) but a toaster does not.[150] In
2022, a Google engineer claimed that the company's AI
chatbot, LaMDA, had achieved sentience, though this
claim was widely disputed by other experts.[151]
 Self-awareness: To have conscious awareness of
oneself as a separate individual, especially to be
consciously aware of one's own thoughts. This is
opposed to simply being the "subject of one's
thought"—an operating system or debugger is able to
be "aware of itself" (that is, to represent itself in the
same way it represents everything else)—but this is not
what people typically mean when they use the term
"self-awareness".[g] In some advanced AI models,
systems construct internal representations of their own
cognitive processes and feedback patterns—
occasionally referring to themselves using second-
person constructs such as ‘you’ within self-modeling
frameworks.[citation needed]
These traits have a moral dimension. AI sentience would
give rise to concerns of welfare and legal protection,
similarly to animals.[152] Other aspects of consciousness
related to cognitive capabilities are also relevant to the
concept of AI rights.[153] Figuring out how to integrate
advanced AI with existing legal and social frameworks is an
emergent issue.[154]

Benefits
AGI could improve productivity and efficiency in most jobs.
For example, in public health, AGI could accelerate medical
research, notably against cancer.[155] It could take care of the
elderly,[156] and democratize access to rapid, high-quality
medical diagnostics. It could offer fun, inexpensive and
personalized education.[156] The need to work to subsist
could become obsolete if the wealth produced is
properly redistributed.[156][157] This also raises the question of
the place of humans in a radically automated society.
AGI could also help to make rational decisions, and to
anticipate and prevent disasters. It could also help to reap
the benefits of potentially catastrophic technologies such
as nanotechnology or climate engineering, while avoiding
the associated risks.[158] If an AGI's primary goal is to prevent
existential catastrophes such as human extinction (which
could be difficult if the Vulnerable World Hypothesis turns
out to be true),[159] it could take measures to drastically
reduce the risks[158] while minimizing the impact of these
measures on our quality of life.

Advancements in medicine and healthcare
AGI would improve healthcare by making medical
diagnostics faster, less expensive, and more accurate. AI-
driven systems can analyse patient data and detect
diseases at an early stage.[160] This means patients will get
diagnosed quicker and be able to seek medical attention
before their medical condition gets worse. AGI systems
could also recommend personalised treatment plans based
on genetics and medical history.[161]

Additionally, AGI could accelerate drug discovery by simulating molecular interactions, reducing the time it takes
to develop new medicines for conditions like cancer and
Alzheimer's disease.[162] In hospitals, AGI-powered robotic
assistants could assist in surgeries, monitor patients, and
provide real-time medical support. It could also be used in
elderly care, helping aging populations maintain
independence through AI-powered caregivers and health-
monitoring systems.

By evaluating large datasets, AGI can assist in developing personalised treatment plans tailored to individual patient
needs. This approach ensures that therapies are optimised
based on a patient's unique medical history and genetic
profile, improving outcomes and reducing adverse effects.[163]

Advancements in science and technology
AGI can become a tool for scientific research and
innovation. In fields such as physics and mathematics, AGI
could help solve complex problems that require massive
computational power, such as modeling quantum systems,
understanding dark matter, or proving mathematical
theorems.[164] Problems that have remained unsolved for
decades may be solved with AGI.

AGI could also drive technological breakthroughs that could reshape society. It can do this by optimising engineering
designs, discovering new materials, and improving
automation. For example, AI is already playing a role in
developing more efficient renewable energy sources and
optimising supply chains in manufacturing.[165] Future AGI
systems could push these innovations further.

Enhancing education and productivity
AGI can personalize education by creating learning
programs that are specific to each student's strengths,
weaknesses, and interests. Unlike traditional teaching
methods, AI-driven tutoring systems could adapt lessons in
real-time, ensuring students understand difficult concepts
before moving on.[166]

In the workplace, AGI could automate repetitive tasks, freeing workers for more creative and strategic roles.[165] It
could also improve efficiency across industries by optimising
logistics, enhancing cybersecurity, and streamlining
business operations. If properly managed, the wealth
generated by AGI-driven automation could reduce the need
for people to work for a living. Working may become
optional.[167]

Mitigating global crises
AGI could play a crucial role in preventing and managing
global threats. It could help governments and organizations
predict and respond to natural disasters more effectively,
using real-time data analysis to forecast hurricanes,
earthquakes, and pandemics.[168] By analyzing vast datasets
from satellites, sensors, and historical records, AGI could
improve early warning systems, enabling faster disaster
response and minimising casualties.

In climate science, AGI could develop new models for reducing carbon emissions, optimising energy resources,
and mitigating climate change effects. It could also enhance
weather prediction accuracy, allowing policymakers to
implement more effective environmental regulations.
Additionally, AGI could help regulate emerging technologies
that carry significant risks, such as nanotechnology and
bioengineering, by analysing complex systems and
predicting unintended consequences.[164] Furthermore, AGI
could assist in cybersecurity by detecting and mitigating
large-scale cyber threats, protecting critical infrastructure,
and preventing digital warfare.

Revitalising environmental
conservation and biodiversity
AGI could significantly contribute to preserving the natural
environment and protecting endangered species. By
analyzing satellite imagery, climate data, and wildlife
patterns, AGI systems could identify environmental threats
earlier and recommend targeted conservation strategies.[169]
AGI could help optimize land use, monitor illegal activities
like poaching or deforestation in real-time, and support
global efforts to restore ecosystems. Advanced predictive
models developed by AGI could also assist in reversing
biodiversity loss, ensuring the survival of critical species and
maintaining ecological balance.[170]

Enhancing space exploration and colonization
AGI could revolutionize humanity’s ability to explore and
settle beyond Earth. With its advanced problem-solving
skills, AGI could autonomously manage complex space
missions, including navigation, resource management, and
emergency response. It could accelerate the design of life
support systems, habitats, and spacecraft optimized for
extraterrestrial environments. Furthermore, AGI could
support efforts to colonize planets like Mars by simulating
survival scenarios and helping humans adapt to new worlds,
expanding the possibilities for interplanetary civilization.[171]

Risks
Existential risks
Main articles: Existential risk from artificial general
intelligence and AI safety
AGI may represent multiple types of existential risk, which
are risks that threaten "the premature extinction of Earth-
originating intelligent life or the permanent and drastic
destruction of its potential for desirable future development".[172]
The risk of human extinction from AGI has been the topic
of many debates, but there is also the possibility that the
development of AGI would lead to a permanently flawed
future. Notably, it could be used to spread and preserve the
set of values of whoever develops it. If humanity still has
moral blind spots similar to slavery in the past, AGI might
irreversibly entrench them, preventing moral progress.[173]
Furthermore, AGI could facilitate mass surveillance and
indoctrination, which could be used to create an entrenched
repressive worldwide totalitarian regime.[174][175] There is also a
risk for the machines themselves. If machines that are
sentient or otherwise worthy of moral consideration are
mass created in the future, engaging in a civilizational path
that indefinitely neglects their welfare and interests could be
an existential catastrophe.[176][177] Considering how much AGI
could improve humanity's future and help reduce other
existential risks, Toby Ord calls these existential risks "an
argument for proceeding with due caution", not for
"abandoning AI".[174]

Risk of loss of control and human extinction
The thesis that AI poses an existential risk for humans, and
that this risk needs more attention, is controversial but has
been endorsed in 2023 by many public figures, AI
researchers and CEOs of AI companies such as Elon
Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis
Hassabis and Sam Altman.[178][179]

In 2014, Stephen Hawking criticized widespread indifference:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure
the best outcome, right? Wrong. If a superior alien
civilisation sent us a message saying, 'We'll arrive in a few
decades,' would we just reply, 'OK, call us when you get
here—we'll leave the lights on?' Probably not—but this is
more or less what is happening with AI.[180]

The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human
activities. The comparison states that greater intelligence
allowed humanity to dominate gorillas, which are now
vulnerable in ways that they could not have anticipated. As a
result, the gorilla has become an endangered species, not
out of malice, but simply as collateral damage from human
activities.[181]

The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful
not to anthropomorphize them and interpret their intents as
we would for humans. He said that people won't be "smart
enough to design super-intelligent machines, yet ridiculously
stupid to the point of giving it moronic objectives with no
safeguards".[182] On the other side, the concept
of instrumental convergence suggests that almost whatever
their goals, intelligent agents will have reasons to try to
survive and acquire more power as intermediary steps to
achieving these goals, and that this does not require having
emotions.[183]

Many scholars who are concerned about existential risk advocate for more research into solving the "control
problem" to answer the question: what types of safeguards,
algorithms, or architectures can programmers implement to
maximise the probability that their recursively-improving AI
would continue to behave in a friendly, rather than
destructive, manner after it reaches superintelligence?[184][185]
Solving the control problem is complicated by the AI arms
race (which could lead to a race to the bottom of safety
precautions in order to release products before competitors),[186]
and the use of AI in weapon systems.[187]

The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the
short-term, or that concerns about AGI distract from other
issues related to current AI.[188] Former Google fraud
czar Shuman Ghosemajumder considers that for many
people outside of the technology industry, existing chatbots
and LLMs are already perceived as though they were AGI,
leading to further misunderstanding and fear.[189]
Skeptics sometimes charge that the thesis is crypto-
religious, with an irrational belief in the possibility of
superintelligence replacing an irrational belief in an
omnipotent God.[190] Some researchers believe that the
communication campaigns on AI existential risk by certain AI
groups (such as OpenAI, Anthropic, DeepMind, and
Conjecture) may be an attempt at regulatory capture and
to inflate interest in their products.[191][192]

In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and
researchers, issued a joint statement asserting that
"Mitigating the risk of extinction from AI should be a global
priority alongside other societal-scale risks such as
pandemics and nuclear war."[179]

Mass unemployment
Further information: Technological unemployment
Researchers from OpenAI estimated that "80% of the U.S.
workforce could have at least 10% of their work tasks
affected by the introduction of LLMs, while around 19% of
workers may see at least 50% of their tasks impacted".[193][194]
They consider office workers to be the most exposed, for
example mathematicians, accountants or web designers.[194]
AGI could have greater autonomy and the ability to make
decisions, to interface with other computer tools, and to
control robotized bodies.

Critics argue that AGI will complement rather than replace humans, and that automation displaces work in the short
term but not in the long term.[195][196][197]

According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be
redistributed:[157]

Everyone can enjoy a life of luxurious leisure if the machine-
produced wealth is shared, or most people can end up
miserably poor if the machine-owners successfully lobby
against wealth redistribution. So far, the trend seems to be
toward the second option, with technology driving ever-
increasing inequality

Elon Musk argued in 2021 that the automation of society will require governments to adopt a universal basic
income (UBI).[198] Hinton similarly advised the UK government
in 2025 to adopt a UBI as a response to AI-induced
unemployment.[199] In 2023, Hinton said "I’m a socialist [...] I
think that private ownership of the media, and of the ‘means
of computation’, is not good."[200]

See also
 Artificial brain – Software and hardware with cognitive
abilities similar to those of the animal or human brain
 AI effect
 AI safety – Research area on making AI safe and
beneficial
 AI alignment – AI conformance to the intended
objective
 A.I. Rising – 2018 film directed by Lazar Bodroža
 Artificial intelligence
 Automated machine learning – Process of automating
the application of machine learning
 BRAIN Initiative – Public-private research initiative
 China Brain Project – Chinese neuroscience program
 Future of Humanity Institute – Defunct Oxford
interdisciplinary research centre
 General game playing – Ability of artificial intelligence to
play different games
 Generative artificial intelligence – Subset of AI using
generative models
 Human Brain Project – Scientific research project
 Intelligence amplification – Use of information
technology to augment human intelligence (IA)
 Machine ethics – Moral behaviours of man-made
machines
 Universal psychometrics
 Moravec's paradox
 Multi-task learning – Solving multiple machine learning
tasks at the same time
 Neural scaling law – Statistical law in machine learning
 Outline of artificial intelligence – Overview of and topical
guide to artificial intelligence
 Transhumanism – Philosophical movement
 Synthetic intelligence – Alternate term for or form of
artificial intelligence
 Transfer learning – Machine learning technique
 Loebner Prize – Annual AI competition
 Lurking – Non-participating online observer
 Hardware for artificial intelligence – Hardware specially
designed and optimized for artificial intelligence
 Weak artificial intelligence – Form of artificial
intelligence

Notes
1. See below for the origin of the term "strong AI", and
see the academic definition of "strong AI" and weak AI
in the article Chinese room.
2. AI founder John McCarthy writes: "we cannot yet
characterize in general what kinds of computational
procedures we want to call intelligent."[33] (For a
discussion of some definitions of intelligence used
by artificial intelligence researchers, see philosophy of
artificial intelligence.)
3. The Lighthill report specifically criticized AI's
"grandiose objectives" and led to the dismantling of AI
research in England.[66] In the U.S., DARPA became
determined to fund only "mission-oriented direct
research, rather than basic undirected research".[67][68]
4. As AI founder John McCarthy writes "it would be a
great relief to the rest of the workers in AI if the
inventors of new general formalisms would express
their hopes in a more guarded form than has
sometimes been the case."[72]
5. In "Mind Children"[138] 1015 cps is used. More recently,
in 1997,[139] Moravec argued for 108 MIPS which would
roughly correspond to 1014 cps. Moravec talks in terms
of MIPS, not "cps", which is a non-standard term
Kurzweil introduced.
6. As defined in a standard AI textbook: "The assertion
that machines could possibly act intelligently (or,
perhaps better, act as if they were intelligent) is called
the 'weak AI' hypothesis by philosophers, and the
assertion that machines that do so are actually
thinking (as opposed to simulating thinking) is called
the 'strong AI' hypothesis."[137]
7. Alan Turing made this point in 1950.[44]

References
1. Goertzel, Ben (2014). "Artificial General Intelligence:
Concept, State of the Art, and Future
Prospects". Journal of Artificial General
Intelligence. 5 (1): 1–48. Bibcode:2014JAGI....5....1G.
doi:10.2478/jagi-2014-0001.
2. Lake, Brenden; Ullman, Tom; Tenenbaum, Joshua;
Gershman, Samuel (2017). "Building machines that
learn and think like people". Behavioral and Brain
Sciences. 40: e253. arXiv:1604.00289.
doi:10.1017/S0140525X16001837. PMID 27881212.
3. Bubeck, Sébastien (2023). "Sparks of Artificial
General Intelligence: Early Experiments with
GPT-4". arXiv:2303.12712 [cs.CL].
4. Bostrom, Nick (2014). Superintelligence: Paths,
Dangers, Strategies. Oxford University Press.
5. Legg, Shane (2023). Why AGI Might Not Need
Agency. Proceedings of the Conference on Artificial
General Intelligence.
6. "OpenAI Charter". OpenAI. Retrieved 6
April 2023. Our mission is to ensure that artificial
general intelligence benefits all of humanity.
7. Grant, Nico (27 February 2025). "Google's Sergey
Brin Asks Workers to Spend More Time In the
Office". The New York Times. ISSN 0362-4331.
Retrieved 1 March 2025.
8. Newsham, Jack. "Tesla said xAI stands for
"eXploratory Artificial Intelligence." It's not clear where
it got that". Business Insider. Retrieved 20
September 2025.
9. Heath, Alex (18 January 2024). "Mark Zuckerberg's
new goal is creating artificial general
intelligence". The Verge. Retrieved 13 June 2024. Our
vision is to build AI that is better than human-level at
all of the human senses.
10. Baum, Seth D. (2020). A Survey of Artificial General
Intelligence Projects for Ethics, Risk, and
Policy (PDF) (Report). Global Catastrophic Risk
Institute. Retrieved 28 November 2024. 72 AGI R&D
projects were identified as being active in 2020.
11. "Shrinking AGI timelines: a review of expert
forecasts". 80,000 Hours. 21 March 2025.
Retrieved 18 April 2025.
12. "How the U.S. Public and AI Experts View Artificial
Intelligence". Pew Research Center. 3 April 2025.
Retrieved 18 April 2025.
13. "AI timelines: What do experts in artificial intelligence
expect for the future?". Our World in Data. 7 February
2023. Retrieved 18 April 2025.
14. Butler, Octavia E. (1993). Parable of the Sower.
Grand Central Publishing. ISBN 978-0-4466-7550-
5. All that you touch you change. All that you change
changes you.
15. Vinge, Vernor (1992). A Fire Upon the Deep. Tor
Books. ISBN 978-0-8125-1528-2. The Singularity is
coming.
16. Morozov, Evgeny (30 June 2023). "The True Threat
of Artificial Intelligence". The New York Times. The
real threat is not AI itself but the way we deploy it.
17. "Impressed by artificial intelligence? Experts say AGI
is coming next, and it has 'existential' risks". ABC
News. 23 March 2023. Retrieved 6 April 2023. AGI
could pose existential risks to humanity.
18. Bostrom, Nick (2014). Superintelligence: Paths,
Dangers, Strategies. Oxford University
Press. ISBN 978-0-1996-7811-2. The first
superintelligence will be the last invention that
humanity needs to make.
19. Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of
Extinction', Industry Leaders Warn". The New York
Times. Mitigating the risk of extinction from AI should
be a global priority.
20. "Statement on AI Risk". Center for AI Safety.
Retrieved 1 March 2024. AI experts warn of risk of
extinction from AI.
21. Mitchell, Melanie (30 May 2023). "Are AI's Doomsday
Scenarios Worth Taking Seriously?". The New York
Times. We are far from creating machines that can
outthink us in general ways.
22. LeCun, Yann (June 2023). "AGI does not present an
existential risk". Medium. There is no reason to fear AI
as an existential threat.
23. Kurzweil 2005, p. 260.
24. Kurzweil, Ray (5 August 2005), "Long Live
AI", Forbes, archived from the original on 14 August
2005: Kurzweil describes strong AI as "machine
intelligence with the full range of human intelligence."
25. "The Age of Artificial Intelligence: George John at
TEDxLondonBusinessSchool 2013". Archived from
the original on 26 February 2014. Retrieved 22
February 2014.
26. Roser, Max (7 February 2023). "AI timelines: What
do experts in artificial intelligence expect for the
future?". Our World in Data. Retrieved 6 April 2023.
27. Newell & Simon 1976, This is the term they use for
"human-level" intelligence in the physical symbol
system hypothesis.
28. "The Open University on Strong and Weak AI".
Archived from the original on 25 September 2009.
Retrieved 8 October 2007.
29. "What is artificial superintelligence (ASI)? | Definition
from TechTarget". Enterprise AI. Retrieved 8
October 2023.
30. Roser, Max (15 December 2022). "Artificial
intelligence is transforming our world – it is on all of us
to make sure that it goes well". Our World in Data.
Retrieved 8 October 2023.
31. "Google DeepMind's Six Levels of
AGI". aibusiness.com. Retrieved 20 July 2025.
32. Dickson, Ben (16 November 2023). "Here is how far
we are to achieving AGI, according to
DeepMind". VentureBeat.
33. McCarthy, John (2007a). "Basic Questions". Stanford
University. Archived from the original on 26 October
2007. Retrieved 6 December 2007.
34. This list of intelligent traits is based on the topics
covered by major AI textbooks, including: Russell &
Norvig 2003, Luger & Stubblefield 2004, Poole,
Mackworth & Goebel 1998 and Nilsson 1998.
35. Johnson 1987
36. de Charms, R. (1968). Personal causation. New
York: Academic Press.
37. Van Eyghen, Hans (2025). "AI Algorithms as
(Un)virtuous Knowers". Discover Artificial
Intelligence. 5 (2) 2. doi:10.1007/s44163-024-00219-
z.
38. Pfeifer, R. and Bongard J. C., How the body shapes
the way we think: a new view of intelligence (The MIT
Press, 2007). ISBN 0-2621-6239-3
39. White, R. W. (1959). "Motivation reconsidered: The
concept of competence". Psychological
Review. 66 (5): 297–333. doi:10.1037/h0040934.
PMID 13844397. S2CID 37385966.
40. "HAL 9000". Robot Hall of Fame. Robot Hall of
Fame, Carnegie Science Center. Archived from the
original on 17 September 2013. Retrieved 28
July 2013.
41. Muehlhauser, Luke (11 August 2013). "What is
AGI?". Machine Intelligence Research
Institute. Archived from the original on 25 April 2014.
Retrieved 1 May 2014.
42. "What is Artificial General Intelligence (AGI)? | 4
Tests For Ensuring Artificial General
Intelligence". Talky Blog. 13 July 2019. Archived from
the original on 17 July 2019. Retrieved 17 July 2019.
43. Batson, Joshua. "Forget the Turing Test: Here's How
We Could Actually Measure AI". Wired. ISSN 1059-
1028. Retrieved 22 March 2025.
44. Turing 1950.
45. Turing, Alan (2004). B. Jack Copeland (ed.). Can
Automatic Calculating Machines Be Said To Think?
(1957). Oxford: Oxford University Press. pp. 487–
506. ISBN 978-0-1982-5079-1.
46. "Eugene Goostman is a real boy – the Turing Test
says so". The Guardian. 9 June 2014. ISSN 0261-
3077. Retrieved 3 March 2024.
47. "Scientists dispute whether computer 'Eugene
Goostman' passed Turing test". BBC News. 9 June
2014. Retrieved 3 March 2024.
48. Kirk-Giannini, Cameron Domenico; Goldstein, Simon
(16 October 2023). "AI is closer than ever to passing
the Turing test for 'intelligence'. What happens when it
does?". The Conversation. Retrieved 22
September 2024.
49. Jones, Cameron R.; Bergen, Benjamin K. (9 May
2024). "People cannot distinguish GPT-4 from a
human in a Turing test". arXiv:2405.08007 [cs.HC].
50. Jones, Cameron R.; Bergen, Benjamin K. (31 March
2025). "Large Language Models Pass the Turing
Test". arXiv:2503.23674 [cs.CL].
51. "AI model passes Turing Test better than a
human". The Independent. 9 April 2025. Retrieved 18
April 2025.
52. Varanasi, Lakshmi (21 March 2023). "AI models like
ChatGPT and GPT-4 are acing everything from the
bar exam to AP Biology. Here's a list of difficult exams
both AI versions have passed". Business Insider.
Retrieved 30 May 2023.
53. Naysmith, Caleb (7 February 2023). "6 Jobs Artificial
Intelligence Is Already Replacing and How Investors
Can Capitalize on It". Retrieved 30 May 2023.
54. Turk, Victoria (28 January 2015). "The Plan to
Replace the Turing Test with a 'Turing
Olympics'". Vice. Retrieved 3 March 2024.
55. Gopani, Avi (25 May 2022). "Turing Test is
unreliable. The Winograd Schema is obsolete. Coffee
is the answer". Analytics India Magazine. Retrieved 3
March 2024.
56. Bhaimiya, Sawdah (20 June 2023). "DeepMind's co-
founder suggested testing an AI chatbot's ability to
turn $100,000 into $1 million to measure human-like
intelligence". Business Insider. Retrieved 3
March 2024.
57. Suleyman, Mustafa (14 July 2023). "Mustafa
Suleyman: My new Turing test would see if AI can
make $1 million". MIT Technology Review.
Retrieved 3 March 2024.
58. Shapiro, Stuart C. (1992). "Artificial
Intelligence" (PDF). In Stuart C. Shapiro
(ed.). Encyclopedia of Artificial
Intelligence (Second ed.). New York: John Wiley.
pp. 54–57. Archived (PDF) from the original on 1
February 2016. (Section 4 is on "AI-Complete
Tasks".)
59. Yampolskiy, Roman V. (2012). Xin-She Yang
(ed.). "Turing Test as a Defining Feature of AI-
Completeness" (PDF). Artificial Intelligence,
Evolutionary Computation and Metaheuristics: 3–
17. Archived (PDF) from the original on 22 May 2013.
60. "AI Index: State of AI in 13 Charts". Stanford
University Human-Centered Artificial Intelligence. 15
April 2024. Retrieved 27 May 2024.
61. Crevier 1993, pp. 48–50
62. Kaplan, Andreas (2022). "Artificial Intelligence,
Business and Civilization – Our Fate Made in
Machines". Archived from the original on 6 May 2022.
Retrieved 12 March 2022.
63. Simon 1965, p. 96 quoted in Crevier 1993, p. 109
64. "Scientist on the Set: An Interview with Marvin
Minsky". Archived from the original on 16 July 2012.
Retrieved 5 April 2008.
65. Marvin Minsky to Darrach (1970), quoted in Crevier
(1993, p. 109).
66. Lighthill 1973; Howe 1994
67. National Research Council 1999, "Shift to Applied
Research Increases Investment".
68. Crevier 1993, pp. 115–117; Russell & Norvig 2003,
pp. 21–22.
69. Crevier 1993, p. 211, Russell & Norvig 2003, p. 24
and see also Feigenbaum & McCorduck 1983
70. Crevier 1993, pp. 161–162, 197–203, 240; Russell &
Norvig 2003, p. 25.
71. Crevier 1993, pp. 209–212
72. McCarthy, John (2000). "Reply to Lighthill". Stanford
University. Archived from the original on 30
September 2008. Retrieved 29 September 2007.
73. Markoff, John (14 October 2005). "Behind Artificial
Intelligence, a Squadron of Bright Real People". The
New York Times. Archived from the original on 2
February 2023. Retrieved 18 February 2017. At its
low point, some computer scientists and software
engineers avoided the term artificial intelligence for
fear of being viewed as wild-eyed dreamers.
74. Russell & Norvig 2003, pp. 25–26
75. "Trends in the Emerging Tech Hype Cycle". Gartner
Reports. Archived from the original on 22 May 2019.
Retrieved 7 May 2019.
76. Moravec 1988, p. 20
77. Harnad, S. (1990). "The Symbol Grounding
Problem". Physica D. 42 (1–3): 335–
346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H
. doi:10.1016/0167-2789(90)90087-6. S2CID 320430
0.
78. Gubrud 1997
79. Hutter, Marcus (2005). Universal Artificial
Intelligence: Sequential Decisions Based on
Algorithmic Probability. Texts in Theoretical Computer
Science. An EATCS Series.
Springer. doi:10.1007/b138233. ISBN 978-3-5402-
6877-2. S2CID 33352850. Archived from the original
on 19 July 2022. Retrieved 19 July 2022.
80. Legg, Shane (2008). Machine Super
Intelligence (PDF) (Thesis). University of
Lugano. Archived (PDF) from the original on 15 June
2022. Retrieved 19 July 2022.
81. Goertzel, Ben (2014). Artificial General Intelligence.
Lecture Notes in Computer Science. Vol. 8598.
Journal of Artificial General
Intelligence. doi:10.1007/978-3-319-09274-4.
ISBN 978-3-3190-9273-7. S2CID 8387410.
82. "Who coined the term
"AGI"?". goertzel.org. Archived from the original on 28
December 2018. Retrieved 28 December 2018.,
via Life 3.0: 'The term "AGI" was popularized by...
Shane Legg, Mark Gubrud and Ben Goertzel'
83. Wang & Goertzel 2007
84. "First International Summer School in Artificial
General Intelligence, Main summer school: June 22 –
July 3, 2009, OpenCog Lab: July 6-9,
2009". Archived from the original on 28 September
2020. Retrieved 11 May 2020.
85. "Избираеми дисциплини 2009/2010 – пролетен
триместър" [Elective courses 2009/2010 – spring
trimester]. Факултет по математика и
информатика [Faculty of Mathematics and
Informatics] (in Bulgarian). Archived from the original
on 26 July 2020. Retrieved 11 May 2020.
86. "Избираеми дисциплини 2010/2011 – зимен
триместър" [Elective courses 2010/2011 – winter
trimester]. Факултет по математика и
информатика [Faculty of Mathematics and
Informatics] (in Bulgarian). Archived from the original
on 26 July 2020. Retrieved 11 May 2020.
87. Shevlin, Henry; Vold, Karina; Crosby, Matthew;
Halina, Marta (4 October 2019). "The limits of
machine intelligence: Despite progress in machine
intelligence, artificial general intelligence is still a
major challenge". EMBO Reports. 20 (10):
e49177. doi:10.15252/embr.201949177. ISSN 1469-
221X. PMC 6776890. PMID 31531926.
88. "Microsoft Researchers Claim GPT-4 Is Showing
"Sparks" of AGI". Futurism. 23 March 2023.
Retrieved 13 December 2023.
89. Allen, Paul; Greaves, Mark (12 October 2011). "The
Singularity Isn't Near". MIT Technology Review.
Retrieved 17 September 2014.
90. Winfield, Alan. "Artificial intelligence will not turn into
a Frankenstein's monster". The
Guardian. Archived from the original on 17 September
2014. Retrieved 17 September 2014.
91. Deane, George (2022). "Machines That Feel and
Think: The Role of Affective Feelings and Mental
Action in (Artificial) General Intelligence". Artificial
Life. 28 (3): 289–309. doi:10.1162/artl_a_00368. ISS
N 1064-5462. PMID 35881678. S2CID 251069071.
92. Clocksin 2003.
93. Fjelland, Ragnar (17 June 2020). "Why general
artificial intelligence will not be realized". Humanities
and Social Sciences Communications. 7 (1): 1–9.
doi:10.1057/s41599-020-0494-4. hdl:11250/2726984.
ISSN 2662-9992. S2CID 219710554.
94. McCarthy 2007b.
95. Khatchadourian, Raffi (23 November 2015). "The
Doomsday Invention: Will artificial intelligence bring
us utopia or destruction?". The New
Yorker. Archived from the original on 28 January
2016. Retrieved 7 February 2016.
96. Müller, V. C., & Bostrom, N. (2016). Future progress
in artificial intelligence: A survey of expert opinion. In
Fundamental issues of artificial intelligence (pp. 555–
572). Springer, Cham.
97. Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re
Predicting AI—or Failing To.” In Beyond AI: Artificial
Dreams, edited by Jan Romportl, Pavel Ircing, Eva
Žáčková, Michal Polák and Radek Schuster, pp. 52–
75. Plzeň: University of West Bohemia.
98. "Microsoft Now Claims GPT-4 Shows 'Sparks' of
General Intelligence". 24 March 2023.
99. Shimek, Cary (6 July 2023). "AI Outperforms
Humans in Creativity Test". Neuroscience News.
Retrieved 20 October 2023.
100. Guzik, Erik E.; Byrge, Christian; Gilde, Christian (1
December 2023). "The originality of machines: AI
takes the Torrance Test". Journal of Creativity. 33 (3):
100065. doi:10.1016/j.yjoc.2023.100065. ISSN 2713-
3745. S2CID 261087185.
101. Arcas, Blaise Agüera y (10 October 2023). "Artificial
General Intelligence Is Already Here". Noema.
102. Zia, Tehseen (8 January 2024). "Unveiling of Large
Multimodal Models: Shaping the Landscape of
Language Models in 2024". Unite.ai. Retrieved 26
May 2024.
103. Shulman, Mikey; Fanelli, Alessio (14 March
2024). "Making Transformers Sing – with Mikey
Shulman of Suno". Latent Space. Retrieved 30
July 2025.
104. Simpson, Will (28 July 2025). "A huge moment, not
just for me, but for the future of music: AI 'creator'
signs record deal with Hallwood Media". MusicRadar.
Retrieved 30 July 2025.
105. "Introducing 4o Image Generation". OpenAI. 25
March 2025. Retrieved 30 July 2025.
106. "NVIDIA Announces Project GR00T Foundation
Model for Humanoid Robots" (Press release). Nvidia.
18 March 2024. Retrieved 30 July 2025.
107. Vadrevu, Kalyan Meher; Omotuyi, Oyindamola (18
March 2025). "Accelerate Generalist Humanoid Robot
Development with NVIDIA Isaac GR00T N1". NVIDIA
Technical Blog. Retrieved 30 July 2025.
108. "RT-2: New model translates vision and language
into action". Google DeepMind. 28 July 2023.
Retrieved 30 July 2025.
109. "Introducing OpenAI o1-preview". OpenAI. 12
September 2024.
110. Knight, Will. "OpenAI Announces a New AI Model,
Code-Named Strawberry, That Solves Difficult
Problems Step by Step". Wired. ISSN 1059-1028.
Retrieved 17 September 2024.
111. "OpenAI Employee Claims AGI Has Been
Achieved". Orbital Today. 13 December 2024.
Retrieved 27 December 2024.
112. "AI Index: State of AI in 13 Charts". hai.stanford.edu.
15 April 2024. Retrieved 7 June 2024.
113. "Next-Gen AI: OpenAI and Meta's Leap Towards
Reasoning Machines". Unite.ai. 19 April 2024.
Retrieved 7 June 2024.
114. James, Alex P. (2022). "The Why, What, and How of
Artificial General Intelligence Chip
Development". IEEE Transactions on Cognitive and
Developmental Systems. 14 (2): 333–
347. arXiv:2012.06338. Bibcode:2022ITCDS..14..333J.
doi:10.1109/TCDS.2021.3069871. ISSN 2379-8920.
S2CID 228376556.
115. Pei, Jing; Deng, Lei; Song, Sen; Zhao, Mingguo;
Zhang, Youhui; Wu, Shuang; Wang, Guanrui; Zou,
Zhe; Wu, Zhenzhi; He, Wei; Chen, Feng; Deng, Ning;
Wu, Si; Wang, Yu; Wu, Yujie (2019). "Towards
artificial general intelligence with hybrid Tianjic chip
architecture". Nature. 572 (7767): 106–111. Bibcod
e:2019Natur.572..106P. doi:10.1038/s41586-019-
1424-8. ISSN 1476-4687. PMID 31367028. S2CID 19
9056116. Archived from the original on 29 August
2022. Retrieved 29 August 2022.
116. Pandey, Mohit; Fernandez, Michael; Gentile,
Francesco; Isayev, Olexandr; Tropsha, Alexander;
Stern, Abraham C.; Cherkasov, Artem (March
2022). "The transformational role of GPU computing
and deep learning in drug discovery". Nature Machine
Intelligence. 4 (3): 211–221. doi:10.1038/s42256-022-
00463-x. ISSN 2522-5839. S2CID 252081559.
117. Goertzel & Pennachin 2006.
118. (Kurzweil 2005, p. 260)
119. Goertzel 2007.
120. Grace, Katja (2016). "Error in Armstrong and Sotala
2012". AI Impacts (blog). Archived from the original on
4 December 2020. Retrieved 24 August 2020.
121. Butz, Martin V. (1 March 2021). "Towards Strong
AI". KI – Künstliche Intelligenz. 35 (1): 91–
101. doi:10.1007/s13218-021-00705-x. ISSN 1610-
1987. S2CID 256065190.
122. Liu, Feng; Shi, Yong; Liu, Ying (2017). "Intelligence
Quotient and Intelligence Grade of Artificial
Intelligence". Annals of Data Science. 4 (2): 179–
191. arXiv:1709.10242. doi:10.1007/s40745-017-
0109-0. S2CID 37900130.
123. Brien, Jörn (5 October 2017). "Google-KI doppelt so
schlau wie Siri" [Google AI is twice as smart as Siri –
but a six-year-old beats both] (in
German). Archived from the original on 3 January
2019. Retrieved 2 January 2019.