Artificial general intelligence (AGI)—sometimes called human-level intelligence AI—is a type of artificial
intelligence that would match or surpass human capabilities across virtually all cognitive tasks.[1][2]
Some researchers argue that state-of-the-art large language models (LLMs) already exhibit signs of AGI-level
capability, while others maintain that genuine AGI has not yet been achieved.[3] Beyond AGI, artificial
superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin.[4]
Unlike artificial narrow intelligence (ANI), whose competence is confined to well-defined tasks, an AGI system can
generalise knowledge, transfer skills between domains, and solve novel problems without task-specific
reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model
—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long
as human-level breadth and proficiency are achieved.[5]
Creating AGI is a primary goal of AI research and of companies such as OpenAI,[6] Google,[7] xAI,[8] and Meta.[9] A
2020 survey identified 72 active AGI research and development projects across 37 countries.[10]
The timeline for achieving human-level intelligence AI remains deeply contested. Recent surveys of AI
researchers give median forecasts ranging from the late 2020s to mid-century, while still recording significant
numbers who expect arrival much sooner—or never at all.[11][12][13] There is debate over the exact definition of
AGI and over whether modern LLMs such as GPT-4 are early forms of emerging AGI.[3] AGI is a common topic
in science fiction and futures studies.[14][15]
Contention exists over whether AGI represents an existential risk.[16][17][18] Many AI experts have stated that
mitigating the risk of human extinction posed by AGI should be a global priority.[19][20] Others find the development
of AGI to be at too remote a stage to present such a risk.[21][22]
Terminology
AGI is also known as strong AI,[23][24] full AI,[25] human-level AI,[26] human-level intelligent AI, or general intelligent
action.[27]
Some academic sources reserve the term "strong AI" for computer programs that
experience sentience or consciousness.[a] In contrast, weak AI (or narrow AI) is able to solve one specific problem
but lacks general cognitive abilities.[28][24] Some academic sources use "weak AI" to refer more broadly to any
programs that neither experience consciousness nor have a mind in the same sense as humans.[a]
Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a
hypothetical type of AGI that is much more generally intelligent than humans,[29] while the notion of transformative
AI relates to AI having a large impact on society, comparable to that of the agricultural or industrial revolution.[30]
A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google
DeepMind researchers.[31] They define five performance levels of AGI: emerging, competent, expert, virtuoso, and
superhuman.[31] For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide
range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with
a threshold of 100%.[31] They consider large language models like ChatGPT or LLaMA 2 to be instances of
emerging AGI (comparable to unskilled humans).[31] Regarding the autonomy of AGI and associated risks, they
define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).[32]
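As a rough, unofficial sketch of this framework, the two scales can be modeled as Python enums. The 50% and 100% thresholds come from the definitions above; the intermediate 90th- and 99th-percentile thresholds for expert and virtuoso are assumptions that should be checked against the DeepMind proposal:

```python
from enum import Enum

class PerformanceLevel(Enum):
    """DeepMind's proposed AGI performance levels, keyed by the percentile of
    skilled adults the system outperforms. 50 and 100 come from the text above;
    the 90 and 99 thresholds are illustrative assumptions."""
    EMERGING = 0       # comparable to an unskilled human
    COMPETENT = 50     # outperforms 50% of skilled adults
    EXPERT = 90        # assumed threshold
    VIRTUOSO = 99      # assumed threshold
    SUPERHUMAN = 100   # outperforms all skilled adults (an ASI)

class AutonomyLevel(Enum):
    """The five autonomy levels, from fully human-controlled to fully autonomous."""
    TOOL = 1
    CONSULTANT = 2
    COLLABORATOR = 3
    EXPERT = 4
    AGENT = 5

def classify(percentile_outperformed: float) -> PerformanceLevel:
    """Map a measured percentile to the highest level whose threshold it meets."""
    best = PerformanceLevel.EMERGING
    for level in PerformanceLevel:  # members iterate in definition order
        if percentile_outperformed >= level.value:
            best = level
    return best
```

Under this sketch, a system that outperforms 55% of skilled adults across a wide range of non-physical tasks would classify as competent, matching the example definition quoted above.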
Characteristics
Main article: Artificial intelligence
Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test.
However, there are other well-known definitions, and some researchers disagree with the more popular
approaches.[b]
Intelligence traits
Researchers generally hold that a system must be able to do all of the following to be regarded as an AGI:[34]
reason, use strategy, solve puzzles, and make judgments under uncertainty
represent knowledge, including common sense knowledge
plan
learn
communicate in natural language
if necessary, integrate these skills in completion of any given goal
Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making)
consider additional traits such as imagination (the ability to form novel mental images and concepts)[35]
and autonomy.[36]
Computer-based systems that exhibit many of these capabilities exist (e.g. see computational
creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent).
There is debate about whether modern AI systems possess them to an adequate degree.[37]
Physical traits
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its
expression.[38]
Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change
location to explore, etc.) can be desirable for some intelligent systems,[38] these physical capabilities are not strictly
required for an entity to qualify as AGI—particularly under the thesis that large language models (LLMs) may
already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an
AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process
input (language) from the external world in place of human senses. This interpretation aligns with the
understanding that AGI has never been tied to a particular physical embodiment and thus does not demand a
capacity for locomotion or traditional "eyes and ears".[39] It can be regarded as sufficient for an intelligent computer
to interact with other systems, invoking or regulating them to achieve specific goals, including altering a physical
environment, as the fictional HAL 9000 in the motion picture 2001: A Space Odyssey was both programmed and
tasked to do.[40]
The Turing Test (Turing)
In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy,
reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this
claim was met with significant skepticism from the AI research community, who questioned the test's
implementation and its relevance to AGI.[46][47]
In 2023, it was claimed that AI is "closer than ever" to passing the Turing test, though the article's authors
stressed that imitation (which large language models ever closer to passing the test are built upon) is not
synonymous with "intelligence". Further, as AI intelligence and human intelligence may differ, "passing
the Turing test is good evidence a system is intelligent, failing it is not good evidence a system is not
intelligent."[48]
A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized,
controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind
actual humans (67%).[49]
A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen
showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations—surpassing
the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed
the test.[50][51]
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and
obtaining a degree. LLMs can now pass university degree-level exams without even attending the
classes.[52]
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. AIs are
now replacing humans in many roles as varied as fast food and marketing.[53]
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack
product, then controls a robot to assemble the furniture correctly.[54]
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the
coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper
buttons.[55] Robots developed by Figure AI and other robotics companies can perform tasks like this.
The Modern Turing Test (Suleyman)
An AI model is given $100,000 and has to obtain $1 million.[56][57]
AI-complete problems
Main article: AI-complete
A problem is informally called "AI-complete" or "AI-hard" if it
is believed that in order to solve it, one would need to
implement AGI, because the solution is beyond the
capabilities of a purpose-specific algorithm.[58]
History
Classical AI
Main articles: History of artificial intelligence and Symbolic
artificial intelligence
Modern AI research began in the mid-1950s.[61] The first
generation of AI researchers were convinced that artificial
general intelligence was possible and that it would exist in
just a few decades.[62] AI pioneer Herbert A. Simon wrote in
1965: "machines will be capable, within twenty years, of
doing any work a man can do."[63]
Narrow AI research
Main article: Artificial intelligence
In the 1990s and early 21st century, mainstream AI achieved
commercial success and academic respectability by
focusing on specific sub-problems where AI can produce
verifiable results and commercial applications, such
as speech recognition and recommendation algorithms.[74]
These "applied AI" systems are now used extensively
throughout the technology industry, and research in this vein
is heavily funded in both academia and industry. As of 2018,
development in this field was considered an emerging trend,
and a mature stage was expected to be reached in more
than 10 years.[75]
Feasibility
(Figure: surveys about when experts expect artificial general intelligence.[26])
Timescales
"The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most
people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer
away. Obviously, I no longer think that." – Geoffrey Hinton
Early estimates
Current research
The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly
detailed and publicly accessible atlas of the human brain.[140] In 2023, researchers from Duke University
performed a high-resolution scan of a mouse brain.
Criticisms of simulation-based approaches
The artificial neuron model assumed by Kurzweil and used
in many current artificial neural network implementations is
simple compared with biological neurons. A brain simulation
would likely have to capture the detailed cellular behaviour
of biological neurons, presently understood only in broad
outline. The overhead introduced by full modeling of the
biological, chemical, and physical details of neural behaviour
(especially on a molecular scale) would require
computational powers several orders of magnitude larger
than Kurzweil's estimate. In addition, the estimates do not
account for glial cells, which are known to play a role in
cognitive processes.[141]
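To make the scale involved concrete, here is a hedged back-of-envelope calculation in Python. The neuron count, synapse count, and firing rate are commonly cited ballpark figures rather than values from this article, and the molecular-detail overhead factors are pure assumptions:

```python
# Back-of-envelope estimate of brain-simulation compute, in Kurzweil-style
# "calculations per second" (cps). All figures are rough assumptions.
neurons = 1e11             # ~100 billion neurons (commonly cited ballpark)
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron (assumption)
firing_rate_hz = 1e2       # ~100 signals per second (assumption)

# Simple-neuron model: one calculation per synaptic event per second.
simple_model_cps = neurons * synapses_per_neuron * firing_rate_hz

# Hypothetical overhead factors for modeling chemical/molecular detail:
# each extra factor of 10^3 pushes the requirement three orders of
# magnitude beyond the simple-neuron estimate.
for overhead in (1, 1e3, 1e6):
    print(f"overhead {overhead:g}x -> ~{simple_model_cps * overhead:.0e} cps")
```

The point is not the specific numbers but how quickly plausible biological detail multiplies the requirement, which is why estimates built on simple artificial neurons can understate the cost by several orders of magnitude.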
Philosophical perspective
See also: Philosophy of artificial intelligence and Turing test
Consciousness
Main article: Artificial consciousness
Consciousness can have various meanings, and some aspects play significant roles in science fiction and
the ethics of artificial intelligence.
Benefits
AGI could improve productivity and efficiency in most jobs.
For example, in public health, AGI could accelerate medical
research, notably against cancer.[155] It could take care of the
elderly,[156] and democratize access to rapid, high-quality
medical diagnostics. It could offer fun, inexpensive and
personalized education.[156] The need to work to subsist
could become obsolete if the wealth produced is
properly redistributed.[156][157] This also raises the question of
the place of humans in a radically automated society.
AGI could also help to make rational decisions, and to
anticipate and prevent disasters. It could also help to reap
the benefits of potentially catastrophic technologies such
as nanotechnology or climate engineering, while avoiding
the associated risks.[158] If an AGI's primary goal is to prevent
existential catastrophes such as human extinction (which
could be difficult if the Vulnerable World Hypothesis turns
out to be true),[159] it could take measures to drastically
reduce the risks[158] while minimizing the impact of these
measures on our quality of life.
Revitalising environmental conservation and biodiversity
AGI could significantly contribute to preserving the natural
environment and protecting endangered species. By
analyzing satellite imagery, climate data, and wildlife
patterns, AGI systems could identify environmental threats
earlier and recommend targeted conservation strategies.[169]
AGI could help optimize land use, monitor illegal activities
like poaching or deforestation in real-time, and support
global efforts to restore ecosystems. Advanced predictive
models developed by AGI could also assist in reversing
biodiversity loss, ensuring the survival of critical species and
maintaining ecological balance.[170]
Risks
Existential risks
Main articles: Existential risk from artificial general
intelligence and AI safety
AGI may represent multiple types of existential risk, which
are risks that threaten "the premature extinction of Earth-
originating intelligent life or the permanent and drastic
destruction of its potential for desirable future development".[172]
The risk of human extinction from AGI has been the topic
of many debates, but there is also the possibility that the
development of AGI would lead to a permanently flawed
future. Notably, it could be used to spread and preserve the
set of values of whoever develops it. If humanity still has
moral blind spots similar to slavery in the past, AGI might
irreversibly entrench them, preventing moral progress.[173]
Furthermore, AGI could facilitate mass surveillance and
indoctrination, which could be used to create an entrenched
repressive worldwide totalitarian regime.[174][175] There is also a
risk for the machines themselves. If machines that are
sentient or otherwise worthy of moral consideration are
mass created in the future, engaging in a civilizational path
that indefinitely neglects their welfare and interests could be
an existential catastrophe.[176][177] Considering how much AGI
could improve humanity's future and help reduce other
existential risks, Toby Ord calls these existential risks "an
argument for proceeding with due caution", not for
"abandoning AI".[174]
Mass unemployment
Further information: Technological unemployment
Researchers from OpenAI estimated that "80% of the U.S.
workforce could have at least 10% of their work tasks
affected by the introduction of LLMs, while around 19% of
workers may see at least 50% of their tasks impacted".[193][194]
They consider office workers to be the most exposed, for
example mathematicians, accountants or web designers.[194]
Compared with current systems, AGI could have greater autonomy and a greater ability to make
decisions, to interface with other computer tools, and to control robotic bodies.
See also
Artificial brain – Software and hardware with cognitive
abilities similar to those of the animal or human brain
AI effect
AI safety – Research area on making AI safe and
beneficial
AI alignment – AI conformance to the intended
objective
A.I. Rising – 2018 film directed by Lazar Bodroža
Artificial intelligence
Automated machine learning – Process of automating
the application of machine learning
BRAIN Initiative – Public-private research initiative
China Brain Project – Chinese neuroscience program
Future of Humanity Institute – Defunct Oxford
interdisciplinary research centre
General game playing – Ability of artificial intelligence to
play different games
Generative artificial intelligence – Subset of AI using
generative models
Human Brain Project – Scientific research project
Intelligence amplification – Use of information
technology to augment human intelligence (IA)
Machine ethics – Moral behaviours of man-made
machines
Universal psychometrics
Moravec's paradox
Multi-task learning – Solving multiple machine learning
tasks at the same time
Neural scaling law – Statistical law in machine learning
Outline of artificial intelligence – Overview of and topical
guide to artificial intelligence
Transhumanism – Philosophical movement
Synthetic intelligence – Alternate term for or form of
artificial intelligence
Transfer learning – Machine learning technique
Loebner Prize – Annual AI competition
Hardware for artificial intelligence – Hardware specially
designed and optimized for artificial intelligence
Weak artificial intelligence – Form of artificial
intelligence
Notes
1. See below for the origin of the term "strong AI", and
see the academic definition of "strong AI" and weak AI
in the article Chinese room.
2. AI founder John McCarthy writes: "we cannot yet
characterize in general what kinds of computational
procedures we want to call intelligent."[33] (For a
discussion of some definitions of intelligence used
by artificial intelligence researchers, see philosophy of
artificial intelligence.)
3. The Lighthill report specifically criticized AI's
"grandiose objectives" and led to the dismantling of AI
research in England.[66] In the U.S., DARPA became
determined to fund only "mission-oriented direct
research, rather than basic undirected research".[67][68]
4. As AI founder John McCarthy writes "it would be a
great relief to the rest of the workers in AI if the
inventors of new general formalisms would express
their hopes in a more guarded form than has
sometimes been the case."[72]
5. In "Mind Children",[138] 10^15 cps is used. More recently,
in 1997,[139] Moravec argued for 10^8 MIPS, which would
roughly correspond to 10^14 cps. Moravec talks in terms
of MIPS, not "cps", which is a non-standard term
Kurzweil introduced.
6. As defined in a standard AI textbook: "The assertion
that machines could possibly act intelligently (or,
perhaps better, act as if they were intelligent) is called
the 'weak AI' hypothesis by philosophers, and the
assertion that machines that do so are actually
thinking (as opposed to simulating thinking) is called
the 'strong AI' hypothesis."[137]
7. Alan Turing made this point in 1950.[44]
References
1. Goertzel, Ben (2014). "Artificial General Intelligence:
Concept, State of the Art, and Future
Prospects". Journal of Artificial General
Intelligence. 5 (1): 1–48. Bibcode:2014JAGI....5....1G.
doi:10.2478/jagi-2014-0001.
2. Lake, Brenden; Ullman, Tom; Tenenbaum, Joshua;
Gershman, Samuel (2017). "Building machines that
learn and think like people". Behavioral and Brain
Sciences. 40: e253. arXiv:1604.00289.
doi:10.1017/S0140525X16001837. PMID 27881212.
3. Bubeck, Sébastien (2023). "Sparks of Artificial
General Intelligence: Early Experiments with
GPT-4". arXiv:2303.12712 [cs.CL].
4. Bostrom, Nick (2014). Superintelligence: Paths,
Dangers, Strategies. Oxford University Press.
5. Legg, Shane (2023). Why AGI Might Not Need
Agency. Proceedings of the Conference on Artificial
General Intelligence.
6. "OpenAI Charter". OpenAI. Retrieved 6
April 2023. Our mission is to ensure that artificial
general intelligence benefits all of humanity.
7. Grant, Nico (27 February 2025). "Google's Sergey
Brin Asks Workers to Spend More Time In the
Office". The New York Times. ISSN 0362-4331.
Retrieved 1 March 2025.
8. Newsham, Jack. "Tesla said xAI stands for
"eXploratory Artificial Intelligence." It's not clear where
it got that". Business Insider. Retrieved 20
September 2025.
9. Heath, Alex (18 January 2024). "Mark Zuckerberg's
new goal is creating artificial general
intelligence". The Verge. Retrieved 13 June 2024. Our
vision is to build AI that is better than human-level at
all of the human senses.
10. Baum, Seth D. (2020). A Survey of Artificial General
Intelligence Projects for Ethics, Risk, and
Policy (PDF) (Report). Global Catastrophic Risk
Institute. Retrieved 28 November 2024. 72 AGI R&D
projects were identified as being active in 2020.
11. "Shrinking AGI timelines: a review of expert
forecasts". 80,000 Hours. 21 March 2025.
Retrieved 18 April 2025.
12. "How the U.S. Public and AI Experts View Artificial
Intelligence". Pew Research Center. 3 April 2025.
Retrieved 18 April 2025.
13. "AI timelines: What do experts in artificial intelligence
expect for the future?". Our World in Data. 7 February
2023. Retrieved 18 April 2025.
14. Butler, Octavia E. (1993). Parable of the Sower.
Grand Central Publishing. ISBN 978-0-4466-7550-5.
All that you touch you change. All that you change
changes you.
15. Vinge, Vernor (1992). A Fire Upon the Deep. Tor
Books. ISBN 978-0-8125-1528-2. The Singularity is
coming.
16. Morozov, Evgeny (30 June 2023). "The True Threat
of Artificial Intelligence". The New York Times. The
real threat is not AI itself but the way we deploy it.
17. "Impressed by artificial intelligence? Experts say AGI
is coming next, and it has 'existential' risks". ABC
News. 23 March 2023. Retrieved 6 April 2023. AGI
could pose existential risks to humanity.
18. Bostrom, Nick (2014). Superintelligence: Paths,
Dangers, Strategies. Oxford University
Press. ISBN 978-0-1996-7811-2. The first
superintelligence will be the last invention that
humanity needs to make.
19. Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of
Extinction', Industry Leaders Warn". The New York
Times. Mitigating the risk of extinction from AI should
be a global priority.
20. "Statement on AI Risk". Center for AI Safety.
Retrieved 1 March 2024. AI experts warn of risk of
extinction from AI.
21. Mitchell, Melanie (30 May 2023). "Are AI's Doomsday
Scenarios Worth Taking Seriously?". The New York
Times. We are far from creating machines that can
outthink us in general ways.
22. LeCun, Yann (June 2023). "AGI does not present an
existential risk". Medium. There is no reason to fear AI
as an existential threat.
23. Kurzweil 2005, p. 260.
24. Kurzweil, Ray (5 August 2005), "Long Live
AI", Forbes, archived from the original on 14 August
2005: Kurzweil describes strong AI as "machine
intelligence with the full range of human intelligence."
25. "The Age of Artificial Intelligence: George John at
TEDxLondonBusinessSchool 2013". Archived from
the original on 26 February 2014. Retrieved 22
February 2014.
26. Roser, Max (7 February 2023). "AI timelines: What
do experts in artificial intelligence expect for the
future?". Our World in Data. Retrieved 6 April 2023.
27. Newell & Simon 1976, This is the term they use for
"human-level" intelligence in the physical symbol
system hypothesis.
28. "The Open University on Strong and Weak AI".
Archived from the original on 25 September 2009.
Retrieved 8 October 2007.
29. "What is artificial superintelligence (ASI)? | Definition
from TechTarget". Enterprise AI. Retrieved 8
October 2023.
30. Roser, Max (15 December 2022). "Artificial
intelligence is transforming our world – it is on all of us
to make sure that it goes well". Our World in Data.
Retrieved 8 October 2023.
31. "Google DeepMind's Six Levels of
AGI". aibusiness.com. Retrieved 20 July 2025.
32. Dickson, Ben (16 November 2023). "Here is how far
we are to achieving AGI, according to
DeepMind". VentureBeat.
33. McCarthy, John (2007a). "Basic Questions". Stanford
University. Archived from the original on 26 October
2007. Retrieved 6 December 2007.
34. This list of intelligent traits is based on the topics
covered by major AI textbooks, including: Russell &
Norvig 2003, Luger & Stubblefield 2004, Poole,
Mackworth & Goebel 1998 and Nilsson 1998.
35. Johnson 1987
36. de Charms, R. (1968). Personal causation. New
York: Academic Press.
37. Van Eyghen, Hans (2025). "AI Algorithms as
(Un)virtuous Knowers". Discover Artificial
Intelligence. 5 (2): 2. doi:10.1007/s44163-024-00219-z.
38. Pfeifer, R. and Bongard J. C., How the body shapes
the way we think: a new view of intelligence (The MIT
Press, 2007). ISBN 0-2621-6239-3
39. White, R. W. (1959). "Motivation reconsidered: The
concept of competence". Psychological
Review. 66 (5): 297–333. doi:10.1037/h0040934.
PMID 13844397. S2CID 37385966.
40. "HAL 9000". Robot Hall of Fame. Robot Hall of
Fame, Carnegie Science Center. Archived from the
original on 17 September 2013. Retrieved 28
July 2013.
41. Muehlhauser, Luke (11 August 2013). "What is
AGI?". Machine Intelligence Research
Institute. Archived from the original on 25 April 2014.
Retrieved 1 May 2014.
42. "What is Artificial General Intelligence (AGI)? | 4
Tests For Ensuring Artificial General
Intelligence". Talky Blog. 13 July 2019. Archived from
the original on 17 July 2019. Retrieved 17 July 2019.
43. Batson, Joshua. "Forget the Turing Test: Here's How
We Could Actually Measure AI". Wired. ISSN 1059-1028.
Retrieved 22 March 2025.
44. Turing 1950.
45. Turing, Alan (2004). B. Jack Copeland (ed.). Can
Automatic Calculating Machines Be Said To Think?
(1957). Oxford: Oxford University Press. pp. 487–
506. ISBN 978-0-1982-5079-1.
46. "Eugene Goostman is a real boy – the Turing Test
says so". The Guardian. 9 June 2014. ISSN 0261-3077.
Retrieved 3 March 2024.
47. "Scientists dispute whether computer 'Eugene
Goostman' passed Turing test". BBC News. 9 June
2014. Retrieved 3 March 2024.
48. Kirk-Giannini, Cameron Domenico; Goldstein, Simon
(16 October 2023). "AI is closer than ever to passing
the Turing test for 'intelligence'. What happens when it
does?". The Conversation. Retrieved 22
September 2024.
49. Jones, Cameron R.; Bergen, Benjamin K. (9 May
2024). "People cannot distinguish GPT-4 from a
human in a Turing test". arXiv:2405.08007 [cs.HC].
50. Jones, Cameron R.; Bergen, Benjamin K. (31 March
2025). "Large Language Models Pass the Turing
Test". arXiv:2503.23674 [cs.CL].
51. "AI model passes Turing Test better than a
human". The Independent. 9 April 2025. Retrieved 18
April 2025.
52. Varanasi, Lakshmi (21 March 2023). "AI models like
ChatGPT and GPT-4 are acing everything from the
bar exam to AP Biology. Here's a list of difficult exams
both AI versions have passed". Business Insider.
Retrieved 30 May 2023.
53. Naysmith, Caleb (7 February 2023). "6 Jobs Artificial
Intelligence Is Already Replacing and How Investors
Can Capitalize on It". Retrieved 30 May 2023.
54. Turk, Victoria (28 January 2015). "The Plan to
Replace the Turing Test with a 'Turing
Olympics'". Vice. Retrieved 3 March 2024.
55. Gopani, Avi (25 May 2022). "Turing Test is
unreliable. The Winograd Schema is obsolete. Coffee
is the answer". Analytics India Magazine. Retrieved 3
March 2024.
56. Bhaimiya, Sawdah (20 June 2023). "DeepMind's co-
founder suggested testing an AI chatbot's ability to
turn $100,000 into $1 million to measure human-like
intelligence". Business Insider. Retrieved 3
March 2024.
57. Suleyman, Mustafa (14 July 2023). "Mustafa
Suleyman: My new Turing test would see if AI can
make $1 million". MIT Technology Review.
Retrieved 3 March 2024.
58. Shapiro, Stuart C. (1992). "Artificial
Intelligence" (PDF). In Stuart C. Shapiro
(ed.). Encyclopedia of Artificial
Intelligence (Second ed.). New York: John Wiley.
pp. 54–57. Archived (PDF) from the original on 1
February 2016. (Section 4 is on "AI-Complete
Tasks".)
59. Yampolskiy, Roman V. (2012). Xin-She Yang
(ed.). "Turing Test as a Defining Feature of AI-
Completeness" (PDF). Artificial Intelligence,
Evolutionary Computation and Metaheuristics: 3–
17. Archived (PDF) from the original on 22 May 2013.
60. "AI Index: State of AI in 13 Charts". Stanford
University Human-Centered Artificial Intelligence. 15
April 2024. Retrieved 27 May 2024.
61. Crevier 1993, pp. 48–50
62. Kaplan, Andreas (2022). "Artificial Intelligence,
Business and Civilization – Our Fate Made in
Machines". Archived from the original on 6 May 2022.
Retrieved 12 March 2022.
63. Simon 1965, p. 96 quoted in Crevier 1993, p. 109
64. "Scientist on the Set: An Interview with Marvin
Minsky". Archived from the original on 16 July 2012.
Retrieved 5 April 2008.
65. Marvin Minsky to Darrach (1970), quoted in Crevier
(1993, p. 109).
66. Lighthill 1973; Howe 1994
67. National Research Council 1999, "Shift to Applied
Research Increases Investment".
68. Crevier 1993, pp. 115–117; Russell & Norvig 2003,
pp. 21–22.
69. Crevier 1993, p. 211, Russell & Norvig 2003, p. 24
and see also Feigenbaum & McCorduck 1983
70. Crevier 1993, pp. 161–162, 197–203, 240; Russell &
Norvig 2003, p. 25.
71. Crevier 1993, pp. 209–212
72. McCarthy, John (2000). "Reply to Lighthill". Stanford
University. Archived from the original on 30
September 2008. Retrieved 29 September 2007.
73. Markoff, John (14 October 2005). "Behind Artificial
Intelligence, a Squadron of Bright Real People". The
New York Times. Archived from the original on 2
February 2023. Retrieved 18 February 2017. At its
low point, some computer scientists and software
engineers avoided the term artificial intelligence for
fear of being viewed as wild-eyed dreamers.
74. Russell & Norvig 2003, pp. 25–26
75. "Trends in the Emerging Tech Hype Cycle". Gartner
Reports. Archived from the original on 22 May 2019.
Retrieved 7 May 2019.
76. Moravec 1988, p. 20
77. Harnad, S. (1990). "The Symbol Grounding
Problem". Physica D. 42 (1–3): 335–
346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H.
doi:10.1016/0167-2789(90)90087-6. S2CID 3204300.
78. Gubrud 1997
79. Hutter, Marcus (2005). Universal Artificial
Intelligence: Sequential Decisions Based on
Algorithmic Probability. Texts in Theoretical Computer
Science an EATCS Series.
Springer. doi:10.1007/b138233. ISBN 978-3-5402-6877-2.
S2CID 33352850. Archived from the original
on 19 July 2022. Retrieved 19 July 2022.
80. Legg, Shane (2008). Machine Super
Intelligence (PDF) (Thesis). University of
Lugano. Archived (PDF) from the original on 15 June
2022. Retrieved 19 July 2022.
81. Goertzel, Ben (2014). Artificial General Intelligence.
Lecture Notes in Computer Science. Vol. 8598.
Journal of Artificial General
Intelligence. doi:10.1007/978-3-319-09274-4.
ISBN 978-3-3190-9273-7. S2CID 8387410.
82. "Who coined the term
"AGI"?". goertzel.org. Archived from the original on 28
December 2018. Retrieved 28 December 2018.,
via Life 3.0: 'The term "AGI" was popularized by...
Shane Legg, Mark Gubrud and Ben Goertzel'
83. Wang & Goertzel 2007
84. "First International Summer School in Artificial
General Intelligence, Main summer school: June 22 –
July 3, 2009, OpenCog Lab: July 6-9,
2009". Archived from the original on 28 September
2020. Retrieved 11 May 2020.
85. "Избираеми дисциплини 2009/2010 – пролетен
триместър" [Elective courses 2009/2010 – spring
trimester]. Факултет по математика и
информатика [Faculty of Mathematics and
Informatics] (in Bulgarian). Archived from the original
on 26 July 2020. Retrieved 11 May 2020.
86. "Избираеми дисциплини 2010/2011 – зимен
триместър" [Elective courses 2010/2011 – winter
trimester]. Факултет по математика и
информатика [Faculty of Mathematics and
Informatics] (in Bulgarian). Archived from the original
on 26 July 2020. Retrieved 11 May 2020.
87. Shevlin, Henry; Vold, Karina; Crosby, Matthew; Halina, Marta (4 October 2019). "The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge". EMBO Reports. 20 (10): e49177. doi:10.15252/embr.201949177. ISSN 1469-221X. PMC 6776890. PMID 31531926.
88. "Microsoft Researchers Claim GPT-4 Is Showing
"Sparks" of AGI". Futurism. 23 March 2023.
Retrieved 13 December 2023.
89. Allen, Paul; Greaves, Mark (12 October 2011). "The
Singularity Isn't Near". MIT Technology Review.
Retrieved 17 September 2014.
90. Winfield, Alan. "Artificial intelligence will not turn into
a Frankenstein's monster". The
Guardian. Archived from the original on 17 September
2014. Retrieved 17 September 2014.
91. Deane, George (2022). "Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence". Artificial Life. 28 (3): 289–309. doi:10.1162/artl_a_00368. ISSN 1064-5462. PMID 35881678. S2CID 251069071.
92. Clocksin 2003.
93. Fjelland, Ragnar (17 June 2020). "Why general artificial intelligence will not be realized". Humanities and Social Sciences Communications. 7 (1) 10: 1–9. doi:10.1057/s41599-020-0494-4. hdl:11250/2726984. ISSN 2662-9992. S2CID 219710554.
94. McCarthy 2007b.
95. Khatchadourian, Raffi (23 November 2015). "The
Doomsday Invention: Will artificial intelligence bring
us utopia or destruction?". The New
Yorker. Archived from the original on 28 January
2016. Retrieved 7 February 2016.
96. Müller, Vincent C.; Bostrom, Nick (2016). "Future progress in artificial intelligence: A survey of expert opinion". Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 555–572.
97. Armstrong, Stuart; Sotala, Kaj (2012). "How We're Predicting AI—or Failing To". In Romportl, Jan; Ircing, Pavel; Žáčková, Eva; Polák, Michal; Schuster, Radek (eds.). Beyond AI: Artificial Dreams. Plzeň: University of West Bohemia. pp. 52–75.
98. "Microsoft Now Claims GPT-4 Shows 'Sparks' of
General Intelligence". 24 March 2023.
99. Shimek, Cary (6 July 2023). "AI Outperforms
Humans in Creativity Test". Neuroscience News.
Retrieved 20 October 2023.
100. Guzik, Erik E.; Byrge, Christian; Gilde, Christian (1 December 2023). "The originality of machines: AI takes the Torrance Test". Journal of Creativity. 33 (3): 100065. doi:10.1016/j.yjoc.2023.100065. ISSN 2713-3745. S2CID 261087185.
101. Arcas, Blaise Agüera y (10 October 2023). "Artificial
General Intelligence Is Already Here". Noema.
102. Zia, Tehseen (8 January 2024). "Unveiling of Large
Multimodal Models: Shaping the Landscape of
Language Models in 2024". Unite.ai. Retrieved 26
May 2024.
103. Shulman, Mikey; Fanelli, Alessio (14 March
2024). "Making Transformers Sing – with Mikey
Shulman of Suno". Latent Space. Retrieved 30
July 2025.
104. Simpson, Will (28 July 2025). "A huge moment, not
just for me, but for the future of music: AI 'creator'
signs record deal with Hallwood Media". MusicRadar.
Retrieved 30 July 2025.
105. "Introducing 4o Image Generation". OpenAI. 25
March 2025. Retrieved 30 July 2025.
106. "NVIDIA Announces Project GR00T Foundation
Model for Humanoid Robots" (Press release). Nvidia.
18 March 2024. Retrieved 30 July 2025.
107. Vadrevu, Kalyan Meher; Omotuyi, Oyindamola (18
March 2025). "Accelerate Generalist Humanoid Robot
Development with NVIDIA Isaac GR00T N1". NVIDIA
Technical Blog. Retrieved 30 July 2025.
108. "RT-2: New model translates vision and language
into action". Google DeepMind. 28 July 2023.
Retrieved 30 July 2025.
109. "Introducing OpenAI o1-preview". OpenAI. 12
September 2024.
110. Knight, Will. "OpenAI Announces a New AI Model,
Code-Named Strawberry, That Solves Difficult
Problems Step by Step". Wired. ISSN 1059-1028.
Retrieved 17 September 2024.
111. "OpenAI Employee Claims AGI Has Been
Achieved". Orbital Today. 13 December 2024.
Retrieved 27 December 2024.
112. "AI Index: State of AI in 13 Charts". hai.stanford.edu.
15 April 2024. Retrieved 7 June 2024.
113. "Next-Gen AI: OpenAI and Meta's Leap Towards
Reasoning Machines". Unite.ai. 19 April 2024.
Retrieved 7 June 2024.
114. James, Alex P. (2022). "The Why, What, and How of Artificial General Intelligence Chip Development". IEEE Transactions on Cognitive and Developmental Systems. 14 (2): 333–347. arXiv:2012.06338. Bibcode:2022ITCDS..14..333J. doi:10.1109/TCDS.2021.3069871. ISSN 2379-8920. S2CID 228376556.
115. Pei, Jing; Deng, Lei; Song, Sen; Zhao, Mingguo; Zhang, Youhui; Wu, Shuang; Wang, Guanrui; Zou, Zhe; Wu, Zhenzhi; He, Wei; Chen, Feng; Deng, Ning; Wu, Si; Wang, Yu; Wu, Yujie (2019). "Towards artificial general intelligence with hybrid Tianjic chip architecture". Nature. 572 (7767): 106–111. Bibcode:2019Natur.572..106P. doi:10.1038/s41586-019-1424-8. ISSN 1476-4687. PMID 31367028. S2CID 199056116. Archived from the original on 29 August 2022. Retrieved 29 August 2022.
116. Pandey, Mohit; Fernandez, Michael; Gentile,
Francesco; Isayev, Olexandr; Tropsha, Alexander;
Stern, Abraham C.; Cherkasov, Artem (March
2022). "The transformational role of GPU computing
and deep learning in drug discovery". Nature Machine
Intelligence. 4 (3): 211–221. doi:10.1038/s42256-022-
00463-x. ISSN 2522-5839. S2CID 252081559.
117. Goertzel & Pennachin 2006.
118. (Kurzweil 2005, p. 260)
119. Goertzel 2007.
120. Grace, Katja (2016). "Error in Armstrong and Sotala
2012". AI Impacts (blog). Archived from the original on
4 December 2020. Retrieved 24 August 2020.
121. Butz, Martin V. (1 March 2021). "Towards Strong AI". KI – Künstliche Intelligenz. 35 (1): 91–101. doi:10.1007/s13218-021-00705-x. ISSN 1610-1987. S2CID 256065190.
122. Liu, Feng; Shi, Yong; Liu, Ying (2017). "Intelligence
Quotient and Intelligence Grade of Artificial
Intelligence". Annals of Data Science. 4 (2): 179–
191. arXiv:1709.10242. doi:10.1007/s40745-017-
0109-0. S2CID 37900130.
123. Brien, Jörn (5 October 2017). "Google-KI doppelt so
schlau wie Siri" [Google AI is twice as smart as Siri –
but a six-year-old beats both] (in
German). Archived from the original on 3 January
2019. Retrieved 2 January 2019.
124.