
IN3050/IN4050 - Introduction to Artificial Intelligence and Machine Learning
Lecture 14: The History and Philosophy of Artificial Intelligence
Jan Tore Lønning
Source: Wikipedia
What is AI?

History
• What are AI researchers doing?
• And what have they done?

Philosophy
• What is intelligence?
• Relationship between
  • Artificial intelligence
  • Natural (human) intelligence

3
Program
1. The birth of AI
• (1956-1970)
2. The Turing test and a little more philosophy related to AI
3. Approaches to AI
• (→ 1990)
4. More recent trends
• (1990 →)

The first two videos were recorded in 2020


Some cross-references may be inaccurate

4
14.1 The birth of AI
IN3050/IN4050 Introduction to Artificial Intelligence
and Machine Learning

5
The birth of (the term) Artificial Intelligence
• The Dartmouth Summer Research Project, summer of 1956
• Arranged by:
  John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon
• Other participants:
  Herbert Simon, Allen Newell, Arthur Samuel, John Nash
• From the proposal:
  • "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
  • "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
  • "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." (2 months, 10 men)

6
John McCarthy, 1927-2011
• Invented around 1958
• LISP (programming language)
• Garbage collection
• Time sharing
• MIT 1956-1962
• Stanford 1962 →
• Established Stanford AI Lab, 1963
• Turing Award, 1971 for his AI work

7
Marvin Minsky, 1927-2016
• MIT 1958 →
• Founded MIT's AI Lab together with John McCarthy, 1958
• Inventions:
  • Hardware: head-mounted graphical display, etc.
  • w/ Papert: the LOGO programming language
• Perceptrons:
  • PhD thesis on perceptrons, 1954
  • Perceptrons (w/ Seymour Papert), 1969
• Logically oriented work
  • e.g. on Turing machines
• The Society of Mind
• Turing Award, 1969
8
Allen Newell (1927-1992)
Herb Simon (1916-2001)
• Logic Theorist, 1956
  • Working program, demonstrated at Dartmouth
  • Proved logical theorems from Principia Mathematica
• General Problem Solver, 1957
  • A program for solving tasks in general
• Physical Symbol System Hypothesis, 1976 (1963?)
  • Theory about AI
• Turing Award, 1976
• Simon, Nobel Prize in economics, 1978

9
The strongholds of AI in the USA
• MIT
• CMU
• Stanford

• They remained the strongholds for AI for 50 years

• What are today's AI strongholds?
  • Facebook, Google, Amazon, Baidu, …
General Problem Solver (GPS, 1957)
• Means-ends analysis:
  1. Current situation
  2. Goal to achieve
  3. A set of available operations
• Task:
  • Put together a sequence of operations leading from the current situation to the goal.
  • Some conditions must be fulfilled before an operation can apply.
  • Earlier operations must establish these conditions.
  • This establishes sub-goals.

11
GPS example: the towers of Hanoi
• Then ring 8 must be at the
bottom of C
• Preconditions:
• C must be free
• Goal: • Nothing on top of 8
• Move the stack from A to C • Hence: 1-7 must be on B
• Rules: • New sub-goal:
• Move one ring at a time • 1-7 on B
• A ring cannot be placed on top of • etc.
a smaller one
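The sub-goal decomposition above is exactly what the classic recursive solution to the Towers of Hanoi computes: to move ring n, first establish the precondition that the smaller rings are out of the way. A minimal Python sketch (not GPS itself, just the recursion the sub-goals describe):

```python
def hanoi(n, src, dst, aux, moves):
    """Move rings 1..n from src to dst, using aux as spare."""
    if n == 0:
        return
    # Sub-goal: rings 1..n-1 must be out of the way (on aux)
    hanoi(n - 1, src, aux, dst, moves)
    # Precondition met: ring n can now go to the bottom of dst
    moves.append((n, src, dst))
    # Remaining sub-goal: bring rings 1..n-1 from aux onto dst
    hanoi(n - 1, aux, dst, src, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))   # 7 moves for 3 rings (2**n - 1)
print(moves[0])     # (1, 'A', 'C')
```

For 8 rings, as on the slide, the same recursion produces 255 moves.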

12
GPS - evaluation
• Other tasks solved by GPS:
  • Logical proofs
  • Missionaries and cannibals
• Two types of rules:
  • General rules
  • Task-specific rules
• Compared to humans:
  • Modelled on human problem solving
  • Results evaluated against human performance
• Solves some problems
• The general rules can reduce the search space compared to the domain-specific rules
• Restricted applicability:
  • Sometimes stuck in local optima
  • Combinatorial explosion, cf. chess
• Project closed
13
Other voices

Symbolic AI/Rule-based
• Properties of GPS:
  • Symbolic
  • Rule-based
  • Based on/related to logic and proofs
  • Search
• Typical for the approaches of the founding fathers

Other early pioneers
• Arthur Samuel (1901-1990)
  • (attended the Dartmouth workshop)
  • checkers-playing program
  • coined the term "machine learning", 1959
• William Grey Walter (1910-1977)
  • turtles

14
Samuel's checkers-playing program
• Based on search
• (1952) Started by giving rewards to positions based on recorded earlier games:
  • based on Christopher Strachey's 1951 program
  • the first AI program, according to Jack Copeland
• (1955) Let the program play against itself and humans, and improved the reward function
• Coined the term "machine learning", 1959
• The program beat a local champion, 1959,
  • but was beaten by stronger programs in the 1970s

15
Grey Walter (1910-1977)
• Worked mainly in Great Britain, but also in the Soviet Union and the USA
• Physiologist:
  • Early use of EEG, several discoveries
• Turtles, 1951:
  • Simple robots
• Demo

17
Turtles, 1951
• Three wheels, two engines
• Two sensors:
  • Touch – avoid collisions
  • Light – attracted by light, but not too sharp
• When the battery got weak, it became more sensitive to light
  • A strong light in its home
  • It returned home
• Goal-oriented behavior?
• Properties: brain-inspired, simple, analogue
• Compare to modern lawn mowers and vacuum cleaners
18
Artificial Intelligence from 1956 → 1970 (and beyond)

Methods
• "An anarchy of methods"
  • according to Melanie Mitchell
• Mostly:
  • Symbolic
  • Rule-based
  • Combined with search
  • Logic
• But also, e.g.
  • Perceptron

Tasks
• Problem solving
  • Search, Game playing
• Knowledge and Reasoning
  • Logic, Theorem proving, Knowledge representation
• Planning
• Learning
• Natural language understanding
• Perception
• Motion and manipulation

19
History of neural networks
Three main epochs:

1. The beginning (→ 1969)

2. Backpropagation (1986-)

3. Deep learning (2011→ )

• Marsland, originally 2009, lacks (3)

20
Minsky & Papert, Perceptrons (1969):
• Showed:
  • Networks without hidden layers can only solve linearly separable problems
  • Many simple problems, like logical XOR, are not linearly separable
• Speculated:
  • Networks with hidden layers are probably impossible to train
• Effect:
  • Halted the development of neural networks
• Why such an effect?
  • Minsky's position
  • A growing skepticism towards AI (funding)
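The linear-separability limitation is easy to demonstrate. Below is a minimal sketch of a Rosenblatt-style perceptron (the training loop, epoch count and learning rate are illustrative choices, not from the book), trained on AND, which is linearly separable, and on XOR, which is not:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Plain perceptron: weights + bias, step activation, error-driven updates."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0
            w += lr * (ti - out) * xi
            b += lr * (ti - out)
    return w, b

def accuracy(X, y, w, b):
    preds = [(1 if xi @ w + b > 0 else 0) for xi in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = [0, 0, 0, 1]   # linearly separable
xor_y = [0, 1, 1, 0]   # not linearly separable

w, b = train_perceptron(X, and_y)
print(accuracy(X, and_y, w, b))   # 1.0

w, b = train_perceptron(X, xor_y)
print(accuracy(X, xor_y, w, b))   # < 1.0 – no single-layer net solves XOR
```

However many epochs you add, XOR accuracy never reaches 1.0: no line separates {(0,1), (1,0)} from {(0,0), (1,1)}.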
21
The (first) AI winter, the 1970s
• ALPAC report on Machine translation and funding, 1966, USA
• "The Perceptron", 1969
• Lighthill report, on AI funding in UK, 1974
• see https://en.wikipedia.org/wiki/AI_winter

22
https://commons.wikimedia.org/wiki/File:Ivan_Konstantinovich_Aivazovsky_-_Winter_in_Ukraine,_1874.jpg
Overselling
https://en.wikipedia.org/wiki/History_of_artificial_intelligence

• 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."[69]
• 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."[70]
• 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[71]
• 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."[72]
23
14.2 The Turing Test
- and a little more philosophy related to AI
IN3050/IN4050 Introduction to Artificial Intelligence
and Machine Learning

24
Alan Turing (1912-1954)
• 1936: The Turing machine
• the theoretical foundation of the computer
• 1939-1945: Codebreaking
• cf. "The Imitation Game"
• 1945→: Developed computers
• 1950: The Turing test
• 1952-1954: Mathematical biology
• The Turing Award is named in honor of Turing

25
The Turing test
A. Try to fool the interrogator into thinking it is a human
B. Try to help the interrogator see that he/she is a human
C. The interrogator should guess who is human and who is machine

26
The Turing test

Turing's presentation
• Can machines think?
  • "…I shall replace the question by another, which is closely related to it…"
• Game 1: A: man, B: woman, C: who is what?
• Game 2: A: machine, B: human, C: who is what?
• "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?"

Turing's view
• In 50 years (2000):
  • The machine's memory: 10^9 (= 1 GigaB)
  • Less than 70% chance of correct identification
• But he says this is a guess
  • (other guesses in other articles/interviews)

27
Evaluating the Turing test
1. Is it adequate?
   • Will we say that a machine that passes the test can think?
2. Can a computer pass the test (in the future)?
3. Is it a goal that a machine passes the test?

• The test has been much discussed
• Turing anticipated 9 objections in his original paper, which he tried to rebut

28
(4.) The argument from Consciousness

Jefferson, according to Turing
• "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel […] pleasure at its successes, grief when its valves fuse, […] be angry or depressed when it cannot get what it wants."

Turing's answer
• According to the most extreme form of this view, the only way by which one could be sure that a machine can think is to be the machine…
• To get convinced, e.g., that somebody else composes because of felt emotions, we would interrogate them, as in the imitation game
29
Is the Turing test adequate?

Too strong
• Animals are intelligent, but they don't pass the test.
• The machine does not only have to be intelligent, it must also mimic a human
• Answers:
  • The test only shows a sufficient condition, not a necessary one

Inadequate
• Simulation is not the real thing, e.g. a man could simulate being a woman
• We ascribe consciousness to other humans because we know they are made in the same ways we are.
30
Chat bots
• Eliza (Weizenbaum, 1966)
• Planned architecture:
  • An overarching program for dialogue management
  • Domain-specific modules for various applications
• Doctor, the first example of an application
  • A psychotherapist (little domain knowledge necessary)
• Eliza/Doctor, principles:
  • look for keywords in the questions
  • transform input to answers, e.g. "Why do you …"
  • exchange "me" with "you"
  • vary the answers
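The principles listed above can be sketched in a few lines. The patterns, response templates and pronoun swaps below are invented for illustration, not Weizenbaum's actual DOCTOR script:

```python
import random
import re

# ELIZA-style rules: keyword pattern -> response templates.
# "{0}" is filled with the rest of the user's sentence, with
# pronouns exchanged ("me" -> "you", etc.).
RULES = [
    (r"i need (.*)",  ["Why do you need {0}?",
                       "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["Why do you say you are {0}?",
                       "How long have you been {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
]
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(text):
    # Exchange first-person words with second-person ones
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def respond(sentence):
    for pattern, answers in RULES:
        m = re.match(pattern, sentence.lower())
        if m:
            # Vary the answers by picking a template at random
            return random.choice(answers).format(reflect(m.group(1)))
    return "Please tell me more."   # fallback when no keyword matches

print(respond("I need a holiday"))
print(respond("I am sad about my exam"))
```

Note how little knowledge is involved: the second input is answered with "… sad about your exam?" purely by pattern matching and pronoun swapping, which is exactly why the reactions to ELIZA (next slide) worried Weizenbaum.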

31
Joseph Weizenbaum (1923-2008)
• Scared by the reactions to ELIZA:
  1. Some psychiatrists believed DOCTOR (ELIZA) could be used in therapy
  2. Users developed a personal relationship to ELIZA
  3. Some believed this was a model for successful NLP
• Weizenbaum became skeptical towards AI
• Computer Power and Human Reason, 1976
  • Great belief in humans, little belief in machines
  • cf. similar views by Stephen Hawking (1942-2018) and others around 2015
https://en.wikipedia.org/wiki/Joseph_Weizenbaum
32
Has the Turing test been passed?
• Now and then there are stories in the news that the Turing test has been passed.
• It is normally a combination of
  • a program that imitates a mad, nasty or uncooperative person
  • judges that don't ask proper questions
• Observe that according to the rules, the human should be cooperative.

• There have been various attempts at tightening the rules of the test with respect to interrogators, etc.

33
Questions to ask
• Try to ask: Where is the New York Times published?
• Or try the Winograd schemas
  • (Terry Winograd, 1946 →, AI/NLP pioneer turned sceptic)
  https://en.wikipedia.org/wiki/Terry_Winograd

The city councilmen refused the demonstrators a permit because they feared violence.
Q: Who feared violence?

The city councilmen refused the demonstrators a permit because they advocated violence.
Q: Who advocated violence?

34
Philosophy of AI
A. Can a machine pass the Turing test?
B. Can a machine act intelligently?
C. Can it solve any problem that a person
would solve by thinking?
D. Can a machine have a mind and
consciousness?
E. Are human intelligence and machine
intelligence the same?
cf. https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

35
Intelligent machines?

B. Can a machine act intelligently?
• If "A machine is considered intelligent if it can perform tasks which are considered intelligent when carried out by a human being." (IN3050, lect. 1)
• then yes

C. Can it solve any problem that a person would solve by thinking?
• According to the invitation to the Dartmouth conference, yes.
• We aren't there yet!

36
E. Human intelligence = Machine intelligence?
• Physical symbol system hypothesis:
  • "A physical symbol system has the necessary and sufficient means for general intelligent action." (Newell and Simon, 1976)

A physical symbol system, e.g.:
• Computer
• Chess
• Writing on a whiteboard

• Implies that machines are intelligent (cf. Dartmouth)
• Interpreted as: only symbol systems are intelligent, i.e. human intelligence is equivalent to executing computer programs.
  • The computational theory of mind
37
D. Can a machine have a consciousness?
• A large philosophical discussion related to this in the 1980s-1990s
• To a large degree arguments against the Turing test and the PSSH:
  • cf. the consciousness argument above
• Chinese room argument (Searle):
  • Argued that strong AI means not only doing the same as a human, but also doing it the same way, which includes having a mind and consciousness that machines don't have.

38
Connectionism
• After the publication of the backpropagation paper in 1986,
  • some psychologists and philosophers argued for modelling the human mind in terms of neural networks
  • Connectionism
  • Sub-symbolic computing
• Argues that this avoids some of the criticism directed towards the computational theory of mind

39
Main approaches to AI

Symbolic, Rule-based
• Logic, deduction
• Explicit coding of knowledge as formulas or rules
• Dominated AI books until the end of the last century
• Compatible philosophy:
  • Computational theory of mind (PSSH)

Machine learning, Neural nets
• Induction rather than deduction
• Adapt to the environment
• Main focus in this course
• Dominates AI today
• Compatible philosophy:
  • Connectionism
14.3 Traditional approaches in AI
IN3050/IN4050 Introduction to Artificial Intelligence
and Machine Learning

41
Artificial Intelligence from 1956 → 1970 (and beyond)

Methods
• "An anarchy of methods"
  • according to Melanie Mitchell
• Mostly:
  • Symbolic
  • Rule-based
  • Combined with search
  • Logic
• But also, e.g.
  • Perceptron

Tasks
• Problem solving
  • Search, Game playing
• Knowledge and Reasoning
  • Logic, Theorem proving, Knowledge representation
• Planning
• Learning
• Natural language understanding
• Perception
• Motion and manipulation
42
Logic
• Aristotle, 384-322 BC:
  • All humans are mortal
  • Socrates is a human
  • Socrates is mortal
• Modern symbols:
  • ∀𝑥(ℎ𝑢𝑚𝑎𝑛 𝑥 → 𝑚𝑜𝑟𝑡𝑎𝑙 𝑥 )
• Rationality, intelligence
  • Correct reasoning
image: Wikipedia
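The syllogism above can be mechanized with a few lines of forward chaining. The representation below is a toy sketch for single-premise rules, not a general theorem prover:

```python
# Facts are (predicate, argument) pairs; a rule says:
# if any premise predicate holds of x, conclude conclusion(x).
# This encodes human(Socrates) and ∀x(human(x) → mortal(x)).
facts = {("human", "Socrates")}
rules = [(("human",), "mortal")]   # if human(x) then mortal(x)

# Forward chaining: apply rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        for pred, arg in list(facts):
            if pred in premises and (conclusion, arg) not in facts:
                facts.add((conclusion, arg))
                changed = True

print(("mortal", "Socrates") in facts)   # True
```

This is the sense in which "a computation can be considered a logical proof": the derived fact mortal(Socrates) is exactly Aristotle's conclusion.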

43
Logic and computation
• The computer is based on logic:
  • A computation can be considered a logical proof
  • Turing machines
  • Cf. also McCulloch & Pitts' neural model of logic
• E.g., LISP
  • Based on logic (expressed in lambda calculus)
  • Symbolic computing, in contrast to numeric computing
• Hence logic seems to be the perfect link between:
  • Human intelligence (beyond numerical calculations)
  • Computers

44
Challenges for the logical approach 1

A framework without content
• Kim bought a rose
• Kim bought a flower
  • Not a valid inference
  • One needs an additional axiom:
  • ∀𝑥(𝑅𝑜𝑠𝑒 𝑥 → 𝐹𝑙𝑜𝑤𝑒𝑟 𝑥 )

Knowledge representation
• An explosion of facts that must be represented:
  • AI applications focus on limited domains
  • Knowledge representation is a subfield of AI
• Other approaches than logic:
  • Semantic nets
  • Ontologies
• While logic is neat, the represented knowledge can be ad hoc

45
Challenges for the logical approach 2

Logic provides proofs
• To see that a conclusion follows from premises, logic provides proofs and proof-procedures

Search problem
• To find a proof, however, is a challenge
• An enormous search space

46
Search
• Initially AI was to a large degree focusing on search
  • (discrete structures)
• Common part of various problems:
  • Logical proofs
  • Newell and Simon's GPS
  • Game playing, like Checkers
  • Travelling salesman
• First principles:
  • Ways to search a large search space efficiently
• The search space is often still too large:
  • Task-specific heuristics:
  • E.g., Chess
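How a heuristic makes a large search space manageable can be illustrated with A* best-first search on a toy route-finding problem. The graph, edge costs and heuristic values below are invented for the example:

```python
import heapq

def astar(start, goal, neighbours, h):
    """A*: always expand the node with the lowest
    cost-so-far + heuristic estimate of the remaining cost.
    A good heuristic lets us ignore most of the search space."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue            # already reached more cheaply
        best_g[node] = g
        for nxt, cost in neighbours[node]:
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy graph with edge costs; h approximates the distance to the goal D.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 2)],
    "D": [],
}
h = {"A": 4, "B": 3, "C": 2, "D": 0}.get

print(astar("A", "D", graph, h))   # (5, ['A', 'B', 'C', 'D'])
```

With h = 0 everywhere this degenerates into uninformed (uniform-cost) search; the heuristic is what steers the search and, on chess-sized spaces, what makes it feasible at all.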

47
General symbolic approaches
• Symbolic approaches could use other representations than logic:
  • cf. GPS, checkers
• One approach: base the system on observing human behavior
  • (in contrast to logic)
  • E.g., Newell and Simon
• Or whatever works
  • Minsky (eventually) became anti-logic, "scruffy"
• Many projects were impressive in the small, e.g. "Blocks world"
  • Ad-hoc rules and blocks worlds:
  • Impressive in the small
  • Problems with scalability

48
The 1980s: a new spring
• Expert systems
• Logic and the 5th generation program
• Revival of neural nets, 1986

49
Expert systems
• The system tries to reproduce human expertise
• In its simplest form:
  • A set of if-then rules:
  • If red dots, check for fever
• The system is built by interviewing human experts
• A system made by interviewing medical experts was reported to perform between a GP and an expert
• Expert systems grew into a commercial success and were adopted by many companies
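In its simplest form, such a rule base is a set of condition-action pairs checked against the current findings. The rules below (including "suspect measles") are invented for illustration, not from any real medical system:

```python
# Toy expert-system rules: a set of conditions -> an action.
rules = [
    ({"red dots"},          "check for fever"),
    ({"red dots", "fever"}, "suspect measles"),
]

def advise(findings):
    """Fire every rule whose conditions are all present in the findings."""
    return [action for conditions, action in rules
            if conditions <= findings]           # subset test

print(advise({"red dots"}))            # ['check for fever']
print(advise({"red dots", "fever"}))   # ['check for fever', 'suspect measles']
```

Real systems of the era (e.g. MYCIN-style shells) added chaining between rules and certainty factors, but the knowledge still came from interviewing experts, rule by rule.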
50
Side remark: Lisp machines, the 1980s
• The decade when PCs and personal workstations became common
• Dedicated machines for AI: LISP machines
  • Symbolics
  • Xerox
  • Texas Instruments
  • among others
• LISP as operating system
https://en.wikipedia.org/wiki/File:Symbolics3640_Modified.JPG
51
Logic
• Increased interest in logical ("neat") approaches
• Partly because of the limitations of the ad hoc approaches
• Partly because of new developments:
  • The resolution procedure (Robinson, 1965)
  • The programming language PROLOG, based on the procedure:
  • Computation as proofs

52
Fifth Generation Program (1982-1992)
• Large Japanese government-funded research program
• Goals/approaches:
  • Hardware: parallel
  • Software: logic-based

• Led to AI funding also in the rest of the world

• Failure?
  • In particular, the AI promises
• Ahead of its time?
53
NN.2: Backpropagation (1986-)
• 1986: Rumelhart, Hinton, Williams (re)invented backpropagation
• An immediate, enormous interest from researchers
• But the practical results weren't impressive, and the interest diminished

54
Around 1990
• The commercial interest in AI diminished
  • Expert systems were partly considered mainstream applications of computers
  • The market for AI workstations collapsed
  • Research funding shrunk
• The period 1987-1993 is sometimes called the second AI winter

55
14.4 More recent trends
IN3050/IN4050 Introduction to Artificial Intelligence
and Machine Learning

56
Trends since 1990
• Nouvelle AI / Behavior-based robotics
• AI becomes an empirical science
• Deep Learning

57
What is Artificial Intelligence?
"A machine is considered intelligent if it can perform tasks which are considered intelligent when carried out by a human being." (Definition?)

• Hence AI has focused on what is typical for only humans:
  • Language
  • Mathematics

Intelligence?

Higher-level intelligence
• Playing chess
• First-year university mathematics
• Machines are good at this

Lower-level intelligence (?)
• Face recognition
• Moving around
• Animals are good at this
• Machines are not

Difficult things are easy – Easy things are difficult

59
Rodney Brooks
• Robot researcher
• MIT: 1984-2007
• Director of the AI Lab, MIT
• "Intelligence without representation", 1987/1991
• "Intelligence without reason", 1991
• "Elephants don't play chess", 1990

60
Critical towards traditional AI
• Studied isolated problems: chess, language, etc.
  • How to put them together?
• Split the AI part off from the rest
  • No such separation in the real world
• Controlled, restricted environments
  • Instead of the real world
• The sense-model-plan-act framework
  • E.g., Shakey, ca. 1970 (Stanford)
  • Too much emphasis on inner representation
  • Preplanned tasks
• In general, AI is too much influenced by the von Neumann architecture: a CPU

61
Brooks' alternative proposal: Behavior-based Robotics
• Inspired by biology and evolution:
  • vertebrates for 550 mill. years
  • mammals for 250 mill. years
  • humans for 1.5 mill. years
• Walter's turtles
• Animal-inspired robots acting in the real world
• Decomposed by activity:
  • One system for avoiding collisions
  • Another system for goal-directedness
• No central representation
  • "The world is its own best model"

"This suggests that problem solving behavior, language, expert knowledge and application, and reason, are all pretty simple once the essence of being and reacting are available." (Brooks)
62
Commercialization
• Brooks and colleagues commercialized the technology (iRobot)
  • Vacuum cleaners
  • Military robots
• These ideas are also essential for the development of, e.g., self-driving cars

63
Trends since 1990
• Nouvelle AI / Behavior-based robotics
• AI becomes an empirical science
• Deep Learning

64
Example: Natural Language Processing

Laboratory - traditional
• Write neat rules which can handle a limited fragment of, say, English very well
  • (High precision)
• Low applicability (recall)
• Example:
  • All humans are mortal
  • Socrates is a human
  • Socrates is mortal

Out in the world
• Consider texts in the real world:
  • What can you do with them?
• Inspired by speech recognition
• High applicability, but lower precision
• Example:
  • Kim bought a rose
  • Kim bought a flower

65
Development
• Real-world data
  • A bottom-up approach, compared to the top-down approach which was common in AI/NLP
  • Induction rather than deduction
• Probabilities:
  • What is the most probable translation of this sentence?
• Which led to: numerical methods
  • No longer only symbolic computing

66
Development ctd.
• Machine learning
• Rigid evaluation
  • Took over methods from the empirical sciences, the experimental method
  • Shared tasks
• Large amounts of available data
  • Data science
• Stopped calling it AI!
  • (If it works, it is no longer AI)
  • The AI effect

67
Trends since 1990
• Nouvelle AI / Behavior-based robotics
• AI becomes an empirical science
• Deep Learning

69
Deep learning - Neural nets
• The third large change to AI since 1990 is the deep learning revolution, since 2012
• The revival of neural networks
• This made the term AI popular again
• Observe also that nothing grows to infinity; the peaks were reached a few years ago.

70
An additional observation
• The philosophical discussions:
  • Can machines think?
  • Are machines intelligent?
  • are less active today than in the last century
• But the ethical discussions concerning AI, e.g.
  • bias and fairness
  • intrusion into our lives
  • have become more relevant now that AI systems are everywhere.

71
Main approaches to AI

Symbolic, Rule-based
• Logic, deduction
• Explicit coding of knowledge as formulas or rules
• Dominated AI books until the end of the last century
• Compatible philosophy:
  • Computational theory of mind (PSSH)

Machine learning, Neural nets
• Induction rather than deduction
• Adapt to the environment
• Main focus in this course
• Dominates AI today
• Compatible philosophy:
  • Connectionism
Do we still need symbolic, rule-based AI?
• Neural nets are good at many tasks, but not all
• Neural nets are often black boxes; they give predictions but no explanations
  • There is a demand for explainable AI
• And don't forget: neural nets did not get much attention for 15 years.
  • Rule-based, symbolic AI may strike back.

73
