
Tic-tac-toe

Tic-tac-toe, also spelled tick tack toe, or noughts and crosses/Xs and Os as it is known in
the UK, Australia and New Zealand, is a pencil-and-paper game for two players, X and O,
who take turns marking the spaces in a 3×3 grid. The X player usually goes first. The player
who succeeds in placing three respective marks in a horizontal, vertical, or diagonal row wins
the game.

The following example game is won by the first player, X:

Strategy
Optimal strategy for player X. In each grid, the shaded red X denotes the optimal move, and the
location of O's next move gives the next subgrid to examine. Note that only two sequences of moves
by O (both starting with center, top-right, left-mid) lead to a draw; the remaining sequences all
lead to wins for X.[3]

A player can play perfect tic-tac-toe (win or at least draw) by choosing the first applicable
move from the following list.[4]

1. Win: If the player has two in a row, play the third to get three in a row.
2. Block: If the opponent has two in a row, play the third to block them.
3. Fork: Create an opportunity where you can win in two ways.
4. Block opponent's fork:
o Option 1: Create two in a row to force the opponent into defending, as long as it
doesn't result in them creating a fork or winning. For example, if "X" has two
opposite corners and "O" has the center, "O" must not play a corner. (Playing a
corner in this scenario creates a fork for "X" to win.)
o Option 2: If there is a configuration where the opponent can fork, block that fork.
5. Center: Play the center.
6. Opposite corner: If the opponent is in the corner, play the opposite corner.
7. Empty corner: Play in a corner square.
8. Empty side: Play in a middle square on any of the 4 sides.
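The priority rules above can be sketched in code. The following is a minimal sketch, assuming a board represented as a list of nine cells ('X', 'O', or None) indexed row by row; the fork and fork-blocking rules (3 and 4) are omitted for brevity:

```python
# All eight winning lines on a 3x3 board, with cells indexed 0-8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]
CORNERS, SIDES, CENTER = (0, 2, 6, 8), (1, 3, 5, 7), 4
OPPOSITE = {0: 8, 2: 6, 6: 2, 8: 0}

def completing_cell(board, mark):
    """Return a cell that would give `mark` three in a row, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(mark) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None

def choose_move(board, me, opponent):
    # Rule 1 (win), then rule 2 (block): complete my line first, else theirs.
    for mark in (me, opponent):
        cell = completing_cell(board, mark)
        if cell is not None:
            return cell
    # Rule 5: play the center if it is free.
    if board[CENTER] is None:
        return CENTER
    # Rule 6: opposite corner to an opponent's corner.
    for c in CORNERS:
        if board[c] == opponent and board[OPPOSITE[c]] is None:
            return OPPOSITE[c]
    # Rules 7 and 8: any empty corner, then any empty side.
    for c in CORNERS + SIDES:
        if board[c] is None:
            return c
    return None
```

On an empty board this chooser plays the center (rule 5); given two marks in a row it completes or blocks them (rules 1 and 2).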
Depth-first search

Order in which the nodes are expanded

Class: Search algorithm
Data structure: Graph
Worst-case performance: O(|V| + |E|) for explicit graphs traversed without repetition; O(b^d) for implicit graphs with branching factor b searched to depth d
Worst-case space complexity: O(|V|) if the entire graph is traversed without repetition; O(longest path length searched) for implicit graphs without elimination of duplicate nodes

Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph data
structure. One starts at the root (selecting some node as the root in the graph case) and explores
as far as possible along each branch before backtracking.

DFS is an uninformed search that progresses by expanding the first child node of the search
tree that appears and thus going deeper and deeper until a goal node is found, or until it hits a
node that has no children. Then the search backtracks, returning to the most recent node it
hasn't finished exploring. In a non-recursive implementation, all freshly expanded nodes are
added to a stack for exploration.
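The stack-based formulation just described can be sketched as follows (the graph used below is a hypothetical example):

```python
def dfs(graph, start):
    """Iterative DFS: pop the most recently pushed node, mark it visited,
    and push its unvisited neighbours onto the stack."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbours in reverse so the first-listed child is expanded first.
        for neighbour in reversed(graph[node]):
            if neighbour not in visited:
                stack.append(neighbour)
    return order
```

For example, with `graph = {1: [2, 3], 2: [4], 3: [], 4: []}`, `dfs(graph, 1)` visits 1, 2, 4, 3: the search goes as deep as possible along the first branch (1, 2, 4) before backtracking to 3.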

Example

For the following graph:


a depth-first search starting at A, assuming that the left edges in the shown graph are chosen
before right edges, and assuming the search remembers previously-visited nodes and will not
repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D,
F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important
applications in graph theory.

Performing the same search without remembering previously visited nodes results in visiting
nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle
and never reaching C or G.

Iterative deepening prevents this loop and will reach the following nodes on the following
depths, assuming it proceeds left-to-right as above:

 0: A
 1: A (repeated), B, C, E

(Note that iterative deepening has now seen C, when a conventional depth-first search did
not.)

 2: A, B, D, F, C, G, E, F

(Note that it still sees C, but that it came later. Also note that it sees E via a different path, and
loops back to F twice.)

 3: A, B, D, F, E, C, G, E, F, B

For this graph, as more depth is added, the two cycles "ABFE" and "AEFB" will simply get
longer before the algorithm gives up and tries another branch.
Natural language processing (NLP) is a field of computer science and linguistics concerned
with the interactions between computers and human (natural) languages.[1] In theory, natural-
language processing is a very attractive method of human-computer interaction. Natural
language understanding is sometimes referred to as an AI-complete problem, because natural-
language recognition seems to require extensive knowledge about the outside world and the
ability to manipulate it.

NLP has significant overlap with the field of computational linguistics, and is often
considered a sub-field of artificial intelligence.

Modern NLP algorithms are grounded in machine learning, especially statistical machine
learning. Research into modern statistical NLP algorithms requires an understanding of a
number of disparate fields, including linguistics, computer science, and statistics.

Major tasks in NLP

The following is a list of some of the most commonly researched tasks in NLP. Note that
some of these tasks have direct real-world applications, while others more commonly serve as
subtasks that are used to aid in solving larger tasks. What distinguishes these tasks from other
potential and actual NLP tasks is not only the volume of research devoted to them but the fact
that for each one there is typically a well-defined problem setting, a standard metric for
evaluating the task, standard corpora on which the task can be evaluated, and competitions
devoted to the specific task.

 Automatic summarization: Produce a readable summary of a chunk of text. Often used to
provide summaries of text of a known type, such as articles in the financial section of a
newspaper.
 Coreference resolution: Given a sentence or larger chunk of text, determine which words
("mentions") refer to the same objects ("entities"). Anaphora resolution is a specific example
of this task, and is specifically concerned with matching up pronouns with the nouns or
names that they refer to. The more general task of coreference resolution also includes
identifying so-called "bridging relationships" involving referring expressions. For example, in a
sentence such as "He entered John's house through the front door", "the front door" is a
referring expression and the bridging relationship to be identified is the fact that the door
being referred to is the front door of John's house (rather than of some other structure that
might also be referred to).
 Discourse analysis: This rubric includes a number of related tasks. One task is identifying the
discourse structure of connected text, i.e. the nature of the discourse relationships between
sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and
classifying the speech acts in a chunk of text (e.g. yes-no question, content question,
statement, assertion, etc.).
 Machine translation: Automatically translate text from one human language to another. This
is one of the most difficult problems, and is a member of a class of problems colloquially
termed "AI-complete", i.e. requiring all of the different types of knowledge that humans
possess (grammar, semantics, facts about the real world, etc.) in order to solve properly.
 Morphological segmentation: Separate words into individual morphemes and identify the
class of the morphemes. The difficulty of this task depends greatly on the complexity of the
morphology (i.e. the structure of words) of the language being considered. English has fairly
simple morphology, especially inflectional morphology, and thus it is often possible to ignore
this task entirely and simply model all possible forms of a word (e.g. "open, opens, opened,
opening") as separate words. In languages such as Turkish, however, such an approach is not
possible, as each dictionary entry has thousands of possible word forms.
 Named entity recognition (NER): Given a stream of text, determine which items in the text
map to proper names, such as people or places, and what the type of each such name is (e.g.
person, location, organization). Note that, although capitalization can aid in recognizing
named entities in languages such as English, this information cannot aid in determining the
type of named entity, and in any case is often inaccurate or insufficient. For example, the
first word of a sentence is also capitalized, and named entities often span several words,
only some of which are capitalized. Furthermore, many other languages in non-Western
scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with
capitalization may not consistently use it to distinguish names. For example, German
capitalizes all nouns, regardless of whether they refer to names, and French and Spanish do
not capitalize names that serve as adjectives.
 Natural language generation: Convert information from computer databases into readable
human language.
 Natural language understanding: Convert chunks of text into more formal representations
such as first-order logic structures that are easier for computer programs to manipulate.
 Optical character recognition (OCR): Given an image representing printed text, determine
the corresponding text.
 Part-of-speech tagging: Given a sentence, determine the part of speech for each word.
Many common words can serve as multiple parts of speech; for example, "book" can be a
noun or a verb, and context is needed to choose between them.
 Parsing: Determine the parse tree (grammatical analysis) of a given sentence. The grammar
for natural languages is ambiguous and typical sentences have multiple possible analyses. In
fact, perhaps surprisingly, for a typical sentence there may be thousands of potential parses
(most of which will seem completely nonsensical to a human).
 Question answering: Given a human-language question, determine its answer. Typical
questions have a specific right answer (such as "What is the capital of Canada?"), but
sometimes open-ended questions are also considered (such as "What is the meaning of
life?").
 Relationship extraction: Given a chunk of text, identify the relationships among named
entities (i.e. who is the wife of whom).
 Sentence breaking (also known as sentence boundary disambiguation): Given a chunk of
text, find the sentence boundaries. Sentence boundaries are often marked by periods or
other punctuation marks, but these same characters can serve other purposes (e.g. marking
abbreviations).
 Speech recognition: Given a sound clip of a person or people speaking, determine the
textual representation of the speech.
 Speech segmentation: Given a sound clip of a person or people speaking, separate it into
words. A subtask of speech recognition and typically grouped with it.
 Topic segmentation and recognition: Given a chunk of text, separate it into segments each of
which is devoted to a topic, and identify the topic of the segment.
 Word segmentation: Separate a chunk of continuous text into separate words. For a
language like English this is fairly trivial, since words are usually separated by spaces,
but some written languages, such as Chinese and Japanese, do not mark word boundaries
in this fashion.
 Word sense disambiguation: Many words have more than one meaning; select the
meaning that makes the most sense in context.
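Several of these tasks can be approximated with simple heuristics. For instance, the sentence-breaking task described above can be sketched as: split on sentence-final punctuation followed by whitespace, unless the preceding token is a known abbreviation. The abbreviation list here is a hypothetical example; a real system would use a much larger lexicon and statistical disambiguation.

```python
import re

# Hypothetical abbreviation list for illustration only.
ABBREVIATIONS = {"Dr.", "Mr.", "Mrs.", "etc.", "e.g.", "i.e."}

def split_sentences(text):
    """Naive sentence-boundary detection: split after . ! ? followed by
    whitespace, unless the preceding token is a known abbreviation."""
    sentences, start = [], 0
    for m in re.finditer(r"[.!?](?=\s)", text):
        token = text[start:m.end()].split()[-1]  # word containing the punctuation
        if token in ABBREVIATIONS:
            continue  # e.g. the period in "Dr." does not end a sentence
        sentences.append(text[start:m.end()].strip())
        start = m.end()
    tail = text[start:].strip()
    if tail:
        sentences.append(tail)
    return sentences
```

For example, `split_sentences("Dr. Smith arrived. He knocked.")` keeps "Dr. Smith arrived." as one sentence rather than splitting after "Dr." — exactly the abbreviation ambiguity the task description mentions.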
