
NEURAL NETWORKS

Prepared By:
Ahmed Salam
Mustafa Jassem

INTRODUCTION
Artificial neural networks (ANNs), usually simply called neural
networks (NNs), are computing systems inspired by
the biological neural networks that constitute animal brains.
An ANN is based on a collection of connected units or nodes
called artificial neurons, which loosely model the neurons in a
biological brain. Each connection, like the synapses in a biological
brain, can transmit a signal to other neurons. An artificial
neuron receives a signal, processes it, and can then signal the
neurons connected to it. The "signal" at a connection is a real
number, and
the output of each neuron is computed by some non-linear
function of the sum of its inputs. The connections are
called edges. Neurons and edges typically have a weight that
adjusts as learning proceeds. The weight increases or decreases
the strength of the signal at a connection. Neurons may have a
threshold such that a signal is sent only if the aggregate signal
crosses that threshold. Typically, neurons are aggregated into
layers. Different layers may perform different transformations on
their inputs. Signals travel from the first layer (the input layer), to
the last layer (the output layer), possibly after traversing the layers
multiple times.
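
To make this concrete, here is a minimal sketch of such a neuron
in Python; the variable names, weights, and the choice of a
sigmoid as the non-linear function are illustrative assumptions,
not taken from any particular library:

import numpy as np

def neuron(inputs, weights, bias):
    # One artificial neuron: a non-linear function (a sigmoid)
    # applied to the weighted sum of the incoming signals.
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Example: a neuron receiving three real-valued signals.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.6, -0.2])
print(neuron(x, w, bias=0.1))  # a real number sent onward

The weights here play the role described above: raising or
lowering a weight strengthens or weakens the signal arriving over
that connection.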

TRAINING
Neural networks learn (or are trained) by processing examples,
each of which contains a known "input" and "result," forming
probability-weighted associations between the two, which are
stored within the data structure of the net itself. The training of a
neural network from a given example is usually conducted by
determining the difference between the processed output of the
network (often a prediction) and a target output. This difference is
the error. The network then adjusts its weighted associations
according to a learning rule and using this error value. Successive
adjustments will cause the neural network to produce output
which is increasingly similar to the target output. After a sufficient
number of these adjustments the training can be terminated
based upon certain criteria. This is known as supervised learning.
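
The error-driven adjustment just described can be sketched as a
short supervised-learning loop. The example below trains a single
linear neuron with the delta rule; the data, learning rate, and
number of epochs are made-up illustrative choices:

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # known inputs
t = np.array([1.0, 1.0, 2.0])                       # target outputs
w = np.zeros(2)   # the weighted associations stored in the net
lr = 0.1          # learning rate

for epoch in range(100):
    for x_i, t_i in zip(X, t):
        y = w @ x_i            # processed output (a prediction)
        error = t_i - y        # difference from the target output
        w += lr * error * x_i  # learning rule: adjust by the error

print(w)  # approaches [1. 1.], so outputs approach the targets

Each pass reduces the error, so the network's output grows
increasingly similar to the target output, as described above.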
Such systems "learn" to perform tasks by considering examples,
generally without being programmed with task-specific rules. For
example, in image recognition, they might learn to identify images
that contain cats by analyzing example images that have been
manually labeled as "cat" or "no cat" and using the results to
identify cats in other images. They do this without any prior
knowledge of cats, for example, that they have fur, tails, whiskers,
and cat-like faces. Instead, they automatically generate identifying
characteristics from the examples that they process.

HISTORY
Warren McCulloch and Walter Pitts (1943) opened the subject by
creating a computational model for neural networks. In the late
1940s, D. O. Hebb created a learning hypothesis based on the
mechanism of neural plasticity that became known as Hebbian
learning. Farley and Wesley A. Clark (1954) first used
computational machines, then called "calculators", to simulate a
Hebbian network. Rosenblatt (1958) created the perceptron. The
first functional networks with many layers were published
by Ivakhnenko and Lapa in 1965, as the Group Method of Data
Handling. The basics of continuous backpropagation were derived
in the context of control theory by Kelley in 1960 and by Bryson in
1961, using principles of dynamic programming. Thereafter
research stagnated following Minsky and Papert (1969), who
discovered that basic perceptrons were incapable of processing
the exclusive-or circuit and that computers lacked sufficient power
to process useful neural networks.
In 1970, Seppo Linnainmaa published the general method
for automatic differentiation (AD) of discrete connected networks
of nested differentiable functions. In 1973, Dreyfus used
backpropagation to adapt parameters of controllers in proportion
to error gradients. Werbos's (1975) backpropagation algorithm
enabled practical training of multi-layer networks. In 1982, he
applied Linnainmaa's AD method to neural networks in the way
that became widely used.
The development of metal–oxide–semiconductor (MOS) very-
large-scale integration (VLSI), in the form of complementary
MOS (CMOS) technology, enabled increasing MOS transistor
counts in digital electronics. This provided more processing power
for the development of practical artificial neural networks in the
1980s.
In 1986 Rumelhart, Hinton and Williams showed that
backpropagation learned interesting internal representations of
words as feature vectors when trained to predict the next word in
a sequence.
In 1992, max-pooling was introduced to improve shift invariance
and tolerance to deformation, aiding 3D object recognition.
Schmidhuber adopted a multi-level hierarchy of
networks (1992) pre-trained one level at a time by unsupervised
learning and fine-tuned by backpropagation.

NETWORK DESIGN

Neural architecture search (NAS) uses machine learning to
automate ANN design. Various approaches to NAS have designed
networks that compare well with hand-designed systems. The basic
search algorithm is to propose a candidate model, evaluate it
against a dataset, and use the results as feedback to teach the
NAS network. Available systems include AutoML and AutoKeras.
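
That propose-evaluate-feedback loop can be sketched as a simple
random search. A real NAS system would replace the random
proposals with a learned controller, and the search space and
scoring function below are entirely illustrative:

import random

# Illustrative candidate space: layer count and units per layer.
SEARCH_SPACE = {"layers": [1, 2, 3], "units": [16, 32, 64]}

def evaluate(candidate):
    # Stand-in for training the candidate model and measuring its
    # accuracy on a dataset; here it just returns a random score.
    return random.random()

best, best_score = None, float("-inf")
for trial in range(20):
    # Propose a candidate model from the search space.
    candidate = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    score = evaluate(candidate)   # evaluate it against a dataset
    if score > best_score:        # feedback: keep the best so far
        best, best_score = candidate, score

print(best, best_score)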
Design issues include deciding the number, type, and
connectedness of network layers, as well as the size of each
layer and the connection type (full, pooling, etc.).
Hyperparameters must also be defined as part of the design (they
are not learned), governing matters such as the number of neurons
in each layer, the learning rate, and, for CNNs, the step size,
stride, depth, receptive field, and padding.
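
For illustration, such hyperparameters are often gathered into a
configuration that is fixed before training begins; the names and
values below are hypothetical:

# Chosen by the designer before training; not learned like weights.
hyperparams = {
    "neurons_per_layer": [128, 64],  # size of each layer
    "learning_rate": 0.01,
    "stride": 2,              # CNN-specific
    "receptive_field": 3,     # e.g. a 3x3 filter
    "padding": "same",        # CNN-specific
}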

APPLICATIONS
Because of their ability to reproduce and model nonlinear
processes, artificial neural networks have found applications in
many disciplines. Application areas include system
identification and control (vehicle control, trajectory
prediction, process control, natural resource
management), quantum chemistry, general game playing, pattern
recognition (radar systems, face identification, signal
classification, 3D reconstruction, object recognition and more),
sequence recognition (gesture, speech, handwritten and printed
text recognition), medical diagnosis, finance (e.g. automated
trading systems), data mining, visualization, machine translation,
social network filtering and e-mail spam filtering. ANNs have been
used to diagnose several types of cancers and to distinguish
highly invasive cancer cell lines from less invasive lines using only
cell shape information.

ANNs have been used to accelerate reliability analysis of
infrastructures subject to natural disasters and to predict
foundation settlements. ANNs have also been used for building
black-box models in geoscience: hydrology, ocean modelling
and coastal engineering, and geomorphology. ANNs have been
employed in cybersecurity, with the objective to discriminate
between legitimate activities and malicious ones. For example,
machine learning has been used for classifying Android
malware, for identifying domains belonging to threat actors and
for detecting URLs posing a security risk. Research is underway
on ANN systems designed for penetration testing, for detecting
botnets, credit card fraud, and network intrusions.
ANNs have been proposed as a tool to solve partial differential
equations in physics and to simulate the properties of many-body
open quantum systems.
In brain research, ANNs have been used to study the short-term
behavior of individual neurons, the dynamics of neural circuitry
arising from interactions between individual neurons, and how
behavior can arise from abstract neural modules that represent
complete subsystems. Studies have considered the long- and
short-term plasticity of neural systems and their relation to
learning and memory, from the individual neuron to the system
level.

