
Recurrent neural network

A recurrent neural network (RNN) is a class of artificial neural networks where connections between
nodes can create a cycle, allowing output from some nodes to affect subsequent input to the same nodes.
This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can
use their internal state (memory) to process variable length sequences of inputs.[1][2][3] This makes them
applicable to tasks such as unsegmented, connected handwriting recognition[4] or speech recognition.[5][6]
Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process
arbitrary sequences of inputs.[7]

The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse
response, whereas "convolutional neural network" refers to the class of finite impulse response. Both
classes of networks exhibit temporal dynamic behavior.[8] A finite impulse recurrent network is a directed
acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite
impulse recurrent network is a directed cyclic graph that can not be unrolled.

Both finite impulse and infinite impulse recurrent networks can have additional stored states, and the
storage can be under direct control by the neural network. The storage can also be replaced by another
network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred
to as gated state or gated memory, and are part of long short-term memory networks (LSTMs) and gated
recurrent units. This is also called Feedback Neural Network (FNN).

History
The Ising model (1925) by Wilhelm Lenz[9] and Ernst Ising[10][11] was an early RNN architecture that did
not learn. Shun'ichi Amari made it adaptive in 1972;[12][13] this adaptive version later became known as the
Hopfield network (1982). See also David Rumelhart's work on backpropagation in 1986.[14] In 1993, a neural
history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers
in an RNN unfolded in time.[15]

LSTM

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1997 and set
accuracy records in multiple application domains.[16]

Around 2007, LSTM started to revolutionize speech recognition, outperforming traditional models in
certain speech applications.[17] In 2009, a Connectionist Temporal Classification (CTC)-trained LSTM
network was the first RNN to win pattern recognition contests when it won several competitions in
connected handwriting recognition.[18][19] In 2014, the Chinese company Baidu used CTC-trained RNNs
to break the 2000 Switchboard Hub5'00 speech recognition benchmark[20] without using any
traditional speech processing methods.[21]

LSTM also improved large-vocabulary speech recognition[5][6] and text-to-speech synthesis[22] and was
used in Google Android.[18][23] In 2015, Google's speech recognition reportedly experienced a dramatic
performance jump of 49% through CTC-trained LSTM.[24]
LSTM broke records for improved machine translation,[25] language modeling[26] and multilingual
language processing.[27] LSTM combined with convolutional neural networks (CNNs) improved
automatic image captioning.[28]

Architectures
RNNs come in many variants.

Fully recurrent

Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons.
This is the most general neural network topology, because all other topologies can be represented by
setting some connection weights to zero to simulate the lack of connections between those neurons.

[Figure: Compressed (left) and unfolded (right) basic recurrent neural network]

The illustration may be misleading because practical neural network topologies are frequently organized
in "layers" and the drawing gives that appearance. However, what appear to be layers are, in fact,
different time steps of the same fully recurrent neural network. The left-most item in the illustration
shows the recurrent connections as the arc labeled 'v'. It is "unfolded" in time to produce the appearance
of layers.
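For illustration, the following Python/NumPy sketch unrolls a small fully recurrent network over a sequence; the matrix W_rec plays the role of the recurrent arc 'v', and the same weights are reused at every time step. The names and sizes (rnn_step, W_in, n_hidden) are illustrative assumptions, not part of any standard API.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5                                  # illustrative sizes
W_in = rng.normal(size=(n_hidden, n_in)) * 0.1         # input -> hidden weights
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1    # recurrent weights (the arc 'v')
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # one time step of the unfolded network: the same weights are reused at every step
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

h = np.zeros(n_hidden)                                 # initial state
for x_t in rng.normal(size=(7, n_in)):                 # a length-7 input sequence
    h = rnn_step(x_t, h)                               # "unfolding" in time
print(h)
```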

Elman networks and Jordan networks

An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with
the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to
these context units fixed with a weight of one.[29] At each time step, the input is fed forward and a
learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden
units in the context units (since they propagate over the connections before the learning rule is applied).
Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction
that are beyond the power of a standard multilayer perceptron.

[Figure: The Elman network]

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of
the hidden layer. The context units in a Jordan network are also referred to as the state layer. They have
a recurrent connection to themselves.[29]

Elman and Jordan networks are also known as "simple recurrent networks" (SRN).

Elman network[30]

    h_t = \sigma_h(W_h x_t + U_h h_{t-1} + b_h)
    y_t = \sigma_y(W_y h_t + b_y)

Jordan network[31]

    h_t = \sigma_h(W_h x_t + U_h y_{t-1} + b_h)
    y_t = \sigma_y(W_y h_t + b_y)

Variables and functions

    x_t : input vector
    h_t : hidden layer vector
    y_t : output vector
    W, U and b : parameter matrices and vector
    \sigma_h and \sigma_y : activation functions
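A minimal sketch of the Elman update above in Python/NumPy, assuming tanh for \sigma_h and the identity for \sigma_y (other choices are equally valid); all names here are illustrative.

```python
import numpy as np

def elman_step(x_t, h_prev, W_h, U_h, b_h, W_y, b_y):
    # h_t = sigma_h(W_h x_t + U_h h_{t-1} + b_h), here with tanh as sigma_h
    h_t = np.tanh(W_h @ x_t + U_h @ h_prev + b_h)
    # y_t = sigma_y(W_y h_t + b_y), here with the identity as sigma_y
    y_t = W_y @ h_t + b_y
    return h_t, y_t

# A Jordan step differs only in what is fed back: y_{t-1} instead of h_{t-1}.
rng = np.random.default_rng(9)
n_in, n_hid, n_out = 2, 3, 1
W_h, U_h, b_h = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid)
W_y, b_y = rng.normal(size=(n_out, n_hid)), np.zeros(n_out)
h = np.zeros(n_hid)
for x_t in rng.normal(size=(4, n_in)):
    h, y = elman_step(x_t, h, W_h, U_h, b_h, W_y, b_y)
print(y)
```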

Hopfield

The Hopfield network is an RNN in which all connections are symmetric. It requires stationary inputs and
is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to
converge. If the connections are trained using Hebbian learning, the Hopfield network can perform as a
robust content-addressable memory, resistant to connection alteration.
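The following sketch illustrates Hebbian training and content-addressable recall for a tiny Hopfield network with bipolar (+1/-1) states; it is a simplified illustration with made-up patterns, not a reference implementation.

```python
import numpy as np

def hebbian_train(patterns):
    # Hebbian rule: W = (1/n) * sum over stored patterns of outer(p, p), with zero diagonal
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # synchronous updates on bipolar states
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = hebbian_train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of the first pattern
print(recall(W, noisy))                  # likely recovers [1, -1, 1, -1, 1, -1]
```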

Bidirectional associative memory

Introduced by Bart Kosko,[32] a bidirectional associative memory (BAM) network is a variant of a Hopfield
network that stores associative data as a vector. The bi-directionality comes from passing information
through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the
associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased
network stability and relevance to real-world applications.[33]

A BAM network has two layers, either of which can be driven as an input to recall an association and
produce an output on the other layer.[34]

Echo state

The echo state network (ESN) has a sparsely connected random hidden layer. The weights of output
neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain
time series.[35] A variant for spiking neurons is known as a liquid state machine.[36]
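The sketch below illustrates the echo state idea in Python/NumPy: a fixed random reservoir is run over the input, and only the linear readout weights are fitted (here by ridge regression). The sizes, spectral-radius scaling and regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # keep the spectral radius below 1 (echo state property)

def run_reservoir(u):
    # collect reservoir states for an input sequence u (shape [T, n_in]); reservoir weights stay fixed
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# toy task: predict the next value of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)   # ridge regression: the only trained part
print(np.mean((X @ W_out - y) ** 2))           # training error
```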

Independently RNN (IndRNN)

The independently recurrent neural network (IndRNN)[37] addresses the gradient vanishing and exploding
problems of the traditional fully connected RNN. Each neuron in one layer only receives its own past state
as context information (instead of full connectivity to all other neurons in this layer), and thus neurons
are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient
vanishing and exploding, in order to keep long- or short-term memory. The cross-neuron information is
explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as
ReLU. Deep networks can be trained using skip connections.
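A minimal sketch of the IndRNN recurrence: because each neuron sees only its own past state, the recurrent parameter is a vector applied elementwise rather than a full matrix. The names, sizes and initialization are illustrative assumptions.

```python
import numpy as np

def indrnn_step(x_t, h_prev, W_in, u, b):
    # u is a per-neuron recurrent weight *vector*: each neuron sees only its own past state,
    # so the recurrence is elementwise (u * h_prev) rather than a full matrix product
    return np.maximum(0.0, W_in @ x_t + u * h_prev + b)   # ReLU nonlinearity

rng = np.random.default_rng(2)
n_in, n_hidden = 4, 8
W_in = rng.normal(size=(n_hidden, n_in)) * 0.1
u = rng.uniform(0.0, 1.0, size=n_hidden)   # keeping |u| <= 1 bounds the per-step recurrent gain
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(5, n_in)):
    h = indrnn_step(x_t, h, W_in, u, b)
print(h)
```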

Recursive

A recursive neural network[38] is created by applying the same set of weights recursively over a
differentiable graph-like structure by traversing the structure in topological order. Such networks are
typically also trained by the reverse mode of automatic differentiation.[39][40] They can process distributed
representations of structure, such as logical terms. A special case of recursive neural networks is the RNN
whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural
language processing.[41] The Recursive Neural Tensor Network uses a tensor-based composition function
for all nodes in the tree.[42]
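As an illustration of applying one set of weights recursively over a tree in topological (bottom-up) order, the following sketch composes child vectors into parent vectors; the tree encoding and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
W = rng.normal(size=(d, 2 * d)) * 0.1   # shared composition weights, reused at every tree node
b = np.zeros(d)

def compose(node):
    # a node is either a leaf vector (np.ndarray) or a (left, right) pair;
    # the same W is applied recursively, bottom-up (topological order)
    if isinstance(node, np.ndarray):
        return node
    left, right = node
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]) + b)

leaf = lambda: rng.normal(size=d)
tree = ((leaf(), leaf()), (leaf(), (leaf(), leaf())))   # a small binary parse tree
print(compose(tree))
```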

Neural history compressor

The neural history compressor is an unsupervised stack of RNNs.[43] At the input level, it learns to predict
its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become
inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher
level RNN thus studies a compressed representation of the information in the RNN below. This is done
such that the input sequence can be precisely reconstructed from the representation at the highest level.

The system effectively minimises the description length or the negative logarithm of the probability of the
data.[44] Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can
use supervised learning to easily classify even deep sequences with long intervals between important
events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the
"subconscious" automatizer (lower level).[43] Once the chunker has learned to predict and compress inputs
that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to
predict or imitate through additional units the hidden units of the more slowly changing chunker. This
makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In
turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the
chunker can focus on the remaining unpredictable events.[43]

In 1992, a generative model partially overcame the vanishing gradient problem[45] of automatic
differentiation or backpropagation in neural networks. In 1993, such a system solved a "Very Deep
Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[15]

Second order RNNs

Second order RNNs use higher-order weights w_{ijk} instead of the standard w_{ij} weights, and states can
be a product. This allows a direct mapping to a finite-state machine both in training, stability, and
representation.[46][47] Long short-term memory is an example of this but has no such formal mappings or
proof of stability.

Long short-term memory

Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM
is normally augmented by recurrent gates called "forget gates".[48] LSTM prevents backpropagated errors
from vanishing or exploding.[45] Instead, errors can flow backwards through unlimited numbers of virtual
layers unfolded in space. That is, LSTM can learn tasks[18] that require memories of events that happened
thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be
evolved.[49] LSTM works even given long delays between significant events and can handle signals that mix
low and high frequency components.

[Figure: Long short-term memory unit]
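A sketch of one common LSTM cell formulation (with a forget gate) in Python/NumPy; the exact gate equations vary between papers, and the parameter names and sizes here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    # a common LSTM gating scheme (details differ across formulations)
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(p["Wf"] @ z + p["bf"])   # forget gate: what to erase from the cell state
    i = sigmoid(p["Wi"] @ z + p["bi"])   # input gate: what to write
    o = sigmoid(p["Wo"] @ z + p["bo"])   # output gate: what to expose
    g = np.tanh(p["Wg"] @ z + p["bg"])   # candidate cell update
    c_t = f * c_prev + i * g             # additive cell update helps errors flow back unchanged
    h_t = o * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(4)
n_in, n_hid = 3, 6
p = {f"W{k}": rng.normal(size=(n_hid, n_in + n_hid)) * 0.1 for k in "fiog"}
p.update({f"b{k}": np.zeros(n_hid) for k in "fiog"})
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x_t, h, c, p)
print(h)
```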
Many applications use stacks of LSTM RNNs[50] and train them by Connectionist Temporal Classification
(CTC)[51] to find an RNN weight matrix that maximizes the probability of the label sequences in a training
set, given the corresponding input sequences. CTC achieves both alignment and recognition.

LSTM can learn to recognize context-sensitive languages unlike previous models based on hidden Markov
models (HMM) and similar concepts.[52]

Gated recurrent unit

Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks introduced in 2014. They
are used in the full form and several simplified variants.[53][54] Their performance on polyphonic music
modeling and speech signal modeling was found to be similar to that of long short-term memory.[55] They
have fewer parameters than LSTM, as they lack an output gate.[56]

[Figure: Gated recurrent unit]
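A sketch of the fully gated GRU update in Python/NumPy, showing the update gate z and reset gate r and the absence of a separate cell state and output gate; the parameter names and sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, p):
    # the "fully gated" GRU (one common formulation)
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])          # update gate
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev + p["br"])          # reset gate
    h_tilde = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (r * h_prev) + p["bh"])
    return (1 - z) * h_prev + z * h_tilde                            # no cell state, no output gate

rng = np.random.default_rng(5)
n_in, n_hid = 3, 6
p = {}
for k in ("z", "r", "h"):
    p[f"W{k}"] = rng.normal(size=(n_hid, n_in)) * 0.1
    p[f"U{k}"] = rng.normal(size=(n_hid, n_hid)) * 0.1
    p[f"b{k}"] = np.zeros(n_hid)
h = np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):
    h = gru_step(x_t, h, p)
print(h)
```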

Bi-directional

Bi-directional RNNs use a finite sequence to predict or label each element of the sequence based on the
element's past and future contexts. This is done by concatenating the outputs of two RNNs, one processing
the sequence from left to right, the other one from right to left. The combined outputs are the predictions of
the teacher-given target signals. This technique has been proven to be especially useful when combined
with LSTM RNNs.[57][58]
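A minimal illustration of the bidirectional idea: run one simple RNN left-to-right and another right-to-left over the same sequence, then concatenate their per-element outputs so each position carries both past and future context. The tiny tanh RNN used here is only a stand-in, not a specific published model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hid = 3, 4
Wf_in, Wf_rec = rng.normal(size=(n_hid, n_in)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1
Wb_in, Wb_rec = rng.normal(size=(n_hid, n_in)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1

def run(xs, W_in, W_rec):
    # run a simple tanh RNN over the sequence and return the output at every position
    h, out = np.zeros(n_hid), []
    for x_t in xs:
        h = np.tanh(W_in @ x_t + W_rec @ h)
        out.append(h)
    return out

xs = rng.normal(size=(7, n_in))
fwd = run(xs, Wf_in, Wf_rec)                    # left-to-right pass
bwd = run(xs[::-1], Wb_in, Wb_rec)[::-1]        # right-to-left pass, re-aligned to the sequence
combined = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]   # per-element features: past + future
print(combined[0].shape)                        # (8,)
```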

Continuous-time

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to
model the effects on a neuron of the incoming inputs.

For a neuron i in the network with activation y_i, the rate of change of activation is given by:

    \tau_i \dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji} \, \sigma(y_j - \Theta_j) + I_i(t)

Where:

    \tau_i : Time constant of postsynaptic node
    y_i : Activation of postsynaptic node
    \dot{y}_i : Rate of change of activation of postsynaptic node
    w_{ji} : Weight of connection from pre to postsynaptic node
    \sigma(x) : Sigmoid of x, e.g. \sigma(x) = 1/(1 + e^{-x})
    y_j : Activation of presynaptic node
    \Theta_j : Bias of presynaptic node
    I_i(t) : Input (if any) to node

CTRNNs have been applied to evolutionary robotics where they have been used to address vision,[59] co-
operation,[60] and minimal cognitive behaviour.[61]
Note that, by the Shannon sampling theorem, discrete time recurrent neural networks can be viewed as
continuous-time recurrent neural networks where the differential equations have transformed into
equivalent difference equations.[62] This transformation can be thought of as occurring after the post-
synaptic node activation functions have been low-pass filtered but prior to sampling.
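As an illustration of turning the CTRNN differential equation above into a difference equation, the sketch below applies a forward-Euler step; the step size, network size and weight-indexing convention are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ctrnn_euler_step(y, I, tau, W, theta, dt=0.01):
    # forward-Euler discretization of  tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i
    dydt = (-y + W.T @ sigmoid(y - theta) + I) / tau
    return y + dt * dydt

rng = np.random.default_rng(7)
n = 5
tau = np.full(n, 0.5)              # time constants
W = rng.normal(size=(n, n))        # W[j, i] = weight from presynaptic node j to postsynaptic node i
theta = np.zeros(n)
y = rng.normal(size=n)
for _ in range(1000):              # integrate for 10 simulated seconds
    y = ctrnn_euler_step(y, I=np.zeros(n), tau=tau, W=W, theta=theta)
print(y)
```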

Hierarchical recurrent neural network

Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose
hierarchical behavior into useful subprograms.[43][63] Such hierarchical structures of cognition are present
in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired
hierarchical models.[64]

Recurrent multilayer perceptron network

Generally, a recurrent multilayer perceptron (RMLP) network consists of cascaded subnetworks,
each of which contains multiple layers of nodes. Each of these subnetworks is feed-forward except for the
last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward
connections.[65]

Multiple timescales model

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can
simulate the functional hierarchy of the brain through self-organization, depending on the spatial
connections between neurons and on distinct types of neuron activities, each with distinct time
properties.[66][67] With such varied neuronal activities, continuous sequences of any set of behaviors are
segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential
behaviors. The biological plausibility of such a hierarchy was discussed in the memory-prediction theory
of brain function by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of
memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.[64][68]

Neural Turing machines

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to
external memory resources which they can interact with by attentional processes. The combined system is
analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to
be efficiently trained with gradient descent.[69]

Differentiable neural computer

Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the
usage of fuzzy amounts of each memory address and a record of chronology.

Neural network pushdown automata


Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analogue stacks
that are differentiable and trainable. In this way, they are similar in complexity to recognizers of
context-free grammars (CFGs).[70]

Memristive networks

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.[71] The
memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically
tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has
funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive
and Neural Systems (CNS), to develop neuromorphic architectures which may be based on memristive
systems. Memristive networks are a particular type of physical neural network with properties very similar
to those of (Little-)Hopfield networks: they have continuous dynamics, a limited memory capacity, and they
naturally relax via the minimization of a function that is asymptotic to the Ising model. In this sense,
the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of
exhibiting more interesting non-linear behavior. From this point of view, engineering analog memristive
networks amounts to a peculiar type of neuromorphic engineering in which the device behavior depends on
the circuit wiring, or topology.[72][73] The evolution of these networks can be studied analytically using
variations of the Caravelli-Traversa-Di Ventra equation.

Training

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In
neural networks, it can be used to minimize the error term by changing each weight in proportion to the
derivative of the error with respect to that weight, provided the non-linear activation functions are
differentiable. Various methods for doing so were developed in the 1980s and early 1990s by Werbos,
Williams, Robinson, Schmidhuber, Hochreiter, Pearlmutter and others.

The standard method is called "backpropagation through time" or BPTT, and is a generalization of back-
propagation for feed-forward networks.[74][75] Like that method, it is an instance of automatic
differentiation in the reverse accumulation mode of Pontryagin's minimum principle. A more
computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,[76][77] which
is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors.
Unlike BPTT, this algorithm is local in time but not local in space.
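As a rough illustration of BPTT in practice, the following PyTorch sketch unrolls a small RNN over chunks of a sequence, backpropagates the loss through the unrolled steps, and detaches the hidden state between chunks (truncated BPTT). The model, data and hyperparameters are placeholders, not from the sources cited here.

```python
import torch

torch.manual_seed(0)
n_in, n_hid, chunk = 3, 8, 10
rnn = torch.nn.RNN(n_in, n_hid, batch_first=True)      # simple tanh RNN
readout = torch.nn.Linear(n_hid, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=0.01)

xs = torch.randn(1, 100, n_in)          # one long input sequence
ys = torch.randn(1, 100, 1)             # dummy targets
h = torch.zeros(1, 1, n_hid)

for t0 in range(0, 100, chunk):         # process the sequence chunk by chunk
    x_chunk, y_chunk = xs[:, t0:t0 + chunk], ys[:, t0:t0 + chunk]
    out, h = rnn(x_chunk, h)            # unroll the RNN over this chunk
    loss = torch.mean((readout(out) - y_chunk) ** 2)
    opt.zero_grad()
    loss.backward()                     # BPTT: gradients flow back through the unrolled time steps
    opt.step()
    h = h.detach()                      # truncate: stop gradients flowing into earlier chunks
print(loss.item())
```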

In this context, local in space means that a unit's weight vector can be updated using only information
stored in the connected units and the unit itself such that update complexity of a single unit is linear in the
dimensionality of the weight vector. Local in time means that the updates take place continually (on-line)
and depend only on the most recent time step rather than on multiple time steps within a given time horizon
as in BPTT. Biological neural networks appear to be local with respect to both time and space.[78][79]

For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden
units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number
of weights) per time step, at the cost of storing all forward activations within the given time horizon.[80] An
online hybrid between BPTT and RTRL with intermediate complexity exists,[81][82] along with variants for
continuous time.[83]
A major problem with gradient descent for standard RNN architectures is that error gradients vanish
exponentially quickly with the size of the time lag between important events.[45][84] LSTM combined with
a BPTT/RTRL hybrid learning method attempts to overcome these problems.[16] This problem is also
solved in the independently recurrent neural network (IndRNN)[37] by reducing the context of a neuron to
its own past state and the cross-neuron information can then be explored in the following layers. Memories
of different range including long-term memory can be learned without the gradient vanishing and exploding
problem.

The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT
and RTRL paradigms for locally recurrent networks.[85] It works with the most general locally recurrent
networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the
algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local
feedback.

One approach to the computation of gradient information in RNNs with arbitrary architectures is based on
diagrammatic derivation with signal-flow graphs.[86] It uses the BPTT batch algorithm, based on Lee's theorem
for network sensitivity calculations.[87] It was proposed by Wan and Beaufays, while its fast online version
was proposed by Campolucci, Uncini and Piazza.[87]

Global optimization methods

Training the weights in a neural network can be modeled as a non-linear global optimization problem. A
target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First,
the weights in the network are set according to the weight vector. Next, the network is evaluated against the
training sequence. Typically, the sum-squared-difference between the predictions and the target values
specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global
optimization techniques may then be used to minimize this target function.

The most common global optimization method for training RNNs is the genetic algorithm, especially in
unstructured networks.[88][89][90]

Initially, the neural network weights are encoded into the genetic algorithm in a predefined manner, where
one gene in the chromosome represents one weight link. The whole network is represented as a single
chromosome. The fitness function is evaluated as follows:

Each weight encoded in the chromosome is assigned to the respective weight link of the
network.
The training set is presented to the network which propagates the input signals forward.
The mean-squared-error is returned to the fitness function.
This function drives the genetic selection process.

Many chromosomes make up the population; therefore, many different neural networks are evolved until a
stopping criterion is satisfied. A common stopping scheme is:

When the neural network has learnt a certain percentage of the training data or
When the minimum value of the mean-squared-error is satisfied or
When the maximum number of training generations has been reached.

The stopping criterion is evaluated by the fitness function as it gets the reciprocal of the mean-squared-error
from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness
function, reducing the mean-squared-error.
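A sketch of the fitness evaluation described above, in Python/NumPy: each chromosome is decoded gene-by-gene into the network's weight links, the training sequence is run forward, and the reciprocal of the mean-squared-error is returned as the fitness. The network shape and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n_in, n_hid = 2, 4
shapes = [(n_hid, n_in), (n_hid, n_hid), (1, n_hid)]     # W_in, W_rec, W_out
n_genes = sum(r * c for r, c in shapes)                  # one gene per weight link

def decode(chromosome):
    # assign each gene to its respective weight link
    mats, i = [], 0
    for r, c in shapes:
        mats.append(chromosome[i:i + r * c].reshape(r, c))
        i += r * c
    return mats

def fitness(chromosome, xs, ys):
    # present the training sequence, propagate forward, return 1 / MSE (higher is better)
    W_in, W_rec, W_out = decode(chromosome)
    h, err = np.zeros(n_hid), 0.0
    for x_t, y_t in zip(xs, ys):
        h = np.tanh(W_in @ x_t + W_rec @ h)
        pred = (W_out @ h)[0]
        err += (pred - y_t) ** 2
    return 1.0 / (err / len(xs) + 1e-12)

xs, ys = rng.normal(size=(50, n_in)), rng.normal(size=50)
population = rng.normal(size=(20, n_genes))              # 20 chromosomes
scores = [fitness(c, xs, ys) for c in population]
print(max(scores))                                       # selection would favor the fittest chromosomes
```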
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such
as simulated annealing or particle swarm optimization.

Related fields and models


RNNs may behave chaotically. In such cases, dynamical systems theory may be used for analysis.

They are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas
recursive neural networks operate on any hierarchical structure, combining child representations into parent
representations, recurrent neural networks operate on the linear progression of time, combining the previous
time step and a hidden representation into the representation for the current time step.

In particular, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse
response filters and also as a nonlinear autoregressive exogenous model (NARX).[91]

A learning algorithm recommendation framework may help guide the selection of a learning algorithm and
scientific discipline (e.g. RNN, GAN, RL, CNN, ...). The framework was generated from an extensive
analysis of the literature and is dedicated to recurrent neural networks and their variants.[92]

Libraries
Apache Singa
Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU
and GPU. Developed in C++, and has Python and MATLAB wrappers.
Chainer: Fully in Python, production support for CPU, GPU, distributed training.
Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark.
Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia.
Keras: High-level API, providing a wrapper to many other deep learning libraries.
Microsoft Cognitive Toolkit
MXNet: an open-source deep learning framework used to train and deploy deep neural
networks.
PyTorch: Tensors and Dynamic neural networks in Python with GPU acceleration.
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary
TPU,[93] and mobile devices.
Theano: A deep-learning library for Python with an API largely compatible with the NumPy
library.
Torch: A scientific computing framework with support for machine learning algorithms, written
in C and Lua.

Applications
Applications of recurrent neural networks include:

Machine translation[25]
Robot control[94]
Time series prediction[95][96][97]
Speech recognition[98][99][100]
Speech synthesis[101]
Brain–computer interfaces[102]
Time series anomaly detection[103]
Text-to-Video model[104]
Rhythm learning[105]
Music composition[106]
Grammar learning[107][108][109]
Handwriting recognition[110][111]
Human action recognition[112]
Protein homology detection[113]
Predicting subcellular localization of proteins[58]
Several prediction tasks in the area of business process management[114]
Prediction in medical care pathways[115]
Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network
(FRNN) code) [116]

References
1. Dupond, Samuel (2019). "A thorough review on the current advance of neural network
structures" (https://www.sciencedirect.com/journal/annual-reviews-in-control). Annual
Reviews in Control. 14: 200–230.
2. Abiodun, Oludare Isaac; Jantan, Aman; Omolara, Abiodun Esther; Dada, Kemi Victoria;
Mohamed, Nachaat Abdelatif; Arshad, Humaira (2018-11-01). "State-of-the-art in artificial
neural network applications: A survey" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6260
436). Heliyon. 4 (11): e00938. Bibcode:2018Heliy...400938A (https://ui.adsabs.harvard.edu/a
bs/2018Heliy...400938A). doi:10.1016/j.heliyon.2018.e00938 (https://doi.org/10.1016%2Fj.h
eliyon.2018.e00938). ISSN 2405-8440 (https://www.worldcat.org/issn/2405-8440).
PMC 6260436 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6260436). PMID 30519653
(https://pubmed.ncbi.nlm.nih.gov/30519653).
3. Tealab, Ahmed (2018-12-01). "Time series forecasting using artificial neural networks
methodologies: A systematic review" (https://doi.org/10.1016%2Fj.fcij.2018.10.003). Future
Computing and Informatics Journal. 3 (2): 334–340. doi:10.1016/j.fcij.2018.10.003 (https://do
i.org/10.1016%2Fj.fcij.2018.10.003). ISSN 2314-7288 (https://www.worldcat.org/issn/2314-7
288).
4. Graves, Alex; Liwicki, Marcus; Fernandez, Santiago; Bertolami, Roman; Bunke, Horst;
Schmidhuber, Jürgen (2009). "A Novel Connectionist System for Improved Unconstrained
Handwriting Recognition" (http://www.idsia.ch/~juergen/tpami_2008.pdf) (PDF). IEEE
Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868.
CiteSeerX 10.1.1.139.4502 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.139.
4502). doi:10.1109/tpami.2008.137 (https://doi.org/10.1109%2Ftpami.2008.137).
PMID 19299860 (https://pubmed.ncbi.nlm.nih.gov/19299860). S2CID 14635907 (https://api.s
emanticscholar.org/CorpusID:14635907).
5. Sak, Haşim; Senior, Andrew; Beaufays, Françoise (2014). "Long Short-Term Memory
recurrent neural network architectures for large scale acoustic modeling" (https://research.go
ogle.com/pubs/archive/43905.pdf) (PDF).
6. Li, Xiangang; Wu, Xihong (2014-10-15). "Constructing Long Short-Term Memory based
Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition".
arXiv:1410.4281 (https://arxiv.org/abs/1410.4281) [cs.CL (https://arxiv.org/archive/cs.CL)].
7. Hyötyniemi, Heikki (1996). "Turing machines are recurrent neural networks". Proceedings of
STeP '96/Publications of the Finnish Artificial Intelligence Society: 13–24.
8. Miljanovic, Milos (Feb–Mar 2012). "Comparative analysis of Recurrent and Finite Impulse
Response Neural Networks in Time Series Prediction" (http://www.ijcse.com/docs/INDJCSE
12-03-01-028.pdf) (PDF). Indian Journal of Computer and Engineering. 3 (1).
9. Lenz, W. (1920), "Beiträge zum Verständnis der magnetischen Eigenschaften in festen
Körpern", Physikalische Zeitschrift, 21: 613–615.
10. Ising, E. (1925), "Beitrag zur Theorie des Ferromagnetismus", Z. Phys., 31 (1): 253–258,
Bibcode:1925ZPhy...31..253I (https://ui.adsabs.harvard.edu/abs/1925ZPhy...31..253I),
doi:10.1007/BF02980577 (https://doi.org/10.1007%2FBF02980577), S2CID 122157319 (htt
ps://api.semanticscholar.org/CorpusID:122157319)
11. Brush, Stephen G. (1967). "History of the Lenz-Ising Model". Reviews of Modern Physics. 39
(4): 883–893. Bibcode:1967RvMP...39..883B (https://ui.adsabs.harvard.edu/abs/1967RvM
P...39..883B). doi:10.1103/RevModPhys.39.883 (https://doi.org/10.1103%2FRevModPhys.3
9.883).
12. Amari, Shun-Ichi (1972). "Learning patterns and pattern sequences by self-organizing nets
of threshold elements". IEEE Transactions. C (21): 1197–1206.
13. Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning".
arXiv:2212.11279 (https://arxiv.org/abs/2212.11279) [cs.NE (https://arxiv.org/archive/cs.NE)].
14. Williams, Ronald J.; Hinton, Geoffrey E.; Rumelhart, David E. (October 1986). "Learning
representations by back-propagating errors". Nature. 323 (6088): 533–536.
Bibcode:1986Natur.323..533R (https://ui.adsabs.harvard.edu/abs/1986Natur.323..533R).
doi:10.1038/323533a0 (https://doi.org/10.1038%2F323533a0). ISSN 1476-4687 (https://ww
w.worldcat.org/issn/1476-4687). S2CID 205001834 (https://api.semanticscholar.org/CorpusI
D:205001834).
15. Schmidhuber, Jürgen (1993). Habilitation thesis: System modeling and optimization (ftp://ftp.i
dsia.ch/pub/juergen/habilitation.pdf) (PDF). Page 150 ff demonstrates credit assignment
across the equivalent of 1,200 layers in an unfolded RNN.
16. Hochreiter, Sepp; Schmidhuber, Jürgen (1997-11-01). "Long Short-Term Memory". Neural
Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735 (https://doi.org/10.1162%2
Fneco.1997.9.8.1735). PMID 9377276 (https://pubmed.ncbi.nlm.nih.gov/9377276).
S2CID 1915014 (https://api.semanticscholar.org/CorpusID:1915014).
17. Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). An Application of
Recurrent Neural Networks to Discriminative Keyword Spotting (http://dl.acm.org/citation.cf
m?id=1778066.1778092). Proceedings of the 17th International Conference on Artificial
Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag. pp. 220–229. ISBN 978-3-
540-74693-5.
18. Schmidhuber, Jürgen (January 2015). "Deep Learning in Neural Networks: An Overview".
Neural Networks. 61: 85–117. arXiv:1404.7828 (https://arxiv.org/abs/1404.7828).
doi:10.1016/j.neunet.2014.09.003 (https://doi.org/10.1016%2Fj.neunet.2014.09.003).
PMID 25462637 (https://pubmed.ncbi.nlm.nih.gov/25462637). S2CID 11715509 (https://api.s
emanticscholar.org/CorpusID:11715509).
19. Graves, Alex; Schmidhuber, Jürgen (2009). "Offline Handwriting Recognition with
Multidimensional Recurrent Neural Networks" (https://papers.nips.cc/paper/3449-offline-han
dwriting-recognition-with-multidimensional-recurrent-neural-networks). In Koller, D.;
Schuurmans, D.; Bengio, Y.; Bottou, L. (eds.). Advances in Neural Information Processing
Systems. Vol. 21. Neural Information Processing Systems (NIPS) Foundation. pp. 545–552.
20. "2000 HUB5 English Evaluation Speech - Linguistic Data Consortium" (https://catalog.ldc.u
penn.edu/LDC2002S09). catalog.ldc.upenn.edu.
21. Hannun, Awni; Case, Carl; Casper, Jared; Catanzaro, Bryan; Diamos, Greg; Elsen, Erich;
Prenger, Ryan; Satheesh, Sanjeev; Sengupta, Shubho (2014-12-17). "Deep Speech:
Scaling up end-to-end speech recognition". arXiv:1412.5567 (https://arxiv.org/abs/1412.556
7) [cs.CL (https://arxiv.org/archive/cs.CL)].
22. Fan, Bo; Wang, Lijuan; Soong, Frank K.; Xie, Lei (2015) "Photo-Real Talking Head with
Deep Bidirectional LSTM", in Proceedings of ICASSP 2015
23. Zen, Heiga; Sak, Haşim (2015). "Unidirectional Long Short-Term Memory Recurrent Neural
Network with Recurrent Output Layer for Low-Latency Speech Synthesis" (https://static.goog
leusercontent.com/media/research.google.com/en//pubs/archive/43266.pdf) (PDF).
Google.com. ICASSP. pp. 4470–4474.
24. Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan
(September 2015). "Google voice search: faster and more accurate" (http://googleresearch.b
logspot.ch/2015/09/google-voice-search-faster-and-more.html).
25. Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014). "Sequence to Sequence Learning with
Neural Networks" (https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-n
eural-networks.pdf) (PDF). Electronic Proceedings of the Neural Information Processing
Systems Conference. 27: 5346. arXiv:1409.3215 (https://arxiv.org/abs/1409.3215).
Bibcode:2014arXiv1409.3215S (https://ui.adsabs.harvard.edu/abs/2014arXiv1409.3215S).
26. Jozefowicz, Rafal; Vinyals, Oriol; Schuster, Mike; Shazeer, Noam; Wu, Yonghui (2016-02-
07). "Exploring the Limits of Language Modeling". arXiv:1602.02410 (https://arxiv.org/abs/16
02.02410) [cs.CL (https://arxiv.org/archive/cs.CL)].
27. Gillick, Dan; Brunk, Cliff; Vinyals, Oriol; Subramanya, Amarnag (2015-11-30). "Multilingual
Language Processing From Bytes". arXiv:1512.00103 (https://arxiv.org/abs/1512.00103)
[cs.CL (https://arxiv.org/archive/cs.CL)].
28. Vinyals, Oriol; Toshev, Alexander; Bengio, Samy; Erhan, Dumitru (2014-11-17). "Show and
Tell: A Neural Image Caption Generator". arXiv:1411.4555 (https://arxiv.org/abs/1411.4555)
[cs.CV (https://arxiv.org/archive/cs.CV)].
29. Cruse, Holk; Neural Networks as Cybernetic Systems (http://www.brains-minds-media.org/ar
chive/615/bmm615.pdf), 2nd and revised edition
30. Elman, Jeffrey L. (1990). "Finding Structure in Time" (https://doi.org/10.1016%2F0364-021
3%2890%2990002-E). Cognitive Science. 14 (2): 179–211. doi:10.1016/0364-
0213(90)90002-E (https://doi.org/10.1016%2F0364-0213%2890%2990002-E).
31. Jordan, Michael I. (1997-01-01). "Serial Order: A Parallel Distributed Processing Approach".
Neural-Network Models of Cognition - Biobehavioral Foundations. Advances in Psychology.
Neural-Network Models of Cognition. Vol. 121. pp. 471–495. doi:10.1016/s0166-
4115(97)80111-2 (https://doi.org/10.1016%2Fs0166-4115%2897%2980111-2). ISBN 978-0-
444-81931-4. S2CID 15375627 (https://api.semanticscholar.org/CorpusID:15375627).
32. Kosko, Bart (1988). "Bidirectional associative memories". IEEE Transactions on Systems,
Man, and Cybernetics. 18 (1): 49–60. doi:10.1109/21.87054 (https://doi.org/10.1109%2F21.8
7054). S2CID 59875735 (https://api.semanticscholar.org/CorpusID:59875735).
33. Rakkiyappan, Rajan; Chandrasekar, Arunachalam; Lakshmanan, Subramanian; Park, Ju H.
(2 January 2015). "Exponential stability for markovian jumping stochastic BAM neural
networks with mode-dependent probabilistic time-varying delays and impulse control".
Complexity. 20 (3): 39–65. Bibcode:2015Cmplx..20c..39R (https://ui.adsabs.harvard.edu/ab
s/2015Cmplx..20c..39R). doi:10.1002/cplx.21503 (https://doi.org/10.1002%2Fcplx.21503).
34. Rojas, Rául (1996). Neural networks: a systematic introduction (https://books.google.com/bo
oks?id=txsjjYzFJS4C&pg=PA336). Springer. p. 336. ISBN 978-3-540-60505-8.
35. Jaeger, Herbert; Haas, Harald (2004-04-02). "Harnessing Nonlinearity: Predicting Chaotic
Systems and Saving Energy in Wireless Communication". Science. 304 (5667): 78–80.
Bibcode:2004Sci...304...78J (https://ui.adsabs.harvard.edu/abs/2004Sci...304...78J).
CiteSeerX 10.1.1.719.2301 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.719.
2301). doi:10.1126/science.1091277 (https://doi.org/10.1126%2Fscience.1091277).
PMID 15064413 (https://pubmed.ncbi.nlm.nih.gov/15064413). S2CID 2184251 (https://api.se
manticscholar.org/CorpusID:2184251).
36. Maass, Wolfgang; Natschläger, Thomas; Markram, Henry (2002). "Real-time computing
without stable states: a new framework for neural computation based on perturbations" (http
s://igi-web.tugraz.at/people/maass/psfiles/130.pdf) (PDF). Neural Computation. 14 (11):
2531–2560. doi:10.1162/089976602760407955 (https://doi.org/10.1162%2F089976602760
407955). PMID 12433288 (https://pubmed.ncbi.nlm.nih.gov/12433288). S2CID 1045112 (htt
ps://api.semanticscholar.org/CorpusID:1045112).
37. Li, Shuai; Li, Wanqing; Cook, Chris; Zhu, Ce; Yanbo, Gao (2018). "Independently Recurrent
Neural Network (IndRNN): Building a Longer and Deeper RNN". arXiv:1803.04831 (https://a
rxiv.org/abs/1803.04831) [cs.CV (https://arxiv.org/archive/cs.CV)].
38. Goller, Christoph; Küchler, Andreas (1996). "Learning task-dependent distributed
representations by backpropagation through structure". Proceedings of International
Conference on Neural Networks (ICNN'96). Vol. 1. p. 347. CiteSeerX 10.1.1.52.4759 (https://
citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.52.4759).
doi:10.1109/ICNN.1996.548916 (https://doi.org/10.1109%2FICNN.1996.548916). ISBN 978-
0-7803-3210-2. S2CID 6536466 (https://api.semanticscholar.org/CorpusID:6536466).
39. Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an
algorithm as a Taylor expansion of the local rounding errors. M.Sc. thesis (in Finnish),
University of Helsinki.
40. Griewank, Andreas; Walther, Andrea (2008). Evaluating Derivatives: Principles and
Techniques of Algorithmic Differentiation (https://books.google.com/books?id=xoiiLaRxcbE
C) (Second ed.). SIAM. ISBN 978-0-89871-776-1.
41. Socher, Richard; Lin, Cliff; Ng, Andrew Y.; Manning, Christopher D., "Parsing Natural
Scenes and Natural Language with Recursive Neural Networks" (https://ai.stanford.edu/~an
g/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf) (PDF), 28th International
Conference on Machine Learning (ICML 2011)
42. Socher, Richard; Perelygin, Alex; Wu, Jean Y.; Chuang, Jason; Manning, Christopher D.; Ng,
Andrew Y.; Potts, Christopher. "Recursive Deep Models for Semantic Compositionality Over
a Sentiment Treebank" (http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) (PDF).
Emnlp 2013.
43. Schmidhuber, Jürgen (1992). "Learning complex, extended sequences using the principle of
history compression" (ftp://ftp.idsia.ch/pub/juergen/chunker.pdf) (PDF). Neural Computation.
4 (2): 234–242. doi:10.1162/neco.1992.4.2.234 (https://doi.org/10.1162%2Fneco.1992.4.2.2
34). S2CID 18271205 (https://api.semanticscholar.org/CorpusID:18271205).
44. Schmidhuber, Jürgen (2015). "Deep Learning" (https://doi.org/10.4249%2Fscholarpedia.328
32). Scholarpedia. 10 (11): 32832. Bibcode:2015SchpJ..1032832S (https://ui.adsabs.harvar
d.edu/abs/2015SchpJ..1032832S). doi:10.4249/scholarpedia.32832 (https://doi.org/10.424
9%2Fscholarpedia.32832).
45. Hochreiter, Sepp (1991), Untersuchungen zu dynamischen neuronalen Netzen (http://peopl
e.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf), Diploma thesis,
Institut f. Informatik, Technische Univ. Munich, Advisor Jürgen Schmidhuber
46. Giles, C. Lee; Miller, Clifford B.; Chen, Dong; Chen, Hsing-Hen; Sun, Guo-Zheng; Lee, Yee-
Chun (1992). "Learning and Extracting Finite State Automata with Second-Order Recurrent
Neural Networks" (https://clgiles.ist.psu.edu/pubs/NC1992-recurrent-NN.pdf) (PDF). Neural
Computation. 4 (3): 393–405. doi:10.1162/neco.1992.4.3.393 (https://doi.org/10.1162%2Fne
co.1992.4.3.393). S2CID 19666035 (https://api.semanticscholar.org/CorpusID:19666035).
47. Omlin, Christian W.; Giles, C. Lee (1996). "Constructing Deterministic Finite-State Automata
in Recurrent Neural Networks". Journal of the ACM. 45 (6): 937–972.
CiteSeerX 10.1.1.32.2364 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.23
64). doi:10.1145/235809.235811 (https://doi.org/10.1145%2F235809.235811).
S2CID 228941 (https://api.semanticscholar.org/CorpusID:228941).
48. Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, Jürgen (2002). "Learning Precise
Timing with LSTM Recurrent Networks" (http://www.jmlr.org/papers/volume3/gers02a/gers02
a.pdf) (PDF). Journal of Machine Learning Research. 3: 115–143. Retrieved 2017-06-13.
49. Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Jürgen (2009-09-14). Evolving
Memory Cell Structures for Sequence Learning (https://mediatum.ub.tum.de/doc/1289041/do
cument.pdf) (PDF). Artificial Neural Networks – ICANN 2009. Lecture Notes in Computer
Science. Vol. 5769. Berlin, Heidelberg: Springer. pp. 755–764. doi:10.1007/978-3-642-
04277-5_76 (https://doi.org/10.1007%2F978-3-642-04277-5_76). ISBN 978-3-642-04276-8.
50. Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in
structured domains with hierarchical recurrent neural networks". Proc. 20th International
Joint Conference on Artificial Intelligence, Ijcai 2007: 774–779. CiteSeerX 10.1.1.79.1887 (ht
tps://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.79.1887).
51. Graves, Alex; Fernández, Santiago; Gomez, Faustino J. (2006). "Connectionist temporal
classification: Labelling unsegmented sequence data with recurrent neural networks".
Proceedings of the International Conference on Machine Learning: 369–376.
CiteSeerX 10.1.1.75.6306 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.75.63
06).
52. Gers, Felix A.; Schmidhuber, Jürgen (November 2001). "LSTM recurrent networks learn
simple context-free and context-sensitive languages" (https://semanticscholar.org/paper/f828
b401c86e0f8fddd8e77774e332dfd226cb05). IEEE Transactions on Neural Networks. 12 (6):
1333–1340. doi:10.1109/72.963769 (https://doi.org/10.1109%2F72.963769). ISSN 1045-
9227 (https://www.worldcat.org/issn/1045-9227). PMID 18249962 (https://pubmed.ncbi.nlm.n
ih.gov/18249962). S2CID 10192330 (https://api.semanticscholar.org/CorpusID:10192330).
53. Heck, Joel; Salem, Fathi M. (2017-01-12). "Simplified Minimal Gated Unit Variations for
Recurrent Neural Networks". arXiv:1701.03452 (https://arxiv.org/abs/1701.03452) [cs.NE (htt
ps://arxiv.org/archive/cs.NE)].
54. Dey, Rahul; Salem, Fathi M. (2017-01-20). "Gate-Variants of Gated Recurrent Unit (GRU)
Neural Networks". arXiv:1701.05923 (https://arxiv.org/abs/1701.05923) [cs.NE (https://arxiv.o
rg/archive/cs.NE)].
55. Chung, Junyoung; Gulcehre, Caglar; Cho, KyungHyun; Bengio, Yoshua (2014). "Empirical
Evaluation of Gated Recurrent Neural Networks on Sequence Modeling". arXiv:1412.3555
(https://arxiv.org/abs/1412.3555) [cs.NE (https://arxiv.org/archive/cs.NE)].
56. Britz, Denny (October 27, 2015). "Recurrent Neural Network Tutorial, Part 4 – Implementing
a GRU/LSTM RNN with Python and Theano – WildML" (http://www.wildml.com/2015/10/rec
urrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/).
Wildml.com. Retrieved May 18, 2016.
57. Graves, Alex; Schmidhuber, Jürgen (2005-07-01). "Framewise phoneme classification with
bidirectional LSTM and other neural network architectures". Neural Networks. IJCNN 2005.
18 (5): 602–610. CiteSeerX 10.1.1.331.5800 (https://citeseerx.ist.psu.edu/viewdoc/summar
y?doi=10.1.1.331.5800). doi:10.1016/j.neunet.2005.06.042 (https://doi.org/10.1016%2Fj.neu
net.2005.06.042). PMID 16112549 (https://pubmed.ncbi.nlm.nih.gov/16112549).
S2CID 1856462 (https://api.semanticscholar.org/CorpusID:1856462).
58. Thireou, Trias; Reczko, Martin (July 2007). "Bidirectional Long Short-Term Memory
Networks for Predicting the Subcellular Localization of Eukaryotic Proteins". IEEE/ACM
Transactions on Computational Biology and Bioinformatics. 4 (3): 441–446.
doi:10.1109/tcbb.2007.1015 (https://doi.org/10.1109%2Ftcbb.2007.1015). PMID 17666763
(https://pubmed.ncbi.nlm.nih.gov/17666763). S2CID 11787259 (https://api.semanticscholar.o
rg/CorpusID:11787259).
59. Harvey, Inman; Husbands, Phil; Cliff, Dave (1994), "Seeing the light: Artificial evolution, real
vision" (https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_E
volution_Real_Vision), 3rd international conference on Simulation of adaptive behavior:
from animals to animats 3, pp. 392–401
60. Quinn, Matt (2001). "Evolving communication without dedicated communication channels".
Advances in Artificial Life: 6th European Conference, ECAL 2001. pp. 357–366.
doi:10.1007/3-540-44811-X_38 (https://doi.org/10.1007%2F3-540-44811-X_38). ISBN 978-
3-540-42567-0.
61. Beer, Randall D. (1997). "The dynamics of adaptive behavior: A research program".
Robotics and Autonomous Systems. 20 (2–4): 257–289. doi:10.1016/S0921-
8890(96)00063-2 (https://doi.org/10.1016%2FS0921-8890%2896%2900063-2).
62. Sherstinsky, Alex (2018-12-07). Bloem-Reddy, Benjamin; Paige, Brooks; Kusner, Matt;
Caruana, Rich; Rainforth, Tom; Teh, Yee Whye (eds.). Deriving the Recurrent Neural
Network Definition and RNN Unrolling Using Signal Processing (https://www.researchgate.n
et/publication/331718291). Critiquing and Correcting Trends in Machine Learning Workshop
at NeurIPS-2018 (https://ml-critique-correct.github.io/).
63. Paine, Rainer W.; Tani, Jun (2005-09-01). "How Hierarchical Control Self-organizes in
Artificial Adaptive Systems". Adaptive Behavior. 13 (3): 211–225.
doi:10.1177/105971230501300303 (https://doi.org/10.1177%2F105971230501300303).
S2CID 9932565 (https://api.semanticscholar.org/CorpusID:9932565).
64. "Burns, Benureau, Tani (2018) A Bergson-Inspired Adaptive Time Constant for the Multiple
Timescales Recurrent Neural Network Model. JNNS" (https://www.researchgate.net/publicat
ion/328474302).
65. Tutschku, Kurt (June 1995). Recurrent Multilayer Perceptrons for Identification and Control:
The Road to Applications. Institute of Computer Science Research Report. Vol. 118.
University of Würzburg Am Hubland. CiteSeerX 10.1.1.45.3527 (https://citeseerx.ist.psu.edu/
viewdoc/summary?doi=10.1.1.45.3527).
66. Yamashita, Yuichi; Tani, Jun (2008-11-07). "Emergence of Functional Hierarchy in a Multiple
Timescale Neural Network Model: A Humanoid Robot Experiment" (https://www.ncbi.nlm.ni
h.gov/pmc/articles/PMC2570613). PLOS Computational Biology. 4 (11): e1000220.
Bibcode:2008PLSCB...4E0220Y (https://ui.adsabs.harvard.edu/abs/2008PLSCB...4E0220
Y). doi:10.1371/journal.pcbi.1000220 (https://doi.org/10.1371%2Fjournal.pcbi.1000220).
PMC 2570613 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2570613). PMID 18989398
(https://pubmed.ncbi.nlm.nih.gov/18989398).
67. Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun (2013). "The hierarchical and functional
connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the
stability and flexibility of working memory" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3
575058). Frontiers in Neurorobotics. 7: 2. doi:10.3389/fnbot.2013.00002 (https://doi.org/10.3
389%2Ffnbot.2013.00002). PMC 3575058 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3
575058). PMID 23423881 (https://pubmed.ncbi.nlm.nih.gov/23423881).
68. "Proceedings of the 28th Annual Conference of the Japanese Neural Network Society
(October, 2018)" (http://jnns.org/conference/2018/JNNS2018_Technical_Programs.pdf)
(PDF).
69. Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines".
arXiv:1410.5401 (https://arxiv.org/abs/1410.5401) [cs.NE (https://arxiv.org/archive/cs.NE)].
70. Sun, Guo-Zheng; Giles, C. Lee; Chen, Hsing-Hen (1998). "The Neural Network Pushdown
Automaton: Architecture, Dynamics and Training". In Giles, C. Lee; Gori, Marco (eds.).
Adaptive Processing of Sequences and Data Structures. Lecture Notes in Computer
Science. Berlin, Heidelberg: Springer. pp. 296–345. CiteSeerX 10.1.1.56.8723 (https://citese
erx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.8723). doi:10.1007/bfb0054003 (https://doi.
org/10.1007%2Fbfb0054003). ISBN 978-3-540-64341-8.
71. Snider, Greg (2008), "Cortical computing with memristive nanodevices" (http://www.scidacre
view.org/0804/html/hardware.html), Sci-DAC Review, 10: 58–65
72. Caravelli, Francesco; Traversa, Fabio Lorenzo; Di Ventra, Massimiliano (2017). "The
complex dynamics of memristive circuits: analytical results and universal slow relaxation".
Physical Review E. 95 (2): 022140. arXiv:1608.08651 (https://arxiv.org/abs/1608.08651).
Bibcode:2017PhRvE..95b2140C (https://ui.adsabs.harvard.edu/abs/2017PhRvE..95b2140
C). doi:10.1103/PhysRevE.95.022140 (https://doi.org/10.1103%2FPhysRevE.95.022140).
PMID 28297937 (https://pubmed.ncbi.nlm.nih.gov/28297937). S2CID 6758362 (https://api.se
manticscholar.org/CorpusID:6758362).
73. Caravelli, Francesco (2019-11-07). "Asymptotic Behavior of Memristive Circuits" (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC789). Entropy. 21 (8): 789. arXiv:1712.07046 (https://arx
iv.org/abs/1712.07046). Bibcode:2019Entrp..21..789C (https://ui.adsabs.harvard.edu/abs/20
19Entrp..21..789C). doi:10.3390/e21080789 (https://doi.org/10.3390%2Fe21080789).
PMC 789 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC789). PMID 33267502 (https://pub
med.ncbi.nlm.nih.gov/33267502).
74. Werbos, Paul J. (1988). "Generalization of backpropagation with application to a recurrent
gas market model" (https://www.semanticscholar.org/paper/Learning-representations-by-bac
k-propagating-errors-Rumelhart-Hinton/052b1d8ce63b07fec3de9dbb583772d860b7c769).
Neural Networks. 1 (4): 339–356. doi:10.1016/0893-6080(88)90007-x (https://doi.org/10.101
6%2F0893-6080%2888%2990007-x). S2CID 205001834 (https://api.semanticscholar.org/C
orpusID:205001834).
75. Rumelhart, David E. (1985). Learning Internal Representations by Error Propagation (https://
books.google.com/books?id=Ff9iHAAACAAJ). San Diego (CA): Institute for Cognitive
Science, University of California.
76. Robinson, Anthony J.; Fallside, Frank (1987). The Utility Driven Dynamic Error Propagation
Network (https://books.google.com/books?id=6JYYMwEACAAJ). Technical Report
CUED/F-INFENG/TR.1. Department of Engineering, University of Cambridge.
77. Williams, Ronald J.; Zipser, D. (1 February 2013). "Gradient-based learning algorithms for
recurrent networks and their computational complexity". In Chauvin, Yves; Rumelhart, David
E. (eds.). Backpropagation: Theory, Architectures, and Applications (https://books.google.co
m/books?id=B71nu3LDpREC). Psychology Press. ISBN 978-1-134-77581-1.
78. Schmidhuber, Jürgen (1989-01-01). "A Local Learning Algorithm for Dynamic Feedforward
and Recurrent Networks". Connection Science. 1 (4): 403–412.
doi:10.1080/09540098908915650 (https://doi.org/10.1080%2F09540098908915650).
S2CID 18721007 (https://api.semanticscholar.org/CorpusID:18721007).
79. Príncipe, José C.; Euliano, Neil R.; Lefebvre, W. Curt (2000). Neural and adaptive systems:
fundamentals through simulations (https://books.google.com/books?id=jgMZAQAAIAAJ).
Wiley. ISBN 978-0-471-35167-2.
80. Yann, Ollivier; Tallec, Corentin; Charpiat, Guillaume (2015-07-28). "Training recurrent
networks online without backtracking". arXiv:1507.07680 (https://arxiv.org/abs/1507.07680)
[cs.NE (https://arxiv.org/archive/cs.NE)].
81. Schmidhuber, Jürgen (1992-03-01). "A Fixed Size Storage O(n3) Time Complexity Learning
Algorithm for Fully Recurrent Continually Running Networks". Neural Computation. 4 (2):
243–248. doi:10.1162/neco.1992.4.2.243 (https://doi.org/10.1162%2Fneco.1992.4.2.243).
S2CID 11761172 (https://api.semanticscholar.org/CorpusID:11761172).
82. Williams, Ronald J. (1989). Complexity of exact gradient computation algorithms for
recurrent neural networks (http://citeseerx.ist.psu.edu/showciting?cid=128036) (Report).
Technical Report NU-CCS-89-27. Boston (MA): Northeastern University, College of
Computer Science.
83. Pearlmutter, Barak A. (1989-06-01). "Learning State Space Trajectories in Recurrent Neural
Networks" (http://repository.cmu.edu/cgi/viewcontent.cgi?article=2865&context=compsci).
Neural Computation. 1 (2): 263–269. doi:10.1162/neco.1989.1.2.263 (https://doi.org/10.116
2%2Fneco.1989.1.2.263). S2CID 16813485 (https://api.semanticscholar.org/CorpusID:1681
3485).
84. Hochreiter, Sepp; et al. (15 January 2001). "Gradient flow in recurrent nets: the difficulty of
learning long-term dependencies" (https://books.google.com/books?id=NWOcMVA64aAC).
In Kolen, John F.; Kremer, Stefan C. (eds.). A Field Guide to Dynamical Recurrent Networks.
John Wiley & Sons. ISBN 978-0-7803-5369-5.
85. Campolucci, Paolo; Uncini, Aurelio; Piazza, Francesco; Rao, Bhaskar D. (1999). "On-Line
Learning Algorithms for Locally Recurrent Neural Networks". IEEE Transactions on Neural
Networks. 10 (2): 253–271. CiteSeerX 10.1.1.33.7550 (https://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.33.7550). doi:10.1109/72.750549 (https://doi.org/10.1109%2F72.7505
49). PMID 18252525 (https://pubmed.ncbi.nlm.nih.gov/18252525).
86. Wan, Eric A.; Beaufays, Françoise (1996). "Diagrammatic derivation of gradient algorithms
for neural networks". Neural Computation. 8: 182–201. doi:10.1162/neco.1996.8.1.182 (http
s://doi.org/10.1162%2Fneco.1996.8.1.182). S2CID 15512077 (https://api.semanticscholar.or
g/CorpusID:15512077).
87. Campolucci, Paolo; Uncini, Aurelio; Piazza, Francesco (2000). "A Signal-Flow-Graph
Approach to On-line Gradient Calculation". Neural Computation. 12 (8): 1901–1927.
CiteSeerX 10.1.1.212.5406 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.212.
5406). doi:10.1162/089976600300015196 (https://doi.org/10.1162%2F08997660030001519
6). PMID 10953244 (https://pubmed.ncbi.nlm.nih.gov/10953244). S2CID 15090951 (https://a
pi.semanticscholar.org/CorpusID:15090951).
88. Gomez, Faustino J.; Miikkulainen, Risto (1999), "Solving non-Markovian control tasks with
neuroevolution" (http://www.cs.utexas.edu/users/nn/downloads/papers/gomez.ijcai99.pdf)
(PDF), IJCAI 99, Morgan Kaufmann, retrieved 5 August 2017
89. Syed, Omar (May 1995). "Applying Genetic Algorithms to Recurrent Neural Networks for
Learning Network Parameters and Architecture" (http://arimaa.com/arimaa/about/Thesis/).
M.Sc. thesis, Department of Electrical Engineering, Case Western Reserve University,
Advisor Yoshiyasu Takefuji.
90. Gomez, Faustino J.; Schmidhuber, Jürgen; Miikkulainen, Risto (June 2008). "Accelerated
Neural Evolution Through Cooperatively Coevolved Synapses" (http://dl.acm.org/citation.cf
m?id=1390681.1390712). Journal of Machine Learning Research. 9: 937–965.
91. Siegelmann, Hava T.; Horne, Bill G.; Giles, C. Lee (1995). "Computational Capabilities of
Recurrent NARX Neural Networks" (https://books.google.com/books?id=830-HAAACAAJ&p
g=PA208). IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics). 27
(2): 208–15. CiteSeerX 10.1.1.48.7468 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=
10.1.1.48.7468). doi:10.1109/3477.558801 (https://doi.org/10.1109%2F3477.558801).
PMID 18255858 (https://pubmed.ncbi.nlm.nih.gov/18255858).
92. Feltus, Christophe (2022). "Learning Algorithm Recommendation Framework for IS and
CPS Security: Analysis of the RNN, LSTM, and GRU Contributions". IGI International
Journal of Systems and Software Security and Protection (IJSSSP). 13 (1).
doi:10.4018/IJSSSP.293236 (https://doi.org/10.4018%2FIJSSSP.293236).
S2CID 247143453 (https://api.semanticscholar.org/CorpusID:247143453).
93. Metz, Cade (May 18, 2016). "Google Built Its Very Own Chips to Power Its AI Bots" (https://w
ww.wired.com/2016/05/google-tpu-custom-chips/). Wired.
94. Mayer, Hermann; Gomez, Faustino J.; Wierstra, Daan; Nagy, Istvan; Knoll, Alois;
Schmidhuber, Jürgen (October 2006). "A System for Robotic Heart Surgery that Learns to
Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on
Intelligent Robots and Systems. pp. 543–548. CiteSeerX 10.1.1.218.3399 (https://citeseerx.i
st.psu.edu/viewdoc/summary?doi=10.1.1.218.3399). doi:10.1109/IROS.2006.282190 (http
s://doi.org/10.1109%2FIROS.2006.282190). ISBN 978-1-4244-0258-8. S2CID 12284900 (htt
ps://api.semanticscholar.org/CorpusID:12284900).
95. Wierstra, Daan; Schmidhuber, Jürgen; Gomez, Faustino J. (2005). "Evolino: Hybrid
Neuroevolution/Optimal Linear Search for Sequence Learning" (https://www.academia.edu/
5830256). Proceedings of the 19th International Joint Conference on Artificial Intelligence
(IJCAI), Edinburgh: 853–858.
96. Petneházi, Gábor (2019-01-01). "Recurrent neural networks for time series forecasting".
arXiv:1901.00069 (https://arxiv.org/abs/1901.00069) [cs.LG (https://arxiv.org/archive/cs.LG)].
97. Hewamalage, Hansika; Bergmeir, Christoph; Bandara, Kasun (2020). "Recurrent Neural
Networks for Time Series Forecasting: Current Status and Future Directions". International
Journal of Forecasting. 37: 388–427. arXiv:1909.00590 (https://arxiv.org/abs/1909.00590).
doi:10.1016/j.ijforecast.2020.06.008 (https://doi.org/10.1016%2Fj.ijforecast.2020.06.008).
S2CID 202540863 (https://api.semanticscholar.org/CorpusID:202540863).
98. Graves, Alex; Schmidhuber, Jürgen (2005). "Framewise phoneme classification with
bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6):
602–610. CiteSeerX 10.1.1.331.5800 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=1
0.1.1.331.5800). doi:10.1016/j.neunet.2005.06.042 (https://doi.org/10.1016%2Fj.neunet.200
5.06.042). PMID 16112549 (https://pubmed.ncbi.nlm.nih.gov/16112549). S2CID 1856462 (ht
tps://api.semanticscholar.org/CorpusID:1856462).
99. Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). An Application of
Recurrent Neural Networks to Discriminative Keyword Spotting (http://dl.acm.org/citation.cf
m?id=1778066.1778092). Proceedings of the 17th International Conference on Artificial
Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag. pp. 220–229. ISBN 978-3-
540-74693-5.
100. Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey E. (2013). "Speech recognition
with deep recurrent neural networks". 2013 IEEE International Conference on Acoustics,
Speech and Signal Processing. pp. 6645–6649. arXiv:1303.5778 (https://arxiv.org/abs/1303.
5778). Bibcode:2013arXiv1303.5778G (https://ui.adsabs.harvard.edu/abs/2013arXiv1303.57
78G). doi:10.1109/ICASSP.2013.6638947 (https://doi.org/10.1109%2FICASSP.2013.663894
7). ISBN 978-1-4799-0356-6. S2CID 206741496 (https://api.semanticscholar.org/CorpusID:2
06741496).
101. Anumanchipalli, Gopala K.; Chartier, Josh; Chang, Edward F. (24 April 2019). "Speech
synthesis from neural decoding of spoken sentences" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9714519). Nature. 568 (7753): 493–498. Bibcode:2019Natur.568..493A (https://ui.adsabs.harvard.edu/abs/2019Natur.568..493A). doi:10.1038/s41586-019-1119-1 (https://doi.org/10.1038%2Fs41586-019-1119-1). ISSN 1476-4687 (https://www.worldcat.org/issn/1476-4687). PMC 9714519 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9714519). PMID 31019317 (https://pubmed.ncbi.nlm.nih.gov/31019317). S2CID 129946122 (https://api.semanticscholar.org/CorpusID:129946122).
102. Moses, David A.; Metzger, Sean L.; Liu, Jessie R.; Anumanchipalli, Gopala K.; Makin, Joseph G.;
Sun, Pengfei F.; Chartier, Josh; et al. (15 July 2021). "Neuroprosthesis for Decoding Speech in a
Paralyzed Person with Anarthria". New England Journal of Medicine. 385 (3): 217–227.
doi:10.1056/NEJMoa2027540 (https://doi.org/10.1056/NEJMoa2027540).
103. Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short
Term Memory Networks for Anomaly Detection in Time Series" (https://www.elen.ucl.ac.be/P
roceedings/esann/esannpdf/es2015-56.pdf) (PDF). European Symposium on Artificial
Neural Networks, Computational Intelligence and Machine Learning – ESANN 2015.
104. "Papers with Code - DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video
Reconstruction" (https://paperswithcode.com/paper/deephs-hdrvideo-deep-high-speed-high-
dynamic). paperswithcode.com. Retrieved 2022-10-13.
105. Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, Jürgen (2002). "Learning precise timing
with LSTM recurrent networks" (http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf)
(PDF). Journal of Machine Learning Research. 3: 115–143.
106. Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). Learning the Long-Term Structure of the
Blues. Artificial Neural Networks – ICANN 2002. Lecture Notes in Computer Science.
Vol. 2415. Berlin, Heidelberg: Springer. pp. 284–289. CiteSeerX 10.1.1.116.3620 (https://cite
seerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.116.3620). doi:10.1007/3-540-46084-5_47
(https://doi.org/10.1007%2F3-540-46084-5_47). ISBN 978-3-540-46084-8.
107. Schmidhuber, Jürgen; Gers, Felix A.; Eck, Douglas (2002). "Learning nonregular languages:
A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–
2041. CiteSeerX 10.1.1.11.7369 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.
11.7369). doi:10.1162/089976602320263980 (https://doi.org/10.1162%2F08997660232026
3980). PMID 12184841 (https://pubmed.ncbi.nlm.nih.gov/12184841). S2CID 30459046 (http
s://api.semanticscholar.org/CorpusID:30459046).
108. Gers, Felix A.; Schmidhuber, Jürgen (2001). "LSTM Recurrent Networks Learn Simple
Context Free and Context Sensitive Languages" (ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf)
(PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769
(https://doi.org/10.1109%2F72.963769). PMID 18249962 (https://pubmed.ncbi.nlm.nih.gov/1
8249962).
109. Pérez-Ortiz, Juan Antonio; Gers, Felix A.; Eck, Douglas; Schmidhuber, Jürgen (2003).
"Kalman filters improve LSTM network performance in problems unsolvable by traditional
recurrent nets". Neural Networks. 16 (2): 241–250. CiteSeerX 10.1.1.381.1992 (https://citese
erx.ist.psu.edu/viewdoc/summary?doi=10.1.1.381.1992). doi:10.1016/s0893-
6080(02)00219-8 (https://doi.org/10.1016%2Fs0893-6080%2802%2900219-8).
PMID 12628609 (https://pubmed.ncbi.nlm.nih.gov/12628609).
110. Graves, Alex; Schmidhuber, Jürgen (2009). "Offline Handwriting Recognition with
Multidimensional Recurrent Neural Networks". Advances in Neural Information Processing
Systems. Vol. 22 (NIPS'22). Vancouver (BC): MIT Press. pp. 545–552.
111. Graves, Alex; Fernández, Santiago; Liwicki, Marcus; Bunke, Horst; Schmidhuber, Jürgen
(2007). Unconstrained Online Handwriting Recognition with Recurrent Neural Networks (htt
p://dl.acm.org/citation.cfm?id=2981562.2981635). Proceedings of the 20th International
Conference on Neural Information Processing Systems. NIPS'07. Curran Associates Inc.
pp. 577–584. ISBN 978-1-60560-352-0.
112. Baccouche, Moez; Mamalet, Franck; Wolf, Christian; Garcia, Christophe; Baskurt, Atilla
(2011). Salah, Albert Ali; Lepri, Bruno (eds.). "Sequential Deep Learning for Human Action
Recognition". 2nd International Workshop on Human Behavior Understanding (HBU).
Lecture Notes in Computer Science. Amsterdam, Netherlands: Springer. 7065: 29–39.
doi:10.1007/978-3-642-25446-8_4 (https://doi.org/10.1007%2F978-3-642-25446-8_4).
ISBN 978-3-642-25445-1.
113. Hochreiter, Sepp; Heusel, Martin; Obermayer, Klaus (2007). "Fast model-based protein
homology detection without alignment" (https://doi.org/10.1093%2Fbioinformatics%2Fbtm24
7). Bioinformatics. 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247 (https://doi.org/1
0.1093%2Fbioinformatics%2Fbtm247). PMID 17488755 (https://pubmed.ncbi.nlm.nih.gov/17
488755).
114. Tax, Niek; Verenich, Ilya; La Rosa, Marcello; Dumas, Marlon (2017). Predictive Business
Process Monitoring with LSTM neural networks. Proceedings of the International
Conference on Advanced Information Systems Engineering (CAiSE). Lecture Notes in
Computer Science. Vol. 10253. pp. 477–492. arXiv:1612.02130 (https://arxiv.org/abs/1612.0
2130). doi:10.1007/978-3-319-59536-8_30 (https://doi.org/10.1007%2F978-3-319-59536-8_
30). ISBN 978-3-319-59535-1. S2CID 2192354 (https://api.semanticscholar.org/CorpusID:21
92354).
115. Choi, Edward; Bahadori, Mohammad Taha; Schuetz, Andy; Stewart, Walter F.; Sun, Jimeng
(2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks" (http://proceed
ings.mlr.press/v56/Choi16.html). Proceedings of the 1st Machine Learning for Healthcare
Conference. 56: 301–318. arXiv:1511.05942 (https://arxiv.org/abs/1511.05942).
Bibcode:2015arXiv151105942C (https://ui.adsabs.harvard.edu/abs/2015arXiv151105942C).
PMC 5341604 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5341604). PMID 28286600
(https://pubmed.ncbi.nlm.nih.gov/28286600).
116. "Artificial intelligence helps accelerate progress toward efficient fusion reactions" (https://ww
w.princeton.edu/news/2017/12/15/artificial-intelligence-helps-accelerate-progress-toward-eff
icient-fusion-reactions). Princeton University. Retrieved 2023-06-12.

Further reading
Mandic, Danilo P. & Chambers, Jonathon A. (2001). Recurrent Neural Networks for
Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 978-0-471-49517-
8.

External links
Recurrent Neural Networks (http://www.idsia.ch/~juergen/rnn.html) with over 60 RNN papers
by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research
Elman Neural Network implementation (http://jsalatas.ictpro.gr/weka) for WEKA