Analogue Dynamics Engine Design
Acknowledgements
Abstract
This report outlines the design and implementation of the Analogue Dynamics
Engine (ADE). The ADE is a physics engine constructed from a hybrid, ana-
logue and digital, computer. Software physics engines are becoming increas-
ingly common in computer games, and the ADE was designed as a hardware
equivalent to these software engines. Analogue computers, although currently
rare, have useful properties such as their ability to evaluate functions in real-
time. The physics engine exploits this functionality while using digital compo-
nents to provide reconfigurability.
The core hybrid computer was constructed by connecting twenty-nine custom-designed reconfigurable analogue cells to thirty-two bus lines, using programmable interconnect. Each cell can perform inversion, integration, addition
and multiplication. At the periphery of this computer lie two ADCs and two
DACs, so that the hybrid computer may provide a digital interface.
In order to make the engine suitable for use with games, it was decided
to make simulations multiplexable, so that multiple simulations could be run
concurrently. This requires simulations to be executed faster than real-time.
Additionally, state must be saved and restored, which was achieved through
replicating the capacitors.
Finally, this report analyses the viability of this project for use in computer
games. Ultimately, it was determined that an analogue computer could be-
come a viable replacement for the software physics engines in use today. In
fact, it offers benefits that cannot be obtained using today's software physics
engines.
Contents
Acknowledgements ii
Abstract iii
Contents iv
List of Figures ix
CD Contents xiv
I Background 1
1 Introduction 2
1.1 Project Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Physics Engine . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Dedicated Hardware . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Analogue Computer . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 Hybrid Computer . . . . . . . . . . . . . . . . . . . . . . 3
1.1.5 Project Definition . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Physics Engines 5
2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Available Engines . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4.1 Engineering Analyses . . . . . . . . . . . . . . . . . . . . 7
2.4.2 Computer Games . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5.1 Realism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5.2 Expertise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5.3 Expense . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Analogue Computers 10
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3.2 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.1 Real-Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.2 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4.3 Potentially Infinite Accuracy . . . . . . . . . . . . . . . . 15
3.5 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5.1 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5.2 Inflexibility . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.5.3 Not Dynamically Reconfigurable . . . . . . . . . . . . . . 16
4 Hybrid Computers 17
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4.1 Reconfigurability . . . . . . . . . . . . . . . . . . . . . . . 18
4.4.2 Leveraging Analogue and Digital . . . . . . . . . . . . . 19
4.5 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.5.1 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5 Analogue 20
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.2.1 Wireless Communications . . . . . . . . . . . . . . . . . . 20
5.2.2 Wireline Communications . . . . . . . . . . . . . . . . . . 22
5.2.3 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.2.4 Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.2.5 Microprocessors and Memories . . . . . . . . . . . . . . . 23
5.2.6 Game Controllers . . . . . . . . . . . . . . . . . . . . . . . 23
5.2.7 Zoom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6 Operational Amplifiers 24
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.3 Terminals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.4 Ideal Operational Amplifier . . . . . . . . . . . . . . . . . . . . . 26
6.5 Practical Operational Amplifier . . . . . . . . . . . . . . . . . . . 27
6.6 Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6.1 Inverter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6.2 Multiplier and Divider . . . . . . . . . . . . . . . . . . . . 28
6.6.3 Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.6.4 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.6.5 Differentiator . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.6.6 Logarithm Calculator . . . . . . . . . . . . . . . . . . . . 32
6.6.7 Antilogarithm Calculator . . . . . . . . . . . . . . . . . . 33
6.6.8 Further Operations . . . . . . . . . . . . . . . . . . . . . . 33
8 Prototype 1: Mass-Spring-Damper 38
8.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
8.1.1 Physics of Mass-Spring . . . . . . . . . . . . . . . . . . . 38
8.1.2 Physics of Mass-Spring-Damper . . . . . . . . . . . . . . 40
8.2 Voltage Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.2.1 Voltage Source . . . . . . . . . . . . . . . . . . . . . . . . 41
8.2.2 Sinusoidal Voltage Source . . . . . . . . . . . . . . . . . . 41
8.3 Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
8.3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
8.3.2 Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.3.3 Type Conversion . . . . . . . . . . . . . . . . . . . . . . . 43
8.3.4 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.4 Core Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.4.1 Resistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.4.2 Capacitor . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.4.3 Operational Amplifier . . . . . . . . . . . . . . . . . . . . 45
8.5 Derived Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
8.5.1 Inverter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
8.5.2 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
8.5.3 Differentiator . . . . . . . . . . . . . . . . . . . . . . . . . 50
8.6 Mass-Spring-Damper . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.7 Simplified Mass-Spring-Damper . . . . . . . . . . . . . . . . . . 53
10.5 Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.6 Hybrid Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.7 Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.8 Cell Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
10.9 Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
10.10 DAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
10.11 ADC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
10.12 Digitised Computer . . . . . . . . . . . . . . . . . . . . . . . . . 78
10.13 ADE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
11 Multiplexing 81
11.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
11.1.1 Key Concept . . . . . . . . . . . . . . . . . . . . . . . . . . 81
11.1.2 Multiplexing Suggestions . . . . . . . . . . . . . . . . . . 82
11.1.2.1 Suggestion 1: Iterating Outputs . . . . . . . . . 82
11.1.2.2 Suggestion 2: Simulating Change . . . . . . . . 83
11.1.2.3 Suggestion 3: Capacitor Replication . . . . . . . 84
11.2 Capacitor Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
11.3 Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11.4 Cell Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
11.5 Computer and Digitised Computer . . . . . . . . . . . . . . . . . 86
11.6 Operation Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . 86
11.7 Operation Decoders . . . . . . . . . . . . . . . . . . . . . . . . . . 86
11.8 Control Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
11.9 ADE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
11.10 Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
III Conclusions 96
13 Synthesis 97
13.1 Discrete Components . . . . . . . . . . . . . . . . . . . . . . . . . 97
13.2 ASICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
13.3 FPAAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
13.3.1 Zetex Semiconductors TRAC . . . . . . . . . . . . . . . . 98
13.3.1.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . 99
13.3.2 Lattice Semiconductor ispPAC . . . . . . . . . . . . . . . 99
13.3.2.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . 99
13.3.3 Anadigm FPAAs . . . . . . . . . . . . . . . . . . . . . . . 99
13.3.3.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . 100
13.3.4 Other FPAAs . . . . . . . . . . . . . . . . . . . . . . . . . 101
13.3.5 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . 101
13.4 FPMAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
14 PC Interfaces 102
14.1 Peripheral Buses . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
14.1.1 PCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
14.1.2 PCIe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
14.1.3 Motherboard . . . . . . . . . . . . . . . . . . . . . . . . . 103
14.2 Graphics Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
14.3 External Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
14.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15 Analysis 105
15.1 Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.2 Die Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
15.3 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
15.4 Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
15.4.1 ADC Conversion Rate . . . . . . . . . . . . . . . . . . . . 110
15.4.2 Operational Amplifier Bandwidth . . . . . . . . . . . . . 110
15.4.3 Execution Speed . . . . . . . . . . . . . . . . . . . . . . . 111
16 Conclusions 113
16.1 Knowledge Acquired . . . . . . . . . . . . . . . . . . . . . . . . . 113
16.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
16.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Bibliography 116
List of Figures
8.1 Mass-spring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.2 Behaviour of a mass-spring . . . . . . . . . . . . . . . . . . . . . 39
8.3 Mass-spring-damper . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.4 Behaviour of a mass-spring-damper . . . . . . . . . . . . . . . . 40
8.5 Operational amplifier in negative feedback configuration . . . . 45
8.6 Schematic of inverter . . . . . . . . . . . . . . . . . . . . . . . . . 46
8.7 Comparison of inverter's behavioural and structural models . . 47
(a) Original . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
(b) Magnified . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8.8 Schematic of integrator . . . . . . . . . . . . . . . . . . . . . . . . 48
8.9 Comparison of integrator's behavioural and structural models . 49
(a) Original, using a sinusoidal waveform . . . . . . . . . . . 49
(b) Magnified, using a sinusoidal waveform . . . . . . . . . . 49
(c) Original, using a square waveform . . . . . . . . . . . . . . 49
(d) Magnified, using a square waveform . . . . . . . . . . . . 50
8.10 Schematic of differentiator . . . . . . . . . . . . . . . . . . . . . . 50
8.11 Differentiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
(a) Behavioural . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
(b) Structural . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.12 Mass-spring-damper analogue computers . . . . . . . . . . . . . 54
(a) Original . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
(b) Optimised . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
8.13 Comparison of mass-spring-damper's behavioural and structural models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
(a) Displacement in . . . . . . . . . . . . . . . . . . . . . . . . . 56
(b) Displacement out . . . . . . . . . . . . . . . . . . . . . . . . 56
(c) Displacement out magnified . . . . . . . . . . . . . . . . . 56
(d) Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
(e) Velocity magnified . . . . . . . . . . . . . . . . . . . . . . . 57
List of Code
CD Contents
Part I

Background
Chapter 1
Introduction
This chapter defines the key idea behind the project. The chapter continues by describing the project's fundamental elements, namely physics engines, dedicated hardware, analogue computers and hybrid computers, while outlining
the progression of the idea behind the project. After summarising the objective,
the main advantages are highlighted.
created for CPU intensive software, such as Graphics Processing Units (GPUs)
and MPEG decoders. Therefore, implementing a physics engine from dedi-
cated hardware would be beneficial.
Digital hardware would be the accepted choice for implementing such ded-
icated hardware. However, digital hardware operates by successively applying
operations to data. This means that, for example, finding the second derivative
of a function would take twice as long as finding the first derivative, unless
some optimisation was utilised. Since physics is based on complex mathe-
matics, this successive application of operations would become a bottleneck.
Consequently, digital hardware may not be the most suitable approach for con-
structing a physics engine. A more suitable approach could be to implement
the hardware as an analogue computer.
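To make the contrast concrete, the following sketch shows a small piece of digital hardware that computes first and second finite differences of a sampled input. It is purely illustrative and not part of the ADE design; all names are invented for this example. Because the second difference is formed from the already-registered first difference, it becomes available one clock cycle later, mirroring the successive application of operations described above.

library IEEE;
use IEEE.std_logic_1164.all;

-- Illustrative only: first and second finite differences of a sampled input.
entity difference_chain is
  port (clk    : in  std_ulogic;
        sample : in  integer;
        d1     : out integer;   -- first difference
        d2     : out integer);  -- second difference, ready one cycle later
end entity difference_chain;

architecture rtl of difference_chain is
  signal prev_sample, d1_reg, prev_d1 : integer := 0;
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      d1_reg      <= sample - prev_sample;  -- first difference
      prev_sample <= sample;
      d2          <= d1_reg - prev_d1;      -- second difference of the delayed d1
      prev_d1     <= d1_reg;
    end if;
  end process;
  d1 <= d1_reg;
end architecture rtl;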
1.2 Advantages
The primary advantage obtained by the design and implementation of such a
physics engine is that it should operate substantially faster than the software
physics engines in use today. This speed gain is achieved primarily through
the inherent real-time behaviour and parallel nature of analogue computers.
These advantages will be discussed further in Section 3.4.1 and Section 3.4.2
respectively, after analogue computers have been discussed in greater depth.
Chapter 2
Physics Engines
2.1 Overview
A physics engine or physics software development kit (SDK) is a middleware
solution that performs physics calculations on behalf of other software, to sim-
ulate realistically the behaviour of objects. Physics engines may be integrated
with software that requires physics calculations to be performed.
Traditionally, physics engines have modelled rigid body dynamics, which
describe the interactions between rigid bodies or solid objects. These are typ-
ically modelled by ordinary differential equations (ODEs), which are capable
of expressing the time-varying behaviour of a system. Recently, physics en-
gines have expanded their abilities beyond rigid body dynamics to include
related fields. For this project, rigid body dynamics is the only field of concern,
but other areas could easily be added at a later stage. Physics engines typi-
cally work with Newtonian physics, since the extra accuracy provided by Ein-
steinian physics is unlikely to be noticed but substantially increases the com-
plexity of the calculations.
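As a hedged illustration of how such an ODE looks when written directly for an analogue or hybrid solver, the sketch below models a one-dimensional Newtonian point mass in VHDL-AMS, the language used later in this report. The entity, generic and port names are assumptions made for this example only.

-- Illustrative sketch: a one-dimensional Newtonian point mass as an ODE.
entity point_mass is
  generic (m : real := 1.0);                  -- mass in kilograms
  port (quantity force        : in  real;     -- applied force in newtons
        quantity displacement : out real);    -- resulting displacement in metres
end entity point_mass;

architecture behavioural of point_mass is
  quantity x, v : real := 0.0;                -- internal displacement and velocity
begin
  x'dot == v;                                  -- dx/dt = v
  v'dot == force / m;                          -- Newton's second law: dv/dt = F/m
  displacement == x;
end architecture behavioural;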
2.2 History
Observing the growing use of physics in games, MathEngine released the first
physics engine, the Fast Dynamics Toolkit in 1998. However, the engine of-
ten created jittering objects in resting contact with a flat surface. According to
Eberly [1, p. 4], this resulted from inaccuracies arising from the application of
numerical methods to differential equations and from underdeveloped colli-
sion detection algorithms.
To resolve this problem, Hugh Reynolds and Dr Steven Collins founded
Telekinesys in 1998. After renaming the company Havok.com, they released
the Havok Game Dynamics SDK. This was the first physics engine to prove
that sophisticated physics simulation could be achieved using consumer level
CPUs.
Following the trend already established by GPUs, AGEIA announced
PhysX [2], the first physics processing unit (PPU), on 7 March 2005. The PPU
is capable of simulating 32,000 rigid bodies, compared to the roughly 200 typical for a CPU. It can handle 40,000 to 50,000 particles when simulating particle dynamics. If the PPU
were unavailable, the physics calculations would be performed in software by
the NovodeX engine, in the same way that software would be used to render
graphics if no GPU were available. AGEIA intends to sell its PPU integrated
circuits (ICs) to companies who will design and manufacture suitable boards,
similar to what NVIDIA does with its GPUs.
In addition to the commercial engines outlined above, there also exist some
free, open source engines. However, these are typically less sophisticated.
The primary open source engine is the Open Dynamics Engine (ODE) [8].
This supports only rigid body dynamics, but its realistic simulations have led
to its use in a relatively large number of games.
DynaMechs (Dynamics of Mechanisms) [9] allows for rigid body simula-
tion, with a particular emphasis on articulated moving objects. However, it
appears to be defunct as no updates have been made since July 2001.
AERO (Animation Editor for Realistic Object movements) [10] also offers
rigid body dynamics, but it too appears to be defunct. No updates have been
made since February 2001, but the last note from the developers had promised
a complete rewrite of the engine.
In addition to the general purpose physics engines outlined above, there
exist commercial and free physics engines designed for niche markets such as
the simulation of only vehicles or robots.
Finally, due to the current demand for physics, some companies have
started working on hardware physics engines or PPUs. The first of these,
PhysX, will be available shortly. PhysX is capable of simulating rigid body
dynamics, universal collision detection, finite element analysis, soft body dy-
namics, fluid dynamics, hair and clothing. More details are outlined in Sec-
tion 2.2.
As can be seen from this discussion, a large number of physics engines are
currently available. This highlights the growing demand for these engines.
2.4 Applications
There are two primary applications of physics engines: engineering analysis
simulations and computer games. Both of these are discussed below.
2.5 Advantages
The three primary advantages that may be obtained using a physics engine are
increased realism, leveraging of expertise and reduced expenditure. Each of
these is discussed in the following sections.
2.5.1 Realism
As outlined in Section 2.4.2, users are demanding increasing realism from com-
puter games. Realism and accuracy are extremely important in engineering
analyses (Section 2.4.1). It is extremely difficult to bluff this realism without
a physics foundation. Reinforcing this view, Gary Powell of MathEngine plc stated: "The illusion and immersive experience of the virtual world, so carefully built up with high polygon models, detailed textures and advanced lighting, is so often shattered as soon as objects begin to move and interact." [11, p. x]
Based on these demands, many software developers now use physics engines
to increase the realism of their products.
2.5.2 Expertise
The majority of game developers do not have a substantial knowledge of
physics. Therefore, it is often desirable to use an existing physics engine in-
stead of replicating the relevant parts inside a product under development. If
an existing physics engine is used, the end developer is unburdened from hav-
ing to understand the underlying physics. Moreover, the physics implemen-
tation is typically left to domain experts. These experts should have a greater
knowledge of the relevant physics, thereby producing a superior physics en-
gine. Therefore, physics engines are commonly used in order to leverage this
expertise.
2.5.3 Expense
Due to the typical developer's lack of expertise in physics (Section 2.5.2), it is
often time-consuming and expensive for companies to implement their own
physics calculations.
Table 2.1: Costs arising from implementing in-house physics software [12, p. 23]
Chapter 3

Analogue Computers
3.1 Overview
An analogue computer is "a computer which operates with numbers represented by some physically measurable quantity, such as weight, length, voltage, etc." [13, p. 432].
It is possible to construct analogue computers that process a number of nat-
ural analogue phenomena such as hydraulics or mechanics, but the majority
of analogue computers process only voltages. Consequently, all further discus-
sion will be restricted to voltage processing or electronic analogue computers.
Analogue computers are based on mathematical operations. Therefore, un-
like digital computers, they may be used to directly model equations that ad-
here to a number of restrictions. Consequently, the analogue computer is often
regarded as a more natural tool for evaluating equations than a digital com-
puter.
3.2 History
The history of analogue computers could be said to go back into antiquity,
since devices like the slide rule can be considered to be analogue computers.
However, as stated in Section 3.1, only electronic analogue computers will be
considered here.
Between 1937 and 1938, George A Philbrick constructed the first electronic
analogue computer while working at The Foxboro Company [14, pp. 131–135]. However, he referred to it as an "automatic control analyzer" as opposed to
an analogue computer. It consisted of a hardwired analogue computer, with
was never commercialised. The emphasis of the project was on high performance and high precision. This led to the development of the drift-free DC
operational amplifier, which became a requisite component for all subsequent
analogue computers.
Both projects had enormous influence on the future of the electronic ana-
logue computer. According to Small, "Projects Cyclone and Typhoon were instrumental in establishing the technological basis for the postwar general-purpose electronic analogue computer industry in the USA." [17, p. 89]
In the UK, the Royal Aircraft Establishment (RAE) had a number of ana-
logue computers built for missile simulation. These included the GEPUS
(General-Purpose Simulator), TRIDAC (Three-Dimensional Analogue Com-
puter) and G-PAC (General-Purpose Analogue Computer).
In the late 1940s, George A Philbrick Researches, Inc (GAP/R) constructed
the first commercial analogue computers [15, p. 12]. Like POLYPHEMUS but
unlike the first operational amplifier based computers, these were repetitive
operation or rep op machines. These machines repeated their simulations
much faster than real-time, so that a steady waveform could be displayed on
an oscilloscope. This allowed parameters to be adjusted and the effects seen
instantly. However, these machines failed to achieve commercial success.
Observing the continued reliance of the militaries of the US and UK on ana-
logue computers, commercial interest grew substantially throughout the 1950s.
This led to the formation of a substantial number of companies manufactur-
ing general purpose analogue computers in the US, UK, USSR, Japan, West
Germany and France. In 1955, ninety-five analogue computer installations ex-
isted in the US. However, the late 1950s saw a shift in the market. Repetitive
operation or rep op machines started to gain popularity, eroding the mar-
ket share of single shot analogue computers. Although GAP/R had always
manufactured repetitive operation analogue computers, a vast improvement
in electronic component design finally made these computers competitive.
The early 1960s saw a decline in the number of analogue computers, as hy-
brid computers began to grow in popularity (Section 4.2). However, the demise
of analogue computers did not occur until the late 1970s, when the increasing
speed and power of digital computers made analogue computers a less viable
alternative. Digital computers were then able to match the real-time behav-
iour of analogue computers for many operations and offered greater flexibility
and ease of use for many applications. Additionally, substantial work in the
development of algorithms and numerical analysis combined to improve the
accuracy of digital computers to be greater than that offered by analogue com-
puters [18, p. 59].
3.3 Applications
Analogue computers were used for two primary purposes: simulation and
control. Both of these are discussed below.
3.3.1 Simulation
Analogue computers were used to simulate various systems, modelling enti-
ties using representations or analogues of those entities. These systems were
3.3.2 Control
Analogue computers were also used in control systems. A control system is "a group of components assembled in such a way as to regulate an energy input to achieve the desired output" [20, p. 2]. Analogue computers can be used to control the behaviour of a closed loop system, whereby the system's output is
used to influence its input.
Analogue computers were regularly used in the control systems of aircraft,
for example in automatic pilot control systems [21, pp. 94–97]. Such systems
were designed to keep the aircraft flying on a fixed compass bearing. The con-
trol systems monitored the flight of aircraft for deviations and controlled the
rudders to perform corrections as necessary. Therefore, these closed loop con-
trol systems were able to keep the flight path of aircraft straight. More sophisti-
cated analogue computer control systems were found on the Concorde, which
used analogue computers to implement fly-by-wire.
3.4 Advantages
An analogue computer operates in real-time, with an inherent parallelism, pro-
viding results with potentially infinite accuracy. Each of these benefits is de-
scribed in sequence.
3.4.1 Real-Time
When a digital computer performs a calculation, there is a delay called the
propagation delay before the result is obtained. This delay is primarily in-
fluenced by the complexity of the calculation being performed. As computing
power increases, this propagation delay is gradually reduced and becomes less
of a problem. However, complex calculations, such as the computation of large
prime numbers, still require large amounts of computational power and time.
The elimination of this propagation delay would be of great benefit.
Analogue computers eliminate this propagation delay, as they operate in
real-time. While this is useful for even a single calculation, the real advantage
becomes apparent when the same value must be recalculated at myriad time
steps. A single calculation may be left running for a specified period with
samples taken at the necessary intervals.
Suppose a variable must be differentiated with regard to time, using one
millisecond intervals, until one second has elapsed. A digital computer must
3.4.2 Parallelism
Parallelism is both a widely researched and widely implemented method for
gaining greater performance from digital hardware. However, for analogue
computers, parallelism needs neither to be researched nor implemented, for it
exists by default.
As outlined in Section 3.4.1, analogue computers operate in real-time. This
real-time behaviour permits analogue computers to achieve greater parallelism
than their digital counterparts.
Figure 3.1 shows a block diagram of a sample analogue computer. This
computer integrates its two inputs and adds the two resultant signals to gen-
erate the output. In this computer, the results would be outputted from both
integrators simultaneously. Admittedly, two identical integrators in digital hardware could also work in this way. However, the subsequent addition will
happen in effectively zero time since its output will be generated concurrently
with the outputs of the integrators. In other words, the adder will instanta-
neously add the outputs of the integrators. To recapitulate, the two integra-
tions and the addition are performed in parallel. In contrast, digital hardware
would typically perform the integrations first, with the addition performed
subsequently, leading to theoretically greater delays.
Figure 3.1: Block diagram of a sample analogue computer
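As a hedged sketch (not taken from the report's own code), the computer of Figure 3.1 could be written in VHDL-AMS as three simultaneous statements. The analogue solver evaluates all of them together, which is exactly the parallelism described above; all names are illustrative.

-- Illustrative sketch of the computer in Figure 3.1.
entity figure_3_1_computer is
  port (quantity in1, in2 : in  real;
        quantity output   : out real);
end entity figure_3_1_computer;

architecture behavioural of figure_3_1_computer is
  quantity int1, int2 : real := 0.0;          -- outputs of the two integrators
begin
  int1'dot == in1;                             -- first integrator
  int2'dot == in2;                             -- second integrator
  output   == int1 + int2;                     -- adder, solved concurrently
end architecture behavioural;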
3.5 Disadvantages
Analogue computers are rare today. This indicates that analogue computers
have disadvantages. The main issues are noise, inflexibility and that they are
not dynamically reconfigurable. This section discusses each of these in turn
and highlights how they are unproblematic for the project's physics engine.
3.5.1 Noise
Theoretical analogue computers offer infinite accuracy (Section 3.4.3). How-
ever, practical analogue computers cannot realise this ideal.
Analogue computers, like all analogue devices, are affected by noise. Every
analogue component will introduce artefacts on waveforms, as a side effect of that component's primary function. This is one of the primary reasons for the
current lack of interest in analogue components. Meanwhile, digital compo-
nents use only two voltage levels. An erroneous voltage can be corrected to
the nearest permissible voltage. Furthermore, adding error detection and cor-
rection circuits to digital computers is unproblematic, but adding such circuits
to analogue computers is relatively difficult.
There are, however, various techniques for reducing noise in analogue com-
puters. The most successful technique is to use higher quality and, conse-
quently, more expensive components. Additionally, there are a number of de-
sign rules that attempt to reduce noise. One such design rule is specified in
Section 6.6.5.
As outlined in Section 2.4.2, the primary application for the project's physics engine is computer games. High accuracy is unimportant for games. Objects must only appear realistic; their behaviour does not need to be entirely accurate. In the project's physics engine, any deviations were, as predicted,
well within the limits of human perception. Therefore, for this physics engine,
noise was almost completely disregarded.
3.5.2 Inflexibility
Analogue computers are inflexible in that they are limited in the variety of
functions that they can perform. For computation on an analogue computer,
a program must have a mathematical basis. In comparison, digital computers
only require programs to have a Boolean algebraic basis, substantially increas-
ing the variety of programs that can be created. For example, a program based
on string manipulation could be expressed in Boolean algebra, but not in tra-
ditional mathematics. This inflexibility was one of the principal reasons why
analogue computers have decreased in popularity.
Physics is based on mathematics. Therefore, the inflexibility of analogue
computers did not restrict the project's physics engine. If additional functional-
ity were desired, the digital circuits of hybrid computers could be used to sup-
plement the capabilities of analogue computers, as discussed in Section 4.4.2.
Chapter 4

Hybrid Computers
4.1 Overview
A hybrid computer is a computer that consists of analogue and digital compu-
tational elements. There is a cornucopia of definitions describing what exactly
constitutes a hybrid computer, but for the purposes of this report, a hybrid
computer is defined to be an analogue computer making use of some digital
components.
The construction of such a system is not trivial, since analogue and digital
are extremely dissimilar. The outputs generated by analogue components are
continuous, while the outputs generated by digital components are discontin-
uous.
4.2 History
The first hybrid systems constructed consisted of an existing analogue com-
puter connected to an existing digital computer. They were not entirely new
systems but the interconnection of two readily available systems.
The expansion of the US intercontinental ballistic missile (ICBM) pro-
gramme in 1954, followed by the space race of the 1960s demanded greater
computational power. Greater computational power meant that both analogue
computers and their programs would substantially increase in size, making
programming difficult and error-prone [17, p. 239]. Hybrid computers were
proposed as the solution.
Convair Astronautics designed the first hybrid system in 1954, to perform
simulation studies for the Atlas ICBM [15, p. 13]. It consisted of an Electronic
Associates, Inc (EAI) PACE analogue computer connected to an IBM 704 digital
computer, via a converter called an Add-a-Verter manufactured by Epsco, Inc.
In 1955, Ramo-Wooldridge Corporation developed a second hybrid system,
for the same purpose as Convair's. It also used an Add-a-Verter and a PACE
analogue computer, but used a UNIVAC 1103A for its digital computation.
Commercially available computers typically used designs that were more
primitive. For example, the HYDAC 2000 added some digital control elements
to an analogue computer. It was much later before more sophisticated designs
became available.
Despite the quantity of hybrid computers constructed, there was often dis-
agreement as to the best way to unite analogue and digital, leading to a multi-
tude of esoteric designs. For example, the TRICE system designed by Packard
Bell for NASA's spaceflight simulations used a pulse frequency modulated sig-
nal as the information carrier, whereas most used conventional signals [18, p.
59].
Hybrid computers declined around the late 1970s during the demise of ana-
logue computers (Section 3.2). After analogue computers were deemed obso-
lete, many believed the analogue component of the hybrid computer was a
hindrance and that a digital computer was a better investment. Furthermore,
it became increasingly obvious that hybrid computers were losing their niche
as the abilities of digital computers improved [17, p. 264].
4.3 Applications
Hybrid computers can potentially be used where either analogue or digital
computers are used. However, they were typically used for simulations that
needed to model elements too complex for pure analogue computers. In par-
ticular, hybrid computers were often used for scientific applications, in contrast
to pure analogue computers, which were often used only for engineering ap-
plications.
For example, Professor Vincent Rideout from the University of Wisconsin, in association with NASA, modelled the cardiovascular respiratory system of the human body in 1972 [23, pp. 10–12]. This simulation used 120 differential
equations to simulate the human heart, circulatory system, lungs and control
systems.
4.4 Advantages
The advantages associated with hybrid computers are their ease of reconfig-
urability and the way they unite the advantages of analogue and digital. These
advantages are outlined below.
4.4.1 Reconfigurability
The greatest advantage attained through the fusion of analogue and digital
computing elements is reconfigurability. As outlined in Section 3.5.3, pure ana-
logue computers are not reconfigurable.
4.5 Disadvantages
Although hybrid computers solve many of the issues arising from analogue
computers, the problem of noise remains partially unsolved. This issue is ex-
amined below.
4.5.1 Noise
The analogue components in a hybrid computer will suffer from noise. More-
over, since results generated by these analogue components will be used by
digital components, the digital components will also essentially be affected by
noise. A more thorough analysis of noise is provided in Section 3.5.1.
As with analogue computers, this noise is likely to prove unproblematic for the project's physics engine since it is unlikely to reduce the perceived accuracy
of the engine.
Chapter 5
Analogue
The primary objective of this chapter is to outline how analogue is still widely
used today, despite advances in digital technology.
5.1 Overview
Analogue signals are defined over a continuous range of amplitudes. Typically,
they are defined over a continuous range of times, as illustrated in Figure 5.1a.
However, in ICs analogue signals are often defined only at discrete time values,
as illustrated in Figure 5.1b [24, p. 2]. These are referred to as discrete time or
sampled data analogue signals.
In contrast, digital signals are defined only at discrete times and discrete
amplitudes, as illustrated in Figure 5.1c. They represent objects using only two
values, zero and one.
Since digital is discrete in both time and amplitude, continuous analogue
signals potentially allow for more accurate modelling of many objects. The
properties of objects rarely have a finite number of levels but an infinite num-
ber.
5.2 Applications
Many believe analogue is obsolete, having been superseded by digital. The
proliferation of digital computers, digital television, digital cinema, digital ra-
dio, digital music, digital cameras, digital camcorders and digital mobile tele-
phony reinforces this viewpoint. However, this belief is unfounded.
There are many applications where analogue is still regularly used. Some
of these applications are discussed below.
Figure 5.1: (a) Continuous time analogue signal; (b) Discrete time analogue signal; (c) Digital signal
5.2.1 Wireless Communications

The signals used for wireless communications are often transmitted close
to the 1 GHz frequency range with only a few millivolts of amplitude. They
will incur substantial noise during transmission. Therefore, a wireless receiver
must amplify the appropriate part of the signal and filter out noise before
processing, while operating at a very high speed. This can only be performed
in analogue electronics, even for digital communications.
Therefore, these wireless devices typically must include both digital and
analogue circuits on the same IC. This has led to a growth of interest in mixed-
signal design. New languages have been proposed in order to simplify design,
while tools allow for a more automated design flow. This is discussed further
in Section 7.1.
5.2.3 Sensors
Sensors are also inherently analogue in nature. This applies for mechanical,
electrical, acoustic and optical sensors. For example, the phototransistors in
a digital camera produce analogue signals, which are only converted to digi-
tal at a later processing step. A different type of sensor is used to control the
airbag release mechanism in vehicles [25, pp. 4–5]. This uses a specially con-
structed capacitor, which detects the sudden change in velocity indicating that
the airbag should be released.
Recent developments in very large scale integration (VLSI) design allow for
the sensor's analogue and digital processing elements to be placed on the same
IC as the sensor. This increases the level of mixed signal integration, fuelling
the need for greater automation of analogue design.
5.2.7 Zoom
Camcorders and digital cameras typically provide both an analogue optical
and digital zoom. The analogue zoom manipulates the movement of a lens,
which controls the magnification of the photograph by modifying the light that
hits the charge coupled device (CCD) or complementary metal oxide semicon-
ductor (CMOS) sensor. In contrast, the digital zoom uses the centre portion
of the light hitting the CCD or CMOS sensor, which is then interpolated to
create a full size image. The analogue zoom acquires new data whereas the
digital zoom works with existing data. Therefore, the analogue zoom typically allows for higher quality photographs than the digital zoom.
Chapter 6
Operational Amplifiers
6.1 Overview
The operational amplifier or op amp is the core component of analogue com-
puters. The operational amplifier is an amplifier with a very high open loop
gain and a very low output impedance. It can be wired up with auxiliary pas-
sive components, which cause the operational amplifier to perform a specific
mathematical function. The relation of the output signal to the input signal
is determined solely by the arrangement and magnitude of the other circuit
elements.
6.2 History
In the early 1940s, George A Philbrick developed the first operational ampli-
fier using vacuum tubes [26, p. 541]. Later, in 1962, Burr-Brown Corp and
GAP/R developed the first IC-based operational amplifiers. However, in 1963,
Fairchild Semiconductor's µA702 became the first commercially available IC-
based operational amplifier.
The best selling operational amplifier of all time is the 741. It was the first
internally compensated operational amplifier available, meaning it required no
external compensatory components. Released in 1968, it was invented by Dave
Fullagar while working at Fairchild Semiconductor. The 741 is still manufac-
tured by a number of companies, including National Semiconductor [27] and
Texas Instruments [28], and has remained popular to this day. However, many
operational amplifiers have since surpassed the 741's characteristics, exploiting
developments in semiconductor fabrication technologies.
Immediately following their introduction, operational amplifiers were a
commercial success. In fact, they proved so popular that the commonly ac-
cepted analogue design rules were reformulated to accommodate these devices.
6.3 Terminals
An operational amplifier typically has five terminals, as shown in Figure 6.1a:
V+ noninverting input
V inverting input
Vout output
VS+ positive power supply
VS negative power supply
Figure 6.1: Operational amplifier symbols: (a) Complete; (b) Simplified
6.4 Ideal Operational Amplifier

The ideal operational amplifier is assumed to have the following properties:

Infinite open loop gain The amplification from input to output with no feedback applied is infinite. This makes the performance entirely dependent on the input and feedback networks.
Infinite input impedance The impedance viewed from the two input termi-
nals is infinite. This means that no current will flow in or out of either
input terminal.
Infinite bandwidth The bandwidth range extends from zero to infinity. This
ensures zero response time, no phase change with frequency and a re-
sponse to direct current (DC) signals.
Zero output impedance The impedance viewed from the output terminal
with respect to ground is zero. This ensures that the amplifier produces
the same output voltage irrespective of the current drawn into the load.
Zero voltage and current offset This guarantees that when the input signal
voltage is zero the output signal will also be zero, regardless of the in-
put source impedance.
6.5 Practical Operational Amplifier

Practical operational amplifiers deviate from this ideal in several ways, including the following:

Finite bandwidth The range of frequencies that may be inputted to the operational amplifier is limited. This partially limits the range of signals that may be processed by the project's physics engine.
Finite open loop gain The maximum amplification provided by the opera-
tional amplifier is limited by the magnitude of the supply voltages. This
results in upper and lower bounds being placed on the voltages that may be processed by the project's physics engine.
6.6 Circuits
A vast number of circuits may be constructed using an operational amplifier at their core. The operational amplifier circuits relevant to the project's physics engine are described below.
6.6.1 Inverter
The inverter is the basic operational amplifier configuration. This circuit can be constructed by placing a resistor at the V− input and another resistor on the feedback loop of an operational amplifier, as illustrated in Figure 6.3. The
mathematical function performed by the circuit is:
\[ V_{out} = -\frac{R_f}{R_{in}}\,V_{in} \]
where
Vout = Voltage at Vout terminal
Vin = Voltage at Vin terminal
Rf = Resistance of resistor Rf
Rin = Resistance of resistor Rin
Figure 6.3: Schematic of the inverter
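A behavioural VHDL-AMS model of this relation, in the style used for the project's entities in Chapter 8, might look as follows. The entity, generic and quantity names are assumptions for this sketch, not the project's own code.

library IEEE;
use IEEE.electrical_systems.all;

-- Behavioural inverter sketch: Vout = -(Rf/Rin) * Vin.
entity inverter is
  generic (r_in : resistance := 10.0e3;
           r_f  : resistance := 10.0e3);
  port (terminal input, output : electrical);
end entity inverter;

architecture behavioural of inverter is
  quantity v_in  across input to electrical_ref;                 -- no input current: ideal input
  quantity v_out across i_out through output to electrical_ref;  -- driven output
begin
  v_out == -(r_f / r_in) * v_in;
end architecture behavioural;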
6.6.3 Adder
The adder can sum a potentially unlimited number of sources, unlike its digital equivalent. Each source is allocated an input line, connected to the operational amplifier's V− terminal through a resistor. A resistor is placed on the feedback loop. This
configuration is illustrated in Figure 6.4. The mathematical function performed
by the circuit is:
\[ V_{out} = -R_f \left( \frac{V_1}{R_1} + \frac{V_2}{R_2} + \cdots + \frac{V_n}{R_n} \right) \]
Figure 6.4: Schematic of the adder
where
Vout = Voltage at Vout terminal
Rf = Resistance of resistor Rf
V1 = Voltage at V1 terminal
R1 = Resistance of R1 resistor
V2 = Voltage at V2 terminal
R2 = Resistance of R2 resistor
Vn = Voltage at Vn terminal
Rn = Resistance of Rn resistor
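For illustration, a two-input behavioural sketch of this relation is shown below (all names are assumptions); extending it to further inputs simply adds more terms to the sum.

library IEEE;
use IEEE.electrical_systems.all;

-- Two-input behavioural adder sketch: Vout = -Rf*(V1/R1 + V2/R2).
entity adder2 is
  generic (r1, r2, r_f : resistance := 10.0e3);
  port (terminal in1, in2, output : electrical);
end entity adder2;

architecture behavioural of adder2 is
  quantity v1 across in1 to electrical_ref;
  quantity v2 across in2 to electrical_ref;
  quantity v_out across i_out through output to electrical_ref;
begin
  v_out == -r_f * (v1 / r1 + v2 / r2);
end architecture behavioural;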
6.6.4 Integrator
The integrator integrates the input signal with respect to time. It is constructed
by placing a resistor at the V− input and a capacitor on the feedback loop of an
operational amplifier, as illustrated in Figure 6.5. The mathematical function
performed by the circuit is:
\[ V_{out} = -\frac{1}{RC} \int_0^t V_{in}\, dt \]
Figure 6.5: Schematic of the integrator
where
Vout = Voltage at Vout terminal
R = Resistance of R resistor
C = Capacitance of C capacitor
Vin = Voltage at Vin terminal
t = Time
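Expressed in differential form, the same relation can be written as a behavioural VHDL-AMS sketch; the names below are assumptions, not the project's code.

library IEEE;
use IEEE.electrical_systems.all;

-- Behavioural integrator sketch: d(Vout)/dt = -Vin/(RC).
entity integrator is
  generic (res : resistance  := 100.0e3;
           cap : capacitance := 1.0e-6);
  port (terminal input, output : electrical);
end entity integrator;

architecture behavioural of integrator is
  quantity v_in  across input to electrical_ref;
  quantity v_out across i_out through output to electrical_ref;
begin
  v_out'dot == -v_in / (res * cap);
end architecture behavioural;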
6.6.5 Differentiator
The differentiator differentiates the input signal with respect to time. It is con-
structed by placing a capacitor at the V− input and a resistor on the feedback
loop of an operational amplifier, as illustrated in Figure 6.6a. The mathematical
function performed by the circuit is:
\[ V_{out} = -RC\,\frac{dV_{in}}{dt} \]
where
Vout = Voltage at Vout terminal
R = Resistance of R resistor
C = Capacitance of C capacitor
Vin = Voltage at Vin terminal
t = Time
This circuit may also perform multiplication and division if the values of the
resistor and capacitor are appropriately weighted. As with the other circuits,
the output is inverted by this operation.
However, the differentiator is a problematic analogue computer element.
The differentiation process strongly accentuates noise, which is always present
in analogue electronic circuits. Noise tends to have sudden abrupt changes, ap-
pearing as voltage spikes. Since the output of a differentiator is proportional
to the rate of change of its input, these sudden changes are greatly amplified.
Therefore, using the differentiator in its current incarnation would result in
very poor performance by the analogue computer.
In order to rectify this problem, a circuit referred to as the practical or low
frequency differentiator is often constructed instead. This consists of the pre-
vious differentiator circuit with a resistor placed before the input capacitor, as
illustrated in Figure 6.6b. The mathematical function of the circuit remains as
before. The frequency range over which this device operates is determined by
Rin . This works because a capacitor is essentially a short circuit at high fre-
quencies, converting the circuit to an inverter. Furthermore, an appropriate
value of Rin will be overshadowed by the capacitance at lower frequencies.
While this circuit performs its stated function correctly, it introduces an ad-
ditional problem. Analogue computers have traditionally been constructed
for a particular experiment with the inputs having a known frequency range.
However, the projects physics engine had to simulate an extensive range of
problems. Limiting the input frequency to the operational amplifiers would
have substantially reduced this flexibility. Therefore, an alternative solution
was desirable.
The standard solution involves reformulating the problem definition, so
that differentiation is transformed into integration. This can be performed by
using the following formula:
\[ y = a\,\frac{dx}{dt} \qquad\Longrightarrow\qquad x = \frac{1}{a}\int y\,dt \]
where y and x are variables, a is a constant and t is time.
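As a brief hedged sketch of this reformulation (all names invented for the example), the relation can be evaluated without a differentiator by letting an integrator recover x from y:

-- Illustrative sketch: y = a*dx/dt rewritten so that x is obtained by integration.
entity reformulated_relation is
  generic (a : real := 1.0);
  port (quantity y : in  real;
        quantity x : out real);
end entity reformulated_relation;

architecture behavioural of reformulated_relation is
  quantity x_int : real := 0.0;
begin
  x_int'dot == y / a;    -- equivalent to y = a*dx/dt, but evaluated by integration
  x == x_int;
end architecture behavioural;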
Figure 6.6: Differentiator schematics: (a) Theoretical; (b) Practical
where
I = Current through diode
Io = Saturation current
e = Euler's number
K = Variable
V = Voltage across diode
Chapter 7
Implementation Background
This chapter outlines some decisions that were made before implementation
proceeded, such as the tools and software used to implement the project. It
also highlights how multiple abstraction levels and testbenches were utilised
during the implementation.
7.1 VHDL-AMS
The traditional means for designing analogue and mixed signal circuits is
schematic entry or schematic capture. This involves creating a circuit schematic
by interconnecting components. It is a relatively intuitive way of constructing
circuits.
For digital design, schematic entry could also be used. However, hard-
ware description languages (HDLs) effectively replaced digital schematic en-
try. Schematic entry provides only a structural level of abstraction, where the
components and their interconnections are made explicit. In contrast, HDLs
often provide multiple levels of abstraction, which provides flexibility to de-
scribe each entity at its most natural abstraction level. In addition, searching
and reuse are easier with HDLs, as the graphical nature of schematics hinders both of these tasks.
Until recently, there were few analogue and mixed signal HDLs. This situa-
tion changed in 1999, when the Institute of Electrical and Electronics Engineers
(IEEE) standardised VHDL-AMS [30], the analogue and mixed signal exten-
sions to digital VHDL. There is currently considerable interest in VHDL-AMS,
although few companies have started using it. This interest is attributable to
the continual growth in integration density, which makes it no longer necessary to split the digital and analogue parts of a design onto different ICs [31,
p. 199]. It is probable that in the future, it will be possible to fabricate an
IC based on a VHDL-AMS description. Currently, this is an area of active re-
search [32, 33]. Today, many electronic design automation (EDA) tools that
previously supported VHDL have been upgraded to support VHDL-AMS.
A competing language is Verilog-AMS, which is based on digital Verilog.
This has received less support as the standard has not yet been finalised by the
IEEE.
7.2 Software
Mentor Graphics SystemVision and Xilinx ISE were the two software applica-
tions used for implementing this project. These two applications are described
below.
this usually means a differential equation defining the behaviour of the hard-
ware. Structural code describes the structure of a system or the hierarchy of
components from which it has been constructed. In VHDL-AMS, this involves
the use of port maps.
Behavioural descriptions have been provided for all the components de-
scribed in the following chapters. These descriptions are used as reference
models since they describe idealistic components. The structural description
may later be compared against this ideal model.
In the project, all non-core components have also been provided with a
structural definition. Core components are components such as resistors and
capacitors, which can have no valid structural definition. The structural defin-
ition is the most important as it describes accurately how the system would be
constructed and how it would behave.
In summary, behavioural and structural definitions were created for al-
most all components. Behavioural descriptions are optional but greatly assist
in the debugging and validation of designs.
7.4 Testbenches
For the project, testbenches have been constructed to ensure the correct behav-
iour of every component. Some components have been provided with multiple
testbenches where it was deemed beneficial for either testing or comprehen-
sion.
The same testbenches are used for both the behavioural and structural de-
scriptions. This allows for deviations between the two descriptions to be more
easily analysed. The behavioural definition is typically the idealistic compo-
nent against which deviations in the structural definition should be measured.
In a digital system, it is often desirable to run several test cases for each com-
ponent in order to verify that it works as stated, for every possible scenario. It
is plausible that a design could be 100% tested. However, this is not possible
for most analogue designs. The range of input values to an analogue design
is potentially infinite compared to a maximum of two for digital. Moreover,
analogue designs can be more easily affected by temperature or process varia-
tions so that testbenches can never fully test a real design. Considering these
complexities, the created testbenches test some standard cases, which demon-
strate the standard behaviour of the components. It was deemed superfluous
to stress test the designs because temperature and process variations have not
been considered.
Chapter 8
Prototype 1: Mass-Spring-Damper
This chapter outlines the first prototype constructed for the project. Firstly, in-
formation is provided on the physics underlying the system. This is followed
with information on the design and construction of the individual entities con-
stituting the prototype. The chapter concludes with a discussion of the implemented prototype and an analysis of the device's deviation from ideality, to check the viability of this project.
8.1 Background
In order to gain familiarity with the design and construction of analogue com-
puters, it was decided to first implement a prototype. In addition, this proto-
type served as an introduction to VHDL-AMS (Section 7.1) and SystemVision
(Section 7.2.1). Such a prototype needed to be a pure analogue non-hybrid
computer. In other words, it would not feature reconfigurable elements. More-
over, it needed to implement a single relatively simple equation that would be
of practical use.
Based on these criteria, the mass-spring-damper system, one of the simplest
problems in classical Newtonian physics, was chosen for the prototype. Before
discussing the mass-spring-damper system, the simpler mass-spring system is
described.
Figure 8.1: Mass-spring
will result in the system's oscillation being damped over time. Consequently,
from the point of view of a physics engine, this model is unrealistic and of little
use. A more developed system that models the damping was necessary.
Figure 8.3: Mass-spring-damper
If the fixed object is disturbed, the mass attached to the spring will oscil-
late. However, it will not oscillate perpetually, since the damper will impede
the motion of the spring. Instead, it will oscillate with a wave whose ampli-
tude starts at a large value and gradually decreases until it reaches zero. This
behaviour is illustrated in Figure 8.4.
This system is feasible, since the damper may be constructed to model the
forces inherent in any assembled mass-spring system. In addition, this system
is used for many purposes. In particular, vehicle suspension systems are often
based on some derivative of this system. This aspect will be discussed further
in Chapter 9.
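The governing equation of this system does not survive in the excerpt above; for reference, its standard form is given below. The symbol names (m for mass, b for the damping coefficient, k for the spring constant, x for displacement and F for the external force) are assumed rather than taken from the report.

\[
  m\,\ddot{x}(t) + b\,\dot{x}(t) + k\,x(t) = F(t)
\]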
\[ x(t) = a \sin(2\pi f t) \]
where
x = Displacement
t = Time
a = Amplitude
f = Frequency
This formula was mapped into the code shown in Code 8.2.
voltage == amplitude * sin(math_2_pi * frequency * now);

Code 8.2: Excerpt from Behavioural/sinusoidal_voltage_source.vhd
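For context, a self-contained behavioural source built around this single line might look as follows; the generic and port names are assumptions, not necessarily those used in the project's file.

library IEEE;
use IEEE.electrical_systems.all;
use IEEE.math_real.all;

-- Hedged sketch of a complete behavioural sinusoidal voltage source.
entity sinusoidal_voltage_source is
  generic (amplitude : voltage := 1.0;    -- peak amplitude in volts
           frequency : real    := 50.0);  -- frequency in hertz
  port (terminal p, n : electrical);
end entity sinusoidal_voltage_source;

architecture behavioural of sinusoidal_voltage_source is
  quantity voltage across current through p to n;
begin
  voltage == amplitude * sin(math_2_pi * frequency * now);
end architecture behavioural;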
8.3 Packages
Many functions were used repeatedly throughout the project. Following the
principles of modular design, these functions were placed inside four pack-
ages.
8.3.1 Arithmetic
This package provides a function for the addition of two std_ulogic_vectors, since the packages supplied with SystemVision 2002 (Section 7.2.1) do not contain such a function. This addition is done by converting the std_ulogic_vectors to std_logic_vectors, summing them and converting the result back to an std_ulogic_vector. Although perhaps slightly inefficient,
it utilises the standard packages to implement the functionality, which results
in a reduced likelihood of errors, as the supplied functions have presumably
been fully tested. Moreover, this source of inefficiency is of little consequence,
as this function will only be used during simulations. The function would be
unnecessary in synthesised code since hardware has no concept of types.
This package also provides a function for testing two real numbers for approximate equality. The two numbers are considered equal if they lie within a certain tolerance of each other. This is necessary because, due to the various sources of error in floating point arithmetic, two nominally equal real numbers rarely have exactly the same representation.
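As a concrete illustration, a comparison function of this kind might look as follows. This is only a sketch, not the project's package source; the package name, function name and default tolerance are assumptions.

package arithmetic_extras is
  function approximately_equal (a, b : real;
                                tolerance : real := 1.0e-9) return boolean;
end package arithmetic_extras;

package body arithmetic_extras is
  function approximately_equal (a, b : real;
                                tolerance : real := 1.0e-9) return boolean is
  begin
    -- The two reals are treated as equal when they lie within the
    -- given tolerance of one another.
    return abs (a - b) <= tolerance;
  end function approximately_equal;
end package body arithmetic_extras;

In use, a check such as assert approximately_equal(measured, expected) then tolerates the small representation errors described above.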
The testbench checks the validity of this package using assertions.
8.3.2 Constants
This package defines constants that were used throughout the rest of the code.
These will be discussed in further detail when appropriate (Section 8.5.2 and
Section 10.1).
The testbench checks the validity of this package using assertions.
8.3.4 Types
This package defines a new type, time_vector, which is an array of time objects. This type is awaiting IEEE approval for addition to the standard libraries.
The testbench only checks if the creation and initialisation of such an object
is valid.
8.4.1 Resistor
A resistor impedes the flow of current.
The resistance of the project's resistor is specified at instantiation, making
it configurable per instance.
The behaviour of an ideal resistor can be described using Ohm's Law:
V = IR
where
V = Voltage across resistor
I = Current through resistor
R = Resistance
While this equation describes an ideal resistor, the majority of practical resis-
tors closely approximate this ideal. Therefore, this equation provides a satis-
factory description for the project. It was translated into the code shown in
Code 8.3.
18 voltage == current * res;
Code 8.3: Excerpt from Behavioural/resistor.vhd
8.4.2 Capacitor
A capacitor stores a quantity of electrical energy.
The capacitance of the project's capacitor is specified at instantiation, making it configurable per instance.
The behaviour of an ideal capacitor can be described by the capacitor equa-
tion:
I = dq/dt = C · dV/dt
where
I = Current through capacitor
q = Charge
C = Capacitance
V = Voltage across capacitor
t = Time
While this equation describes an ideal capacitor, the majority of practical ca-
pacitors closely approximate this ideal. Therefore, this equation provides a
satisfactory description for the project. It was translated into the code shown
in Code 8.4.
18 current == cap * voltage'dot;
Code 8.4: Excerpt from Behavioural/capacitor.vhd
8.5.1 Inverter
The inverter was described in detail in Section 6.6.1.
The inverter is the simplest analogue computing entity constructed for this
project. The project's inverter has two inputs, each of which is multiplied by a
different specified factor. The two multiplied inputs are summed and the result
is inverted.
The behavioural level description consists of an equation describing this
behaviour, as shown in Code 8.5. The structural level description instantiates
three resistors and an operational amplifier, connecting them in the configura-
tion shown in Figure 8.6. The corresponding code is shown in Code 8.6.
19 voltage == -((weight1 * input1'reference) + (weight2 * input2'reference));
Code 8.5: Excerpt from Behavioural/inverter.vhd
Figure 8.6: Schematic of inverter (inputs V1 and V2, output Vout)
40 resistor1: resistor
41 generic map (
42 res => resistance1
43 )
44 port map (
45 terminal1 => input1,
46 terminal2 => weighted_input
47 );
48 resistor2: resistor
49 generic map (
50 res => resistance2
51 )
52 port map (
53 terminal1 => input2,
54 terminal2 => weighted_input
55 );
56
57 parallel_resistor: resistor
58 generic map (
59 res => basic_resistance
60 )
61 port map (
62 terminal1 => weighted_input,
63 terminal2 => output
64 );
65 op_amp: operational_amplifier
66 port map (
67 positive_input => electrical_ref,
68 negative_input => weighted_input,
69 output => output
70 );
Code 8.6: Excerpt from Structural/inverter.vhd
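For reference, the standard result for a summing inverter of the form shown in Figure 8.6, with input resistors R1 and R2 and feedback resistor Rf (the instance named parallel_resistor in Code 8.6), is

Vout = −((Rf / R1) · V1 + (Rf / R2) · V2)

so weight1 and weight2 of the behavioural model presumably correspond to the resistance ratios Rf/R1 and Rf/R2.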
The testbench supplies the inverter with two sinusoidal waveforms multi-
plied by different factors and verifies that the output is the inverted sum of
these, using assertions. The testbench output shown in Figure 8.7 demon-
strates that the deviation of the structural model from ideality is extremely
small; the difference may only be observed in the highly magnified waveform.
Therefore, it may be assumed that this difference is insignificant and
would be invisible to a computer game player, if the inverter were part of a
physics engine.
Figure 8.7: Inverter testbench output: (a) original; (b) magnified
8.5.2 Integrator
The integrator was described in detail in Section 6.6.4.
As with the inverter, the project's integrator has two inputs, each of which
is multiplied by a different specified factor. The two multiplied inputs are
summed and the result is inverted and integrated with respect to time.
The behavioural level description consists of an equation describing this
behaviour, as shown in Code 8.7. The structural level description instantiates
two resistors, a capacitor and an operational amplifier, connecting them in the
configuration shown in Figure 8.8.
19 voltage'dot == -((weight1 * input1'reference) + (weight2 * input2'reference));
Code 8.7: Excerpt from Behavioural/integrator.vhd
Figure 8.8: Schematic of integrator (inputs V1 and V2, output Vout)
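For reference, the standard result for an inverting integrator of the form shown in Figure 8.8, with input resistors R1 and R2 and feedback capacitor C, is

Vout = −(1 / (R1 · C)) · ∫V1 dt − (1 / (R2 · C)) · ∫V2 dt

so the weight applied to each input is set by 1/(RC) for that input's resistor.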
A difficulty arose during the construction of this entity regarding the basic quantities that should be assigned to the resistors and capacitors. Empirical evidence suggested that a large range of resistances was suitable, but that the range of usable capacitances was very limited. In essence, if the value of the capacitors was too great, the output waveform would be completely distorted. Low capacitances worked well, but if the values were too low, the waveform would also be somewhat distorted. Moreover, it was unsatisfactory to fix the capacitors to some small capacitance and leave the resistors at 1 Ω, because this meant a multiplicative factor of one could never be obtained from the circuit. Consequently, a high resistance had to be chosen to counteract the effects of the low capacitance. After repeated experimentation, it appeared that the values specified in the constants package, 100 kΩ resistance and 10 µF capacitance, provided the least distorted waveform possible.
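As a quick check of these values, using the 1/(RC) weighting above:

1 / (100 kΩ × 10 µF) = 1 / (1 × 10^5 Ω × 1 × 10^−5 F) = 1

so the chosen combination does indeed provide a multiplicative factor of exactly one.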
The first testbench supplies the integrator with two sinusoidal waveforms multiplied by different factors. The output is a sinusoidal waveform phase shifted by 90° or π/2 rad. The second testbench supplies the equivalent square waveform, creating a triangular waveform output. As illustrated by Figure 8.9, the distortion to a sinusoidal waveform is negligible. Nevertheless, there is minor distortion caused to a square wave input. However, the distortion introduced is only 0.03846%. Such minimal distortion is likely to prove impossible to perceive in a computer game using these calculations.
8.5.3 Differentiator
The differentiator was described in detail in Section 6.6.5.
It was noted that use of the differentiator should be avoided in analogue
computers due to the high level of noise introduced by these components. In
order to study the impact of a differentiator on the project, a differentiator en-
tity was constructed and simulated.
As with the integrator, the project's differentiator has two inputs, each of
which is multiplied by a different specified factor. The two multiplied inputs
are summed and the result is inverted and differentiated with respect to time.
The behavioural level description consists of an equation describing this
behaviour, as shown in Code 8.8. The structural level description instantiates
a resistor, two capacitors and an operational amplifier, connecting them in the
configuration shown in Figure 8.10. The basic resistances and capacitances
used were those that were deemed most appropriate for the integrator.
19 voltage'integ == -((weight1 * input1'reference) + (weight2 * input2'reference));
Code 8.8: Excerpt from Behavioural/differentiator.vhd
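For reference, the standard result for a differentiator of the form shown in Figure 8.10, with input capacitors C1 and C2 and feedback resistor R, is

Vout = −R · (C1 · dV1/dt + C2 · dV2/dt)

so each input's weight is set by the corresponding RC product.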
Figure 8.10: Schematic of differentiator (inputs V1 and V2, output Vout)
Figure 8.11: Differentiator testbench output: (a) behavioural; (b) structural
8.6 Mass-Spring-Damper
The mass-spring-damper is provided with three parameters, the mass, the
spring constant of the spring and the viscosity of the damper. An input dis-
placement is specified and the output displacement and velocity are generated.
The mass-spring-damper uses an input and an output displacement. It would have been more common to use a single displacement with the output connected back to the input, as was often done with analogue computers. However, the simulator refused to simulate this system, as the input source and looped-back output constituted multiple line drivers (Section 12.4). To solve this problem, the dual displacement version of the mass-spring-damper equation was used instead.
The mass-spring-damper is specified by a second order differential equa-
tion, which may be used to calculate the position of the mass at any particular
point in time. This equation is [34, p. 31]:
F = −b · d(xout − xin)/dt − k · (xout − xin)    (8.1)
where
F = Force
b = Damper's viscosity
xout = Displacement out
xin = Displacement in
t = Time
k = Spring constant

But F = ma:
m · a = −b · d(xout − xin)/dt − k · (xout − xin)
But a = dV/dt:
m · dV/dt = −b · d(xout − xin)/dt − k · (xout − xin)
dV/dt = −(b/m) · d(xout − xin)/dt − (k/m) · (xout − xin)
V = −(b/m) · (xout − xin) − (k/m) · ∫(xout − xin) dt    (8.2)
But V = dxout/dt:
dxout/dt = −(b/m) · (xout − xin) − (k/m) · ∫(xout − xin) dt
xout = −(b/m) · ∫(xout − xin) dt − (k/m) · ∫∫(xout − xin) dt dt    (8.3)
where
F = Force
b = Damper's viscosity
xout = Displacement out
xin = Displacement in
t = Time
k = Spring constant
m = Mass
a = Acceleration
V = Velocity
To model the output displacement and velocity, Equation 8.2 and Equation 8.3
were implemented. These equations are readily implementable as an analogue
computer, as illustrated in Figure 8.12a, by using an adder or an integrator
with an inverter for each operation in these equations. However, this solution
is nonoptimal and may be optimised through the elimination of duplicate in-
versions. An optimised computer is illustrated in Figure 8.12b. Note that the
number of operational amplifiers has been reduced from nine to seven, which
is a 22.2% reduction. This is important because the greater the number of op-
erational amplifiers, the greater the error in the output, the greater the area
consumed and the greater the expense involved. In more complex analogue
computers, the decrease is often on a larger scale and is consequently of greater
importance.
The behavioural level description consists of the derived equations, as
shown in Code 8.9. The structural level description instantiates inverters and
integrators, connecting them in the configuration shown in Figure 8.12b.
25 voltage_difference == voltage_displacement_out - displacement_in'reference;
26 voltage_velocity == -((viscosity / mass) * voltage_difference) - ((spring_constant / mass) * voltage_difference'integ);
27 voltage_displacement_out == voltage_velocity'integ;
Code 8.9: Excerpt from Behavioural/mass_spring_damper.vhd
The testbench uses a square wave to perturb the mass-spring-damper. It
supplies a mass of 0.45 kg, a spring constant of 10 N/m and a viscosity of
0.75 m²/s as the parameters to the mass-spring-damper. These parameters
have no special meaning; they were chosen because they demonstrate the
damped oscillatory behaviour of the system illustrated in Figure 8.13. It was
important to determine the error between the ideal behavioural model and the
real structural model, because if the error were too great, corrective measures would have needed to be taken before proceeding with the implementation of the reconfigurable physics engine. However, examination showed
that the error was very low, well below the limits of human perception and not
of consequence if the calculations were used in a computer game.
Figure 8.12: Analogue computer for the mass-spring-damper: (a) original; (b) optimised
Figure 8.13: Mass-spring-damper testbench output, including (a) displacement in and (d) velocity
Chapter 9
Prototype 2: Vehicle Suspension System
This chapter outlines the second prototype constructed for the project. Firstly,
information is provided on the physics underlying the system. This is followed
with a discussion of the implemented prototype.
9.1 Background
The mass-spring-damper represents a real world system. However, it is un-
likely to be of much use in computer games, which are the main target of the
project. Consequently, it was decided to construct a more useful prototype to
demonstrate that analogue computers are suitable for the modelling of game
physics.
Based on these criteria, it was decided to model a vehicle suspension sys-
tem. One advantage of this system is that it is based on the previously con-
structed mass-spring-damper, allowing for reuse of existing components.
Figure: Vehicle modelled by two wheel displacements (displacement1 and displacement2)
Figure: Vehicle suspension system computer, combining two mass-spring-dampers with a 0.5 weighting to produce xout
Chapter 10
Reconfigurable Hybrid Computer
This chapter describes the reconfigurable hybrid computer, which was con-
structed after the second prototype. The entities that constitute the reconfig-
urable hybrid computer are described, before the computer itself is outlined.
10.1 Switch
A switch is used to control which circuit paths are currently active and which
are disabled. A switch in an IC would typically be implemented using a single
transistor.
The project's switch has a digital control signal. When the signal is high,
the switch is closed and current flows. When the signal is low, the switch is
open and current does not flow.
The behavioural level description of the switch is idealistic. The entity
breaks the circuit when open but effectively disappears when closed. How-
ever, this model is unrealistic as no circuit element can effectively disappear.
To take into account these nonidealities, the structural level description was
created. If the switch is closed, it has a low resistance and current can easily
pass through the entity. If the switch is open, it has a high resistance and cur-
rent cannot easily pass through the entity. This description works on the basis
that the switch will be on one possible circuit path. Current will flow down
the path of lesser resistance. If there is only one path, the switch will serve no
useful purpose.
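A minimal sketch of such a resistance-based switch model is given below. It is not the project's source code; the entity name, generics and resistance values are illustrative, and a break statement is included because some simulators require the discontinuity to be flagged explicitly.

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.electrical_systems.all;

entity switch_sketch is
  generic (r_closed : resistance := 1.0;     -- low resistance when closed
           r_open   : resistance := 1.0e9);  -- high resistance when open
  port (signal control : in std_ulogic;
        terminal terminal1, terminal2 : electrical);
end entity switch_sketch;

architecture structural of switch_sketch is
  quantity voltage across current through terminal1 to terminal2;
begin
  -- Flag the discontinuity to the analogue solver when the control changes.
  break on control;
  -- A closed switch presents a low resistance and an open switch a very
  -- high one, so current follows whichever parallel path resists it least.
  if control = '1' use
    voltage == current * r_closed;
  else
    voltage == current * r_open;
  end use;
end architecture structural;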
The testbench supplies a sinusoidal waveform to the switch, while the con-
trol signal is tested in both positions. For the behavioural description, when
the switch is closed, the output voltage becomes the input voltage, but when
the switch is open, the output voltage becomes zero. For the structural descrip-
tion, since there is only one circuit path in the testbench, the current will flow
down this path and the output voltage will stay at the level of the input voltage
irrespective of the position of the switch.
the switches are opened and closed. A single resistor wired in this fashion
should not modify the inputted voltage. Consequently, the behavioural level
description will show the input voltage at the output while the switch is closed,
but zero volts while the switch is open. However, at the structural level, the
switches should not modify the inputted voltage, for reasons discussed in Sec-
tion 10.1. Consequently, at this level, the output is the same as the input.
The behavioural level description of the device alternates between the ca-
pacitor equation and zero volts depending on the state of the switches. The
structural level description instantiates a capacitor and two switches in the for-
mation illustrated in Figure 10.2.
The testbench is similar to the one used for the capacitor. The testbench
attaches a sinusoidal voltage source to one terminal of the switchable capac-
itor, so that the voltage at the other terminal may be observed. Meanwhile,
the switches are opened and closed. A single capacitor wired in this way will
slightly modify the input voltage while it is charging. After a short period of
time, the voltage will become identical to the input voltage. Consequently, the
behavioural level description will show a slightly modified input voltage at
the output while the switch is closed, but zero volts while the switch is open.
However, at the structural level, the switches should not modify the inputted
voltage, for reasons discussed in Section 10.1. Consequently, at this level, the
output is approximately the same as the input, with a slight deviation while
the capacitor is charging.
10.5 Cell
The cell is the basic reconfigurable entity of the project's hybrid computer, used
to perform a single operation.
The functions that the cell should provide had to be decided. Inversion is often necessary in analogue computers, since almost all analogue computing elements invert their output, and this extra inversion often needs to be cancelled. Integration is often necessary for differential
equations, which describe much of physics. Moreover, integration can be used
to eliminate differentiation, which offers poor performance in analogue com-
puters as outlined in Section 6.6.5. Addition is also necessary, but any entity
can be turned into an adder by providing it with multiple inputs. Multiplica-
tion is also necessary, but again, any entity can be turned into a multiplier by
providing it with variable inputs.
Figure 10.3: Schematic of cell. Red indicates inversion; blue indicates integration.
Instead of supplying variable resistors at the inputs, the values of the ca-
pacitor and resistor in parallel with the operational amplifier could have been
made controllable. However, this would have resulted in both inputs being
multiplied by the same factor, offering less flexibility. Moreover, variable resis-
tors are a standard circuit building block whereas variable capacitors are not.
For these reasons, it was decided to use variable resistors at the inputs. This ad-
heres to the RISC philosophy, as there is the potential to make the entire set of
entities variable. Such a scheme would lead to a lot of underutilised hardware
10.7 Router
A routing entity was required to route the appropriate signals to the cell's inputs. This entity would likely consist of switches, to create and terminate connections as required.
The project's router takes a 5-bit digital select signal, allowing thirty two
inputs. The first input is always connected to ground, allowing thirty one input
lines to be connected. Ground is necessary for when an input must be disabled,
such as when only one of a cell's two inputs is used. Two outputs are provided, corresponding to the cell's two inputs.
Only a behavioural level description of the router was created. This de-
scription assigns the voltage of the chosen input signal to the output signal. A
structural level description seems an obvious choice for this entity. However, such a description would have required thirty two switches, exceeding the design size limits of the simulator (Section 12.1).
The testbench generates thirty one input signals of differing amplitudes. Meanwhile, the control signal is incremented, so that each signal will be displayed at the output. The output waveforms are discontinuous sine waves that gradually increase in amplitude.
The first testbench supplies a sinusoidal waveform, while changing the op-
eration, so that the output shows both integration and inversion. The first half
of the waveform is phase shifted while the second half is inverted. The second
testbench supplies a square waveform, requesting the same sequence of events
from the cell. The first half of the waveform is triangular while the second half
is an inverted square waveform.
10.9 Computer
The computer constitutes the bulk of the reconfigurable hybrid computer. It
consists of cells connected via programmable interconnect. The inputs and
outputs are analogue but the control signals are digital.
The main challenge lay in designing the interconnect. A few ideas were
considered, based around the general ideas of a mesh and a bus.
The mesh would consist of cells arranged in a matrix formation. Each cell
would get its inputs from the cells to its immediate left. The first cells would
also get inputs from the rightmost cells. The mesh's inputs would enter at the left and its outputs would leave from the right. At first glance, this arrangement appears useful. Little hardware is required, since the maximum number of inputs to each cell is equal to the number of cells to its immediate left. However, on attempting to map systems to such an architecture, problems
are quickly uncovered. Firstly, such an architecture is difficult to program. This
is a disadvantage, but not sufficient to make this design worthless. However,
the lack of flexible routing in this design prevents the mapping of many stan-
dard physics systems, such as the vehicle suspension system. Methods could
be designed to overcome these problems, but such methods would likely re-
move all advantages gained by the use of such a system, while substantially
increasing the difficulty involved in programming such a system.
The alternative to the mesh was a bus-based architecture. This concept would use a bus line for each input and a bus line for each cell. Each cell would be connected to every bus line. Each cell's output would be connected to a specific bus line, so that only one cell would drive each bus line. The last bus lines would be used as the outputs. Therefore, if a specific signal needed to be present outside the analogue computer, it could be processed by the last cell, which would output it to an appropriate bus line. Such a solution results in the use of a lot of hardware to route the correct signals to the cells' inputs. However, it appears to be the only solution sufficient to map physics problems, such as the vehicle suspension system, to the hybrid computer. In addition, this solution is substantially easier to program than the aforementioned mesh.
Based on this analysis, the bus solution was chosen. Two inputs were re-
quired. The design of the cell router left twenty nine remaining bus lines, so
twenty nine cells were created. The last two bus lines were used as the outputs.
This is sufficient for relatively large problems, such as the vehicle suspension
system.
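Putting these numbers together (this is simply a restatement of the figures above):

2^5 = 32 router select codes = 1 ground connection + 31 routable bus lines
31 bus lines = 2 input lines + 29 cell output lines

with the last two of the cell output lines also serving as the computer's outputs.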
The behavioural level description simply merges the behavioural descrip-
tions of twenty nine cell routers, connected to thirty one bus lines. A more
natural description cannot be created. The structural level description creates
thirty one bus lines and instantiates twenty nine cell routers as illustrated in
Figure 10.7.
A possible optimisation involves removing the complete flexibility permit-
ted by the current solution, and only allowing a subset of bus lines to be routed
to each cell. However, analysing the prototypes constructed suggested that
such a solution would be difficult to implement correctly. For example, de-
signing an optimised solution that works with the vehicle suspension system
could result in a solution that fails to work with a multitude of other neces-
sary systems. If more problems were to be analysed, patterns could possibly
be detected and such an optimisation could then be implemented.
The testbenches test the system by configuring it to simulate the simplified
mass-spring-damper, the mass-spring-damper and the vehicle suspension sys-
tem problems, using the same parameters as before. These testbenches ensure
that the same level of functionality may be achieved through the computer,
as through the prototypes. Moreover, the prototypes provide a benchmark
against which the computer's error may be analysed. The outputs of the mass-
spring-damper testbench compared with the mass-spring-damper prototype
are shown in Figure 10.8. The outputs of the vehicle suspension system test-
bench compared with the vehicle suspension system prototype are shown in
Figure 10.9. Both highlight that the deviation from ideality is minimal.
10.10 DAC
Digital to analogue converters (DACs) are used to convert digital signals to
equivalent analogue signals. They provide a means by which digital electron-
ics can communicate with analogue electronics.
The project's DAC performs its conversion with a granularity of 8 bits.
Figure 10.7: Schematic of computer
More sophisticated DACs are widely available but are expensive. As stated in Section 2.4.2, accuracy is not very important for computer games. Objects only need to appear as if they are behaving realistically. Higher accuracy would add expense for an improvement that would probably not be noticed. Each bit inputted to the DAC represents 0.1 V, instead of the more standard 1 V, to allow for greater accuracy. This works well because most systems simulated will use numbers close to zero, so achieving accuracy for lower numbers at the expense of higher numbers becoming unrepresentable is satisfactory. Additionally, the DAC has clock
and reset inputs. All output voltage changes will occur on the rising edge of the
clock, and the active low reset input forces the device to output 0 V. Although
these inputs serve little purpose for this entity, they were added to create an
inverse of the analogue to digital converter (ADC) (Section 10.11).
Only a behavioural level description of the DAC was constructed because
DACs are standard entities. There are many different ways a DAC could be
implemented, but these details are unimportant for the project. The description
applies type conversion functions to implement the DAC whenever a rising
edge of the clock is observed. The core process of the DAC implementation is
displayed in Code 10.1.
25 process(clock, reset)
26 variable registered_output_voltage: voltage;
27 begin
28 if reset = '0' then
29 registered_output_voltage := 0.0;
30 elsif rising_edge(clock) then
31 registered_output_voltage := to_real(input) * voltage_per_bit;
32 else
33 registered_output_voltage := registered_output_voltage;
34 end if;
35 temporary_output_voltage <= registered_output_voltage;
36 end process;
Code 10.1: Excerpt from Behavioural/dac.vhd
The testbench supplies the DAC with the binary values representing 10 and −10, which produces a square waveform of amplitude 2 V at the output.
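Assuming a signed (two's complement) reading of the 8-bit input, the arithmetic behind this testbench is simply

10 × 0.1 V = 1.0 V and −10 × 0.1 V = −1.0 V

giving a 2 V swing between the two output levels.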
10.11 ADC
ADCs are used to convert analogue signals to equivalent digital signals. They
provide a means by which analogue electronics can communicate with digital
electronics.
The project's ADC performs its conversion with a granularity of 8 bits. More sophisticated ADCs are widely available but are expensive. As stated in Section 2.4.2, accuracy is not very important for computer games. Each bit outputted from the ADC represents 0.1 V, instead of the more standard 1 V, to allow for greater accuracy. This works well because most systems simulated will use
numbers close to zero, so achieving accuracy for lower numbers at the expense of higher numbers becoming unrepresentable is satisfactory. Additionally, the ADC
has clock and reset inputs. All output voltage changes will occur on the rising
edge of the clock, and the active low reset input forces the device to output
zero. Although real ADCs do not use these signals, they were added to allow
for synchronous behaviour. A synchronous ADC could easily be constructed
by placing a register after an ADC.
Only a behavioural level description of the ADC was constructed because
ADCs are standard entities. There are many different ways an ADC could be
implemented, but these details are unimportant for the project. The description
applies type conversion functions to implement the ADC whenever a rising
edge of the clock is observed. The core process of the ADC implementation is
displayed in Code 10.2.
22 process(clock, reset)
23 begin
24 if reset = '0' then
25 output <= "00000000";
26 elsif rising_edge(clock) then
27 output <= to_std_ulogic_vector(input'reference / voltage_per_bit, 8);
28 end if;
29 end process;
Code 10.2: Excerpt from Behavioural/adc.vhd
The testbench supplies the ADC with a waveform that steps from 0 V to
9 V, so the digital output should rise from zero to ninety in steps of ten.
10.12 Digitised Computer
48 convert_input1: dac
49 port map (
50 clock => clock,
51 reset => reset,
52 input => input1,
53 output => analogue_input1
54 );
55 convert_input2: dac
56 port map (
57 clock => clock,
58 reset => reset,
59 input => input2,
60 output => analogue_input2
61 );
62
63 compute: computer
64 port map (
...
214 );
215
216 convert_output1: adc
217 port map (
218 clock => clock,
219 reset => reset,
220 input => analogue_output1,
221 output => output1
222 );
223 convert_output2: adc
224 port map (
225 clock => clock,
226 reset => reset,
227 input => analogue_output2,
228 output => output2
229 );
Code 10.3: Excerpt from Structural/digitised_computer.vhd
10.13 ADE
The ADE is the top level entity. Currently, it consists only of the digitised computer, so creating a separate entity provides little immediate benefit. The reasons for creating this entity will be outlined in Chapter 11.
The behavioural level description is the same as that used for the digitised
computer. The structural level description instantiates a single instance of the
digitised computer.
The testbenches test the system by configuring it to simulate the simpli-
fied mass-spring-damper, the mass-spring-damper and the vehicle suspension
system problems, using the same parameters as before. These testbenches en-
sure that the same level of functionality may be achieved through the ADE,
as through the prototypes. Moreover, the prototypes provide a benchmark
against which the ADE's error may be analysed. The outputs of the mass-
spring-damper testbench are shown in Figure 10.11. The outputs of the vehicle
suspension system testbench are shown in Figure 10.12. In both cases, the os-
cillatory behaviour may be observed by noting how the numbers change.
Figure 10.12: ADE's structural description, using the vehicle suspension system testbench
Chapter 11
Multiplexing
11.1 Background
The previously outlined physics engine will perform the vast majority of
physics calculations with the desired accuracy. However, if such a system were
to be used in practice, one problem remains: the system is capable of running
only a single physics simulation at any particular time. For example, with the
above system, one vehicle suspension system can be simulated at any particu-
lar time. However, it is likely that a racing game would need to simulate many
vehicle suspension systems simultaneously.
To facilitate this, it was necessary to decide on a scheme through which
simulations could be multiplexed. In other words, a scheme through which
many simulations could be run during the timeframe of one.
describes the system being simulated. If it were necessary to run the system at
twice the speed of real-time, the equation
2ax² + 2bx + 2c = 0
would be modelled instead. If it were necessary to run the system at half the speed of real-time, the equation

(a/2)x² + (b/2)x + (c/2) = 0

would be modelled instead.
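As a concrete check of this idea using the mass-spring-damper of Chapter 8 with the input displacement held at zero (the scaling factor λ below is introduced here for illustration): if x(t) satisfies

m · d²x/dt² + b · dx/dt + k · x = 0

then the sped-up solution x̃(t) = x(λt) satisfies

m · d²x̃/dt² + λb · dx̃/dt + λ²k · x̃ = 0.

In the analogue computer realisation this is achieved by multiplying every integrator's input weights by λ, since a term that passes through one integrator acquires one factor of λ while a term that passes through two integrators acquires λ².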
Figure: Register file supplying inputs to the digitised computer
This system works on the principle that the vast majority of analogue com-
puting problems, such as the mass-spring-damper system, can be designed so
that their input and output are connected. This design was not used for this
project, since it would have required two signals driving the same line, which
the simulator cannot simulate (Section 12.4). In the scenario outlined here, the
output of the register would constitute a single line driver, so this should not
be a problem. However, it is not obvious that the vehicle suspension system
can be redesigned to have a final loop.
There are larger problems with this suggestion. State is not determined by the output value: supplying the previous output value would be treated no differently from supplying a fresh input value. Since the signal varies with time, the state must depend on time. Only the integrators work with respect to time; all other components work based on the instantaneous input signal. This time-varying behaviour is provided by the charge stored in the capacitors used by the integrators. Therefore, to save state, the charges in the capacitors need to be stored, instead of the output value. Consequently, this suggestion could not work.
In summation, there are three distinct phases in which this system operates:
firstly, the input value loading phase during which two values are read for each
simulation; secondly, the actual simulation; thirdly, the output value reading
phase during which the results of the simulation are obtained.
However, such a system would be very slow. The loading and reading
phases will each take time away from simulation, particularly if many output
registers are used. In addition, part of the time allocated to simulation will be
consumed trying to restore the previous state, during which time the system is
effectively idle. Moreover, the system would need to run each simulation for a relatively long period of time, long enough that the probability of not having reached stability is very low. If the software using the physics engine only required
the first few readings, the physics engine would still record many results, sev-
eral of which would be worthless. Therefore, the physics engine would have
expended time performing worthless actions. To summarise, simulation re-
sults will take considerable time to appear, but they will come in clusters. The
biggest problem with this is that results are typically required instantaneously
and concurrently. Therefore, this solution is not ideal.
There is another problem associated with this solution. The time required
to load the previous state is not easily determined, due to the different charges
stored in each capacitor. The only way to fully ensure the system reaches its
previous state is to leave it charging for as long as the simulation has run previ-
ously. However, this would be an extremely long delay and clearly unsuitable.
A modification of this solution would be to connect a register to each ca-
pacitor so that the capacitors charge is stored locally. This would be difficult
to implement and would likely be problematic. More importantly, it does not
solve any issues.
Although this solution could work, a better solution would be desirable.
somewhat beneficial as the operation select signal can be a decoded 2-bit signal.
Higher powers of two also exhibit this beneficial characteristic.
The testbench supplies a sinusoidal waveform, while modifying which ca-
pacitor is currently activated. The output is a mostly undistorted version of the
input. However, some distortion will be present during the charging phase of
each capacitor.
11.3 Cell
The cell was modified to replace the switchable capacitor with the capacitor
stack, as illustrated in Figure 11.4. The operation signal was also modified, to
accommodate the extra bits required for operation selection.
Figure 11.4: Schematic of multiplexed cell. Red indicates inversion; blue indicates integration.
The testbenches are similar to the ones used previously. However, instead of testing only inversion and integration, integration is tested using all three capacitors. For the sinusoidal waveform, the first three quarters of the waveform are phase
shifted while the last quarter is inverted. For the square waveform, the first
three quarters of the waveform are triangular while the last quarter is an in-
verted square waveform. In both cases, when switching between the different
capacitors used for integration, there will be slight glitches.
11.9 ADE
The top level component was modified to accommodate the new size of the op-
eration signals. It also instantiates the control unit so that the operation signal
inputs are correctly decoded. The new version of this component is illustrated
in Figure 11.5.
The previous testbenches have been modified to accommodate the new size
of the operation signal. Since multiplexing is not used in these testbenches, the
integration operation has been hardwired to 01.
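For illustration, the decoding the control unit performs could resemble the sketch below; the entity name and the exact mapping of operation codes to outputs are assumptions rather than the project's source.

library IEEE;
use IEEE.std_logic_1164.all;

entity decoder_2_to_4_sketch is
  port (operation : in  std_ulogic_vector(1 downto 0);
        selects   : out std_ulogic_vector(3 downto 0));
end entity decoder_2_to_4_sketch;

architecture behavioural of decoder_2_to_4_sketch is
begin
  -- One-hot decode of the 2-bit operation code; "00" is assumed to mean
  -- plain inversion with no integration capacitor active.
  with operation select
    selects <= "0001" when "00",
               "0010" when "01",   -- first integration capacitor
               "0100" when "10",   -- second integration capacitor
               "1000" when "11",   -- third integration capacitor
               "0000" when others;
end architecture behavioural;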
In addition, new testbenches have been created to test the multiplexed func-
tionality of the physics engine. The capacitor switching algorithm shown in
Code 11.1, changes the currently active capacitor every second. This switching
could be performed more quickly if necessary, but this substantially increases
the time the simulator takes to perform the simulation (Section 12.2). These
testbenches also multiply the multiplying factors of their equations by three, since three simulations will be run concurrently and each must therefore complete in one third of the real-time duration. The outputs are displayed in Figure 11.6 and Figure 11.7. As it is diffi-
cult to determine what is occurring during these simulations, Figure 11.8 and
Figure 11.9 show the same simulation executed on the computer component.
Examining these reveals that the same values are repeated at the outputs three
times, corresponding to each of the three simulations. The outputs are not ex-
actly the same but vary slightly in timing. This could be due to leakage of the
switches or the finite precision provided by the simulator. However, it is of
little consequence since such differences will be imperceptible when used in a
computer game, as outlined in Section 2.4.2.
72 multiplexer: process
73 begin
74 loop
75 integrate <= "01";
76 wait for 1 sec;
77 integrate <= "10";
78 wait for 1 sec;
79 integrate <= "11";
80 wait for 1 sec;
81 end loop;
82 wait;
83 end process;
Code 11.1: Excerpt from Structural/ade_testbench_mass_spring_damper_multiplexed.vhd
Figure 11.7: ADE's structural description, using the vehicle suspension system testbench. The first waveform indicates the current simulation number.
Finally, another testbench was created for the purposes of evaluating the
speed of the entity. This is described in greater detail in Section 15.4.3.
11.10 Hierarchy
The final hierarchy of VHDL-AMS entities is displayed in Figure 11.10. Note
that this hierarchy only specifies the types of entity and not the quantity in-
stantiated.
ADE
  Control Unit
    Decoders
      Decoder
  Digitised Computer
    ADC
    DAC
    Computer
      Cell Router
        Router
        Cell
          Operational Amplifier
          Variable Resistor
            Switchable Resistor
              Switch
              Resistor
          Capacitor Stack
            Switchable Capacitor
              Switch
              Capacitor
Figure 11.10: Hierarchy of VHDL-AMS entities
Chapter 12
Problems and Solutions
12.1 Size
The software used placed limits on the size of designs. This raised a number of
issues throughout the project.
If the design were too large, the simulator would enter an infinite loop.
During this infinite loop, 100% of the system's CPU was utilised by the simu-
lator and no feedback was provided as to what was happening. To ensure that
the software was not just sluggish, the simulator was left running for twenty
four hours. After this time had elapsed, no feedback or results were available.
Therefore, it was clear that the simulator could not simulate the system.
However, the manufacturer did not supply guidelines as to the limit on
design sizes. It was often unclear when the simulator was entering the infinite
loop as opposed to when it was only sluggish.
The problem had to be manually diagnosed and corrected. Correction was
limited to reducing the complexity of the circuit either by using alternative
designs or by removing entities. Obviously, these corrective methods were
nonideal.
12.2 Speed
The dichotomy of analogue and digital means digital can never truly simulate
analogue. The finite granularity of digital means that continuous analogue can
only be approximated. This dichotomy resulted in simulations taking exces-
sive amounts of time. Indeed, large designs could take longer than four hours
to simulate.
In addition, the simulator provided options to control the accuracy of the
approximation. The default accuracy of the software was sufficient for many
simulations, but if precise timing was required, the accuracy had to be in-
creased. This resulted in the simulator working even slower.
This was a great hindrance during the project, because more time was
consumed simulating than coding. During simulation, the VHDL-AMS code
could not be modified. This prevented the rapid testing of alternative ideas.
Certainly, had the simulator been faster, more coding would have been
achieved in the same timeframe.
One possible solution was to reduce the complexity or number of entities
in the design, but clearly, this was not a good solution. Consequently, the only
real solution was to wait.
separately. This ultimately had no impact on the outcome of the project, but it
removed the most obvious design path for much of the project.
Even if such designs are superior, the ideas cannot be applied to all situ-
ations. For example, it appears that the vehicle suspension system cannot be
designed in this way, as it has two inputs but only one output.
Conclusions
Chapter 13
Synthesis
One of the goals of the project was to find suitable hardware on which the
physics engine could be implemented. This chapter analyses the viability of
discrete components, application specific integrated circuits (ASICs), field pro-
grammable analogue arrays (FPAAs) and field programmable mixed arrays
(FPMAs). A number of commercially available FPAAs are analysed and com-
pared. Finally, a decision as to the most suitable hardware is made.
13.2 ASICs
The project described in the previous chapters was designed so that it could be
synthesised to an ASIC. Compared to other solutions, an ASIC offers a greater
degree of flexibility in what can be created. Moreover, since the IC has only a
single purpose, there are no unused components, which would be present in a
reconfigurable device. This provides more die area for useful entities.
13.3 FPAAs
FPAAs are the analogue equivalent of field programmable gate arrays (FP-
GAs). They typically contain a number of operational amplifiers and passive
components joined with programmable interconnect. Like an FPGA, a specific
function may then be downloaded from a host PC to the FPAA. Typically, they
may be reprogrammed a potentially unlimited number of times. However, some are write-once, like an erasable programmable read only memory (EPROM). These
have an advantage since the switches of the reprogrammable devices offer a
resistance, thereby modifying the functionality of the device. The write-once
fuses offer much less resistance.
FPAAs provide a natural match for the project outlined above since they
consist of operational amplifiers and their functionality is instantly reprogram-
mable. Instead of the design outlined in previous chapters, a number of ana-
logue computers could be designed. Each analogue computer would perform a
specific task. For example, one analogue computer would implement the mass-
spring-damper system while another would implement the vehicle suspension
system. Then, the game programmer would select the required function and
the host computer would download the appropriate design to the FPAA. The
programmer would supply the inputs and read the outputs from the design.
Of course, this does not offer the complete programmability offered by the de-
sign discussed above, but it is likely that this complete programmability is not
required and would not be fully exploited. Moreover, this solution simplifies
programming since the programmer no longer needs to understand the inter-
nal architecture of the physics engine hardware. In addition, software physics
engines only offer functionality equivalent to the FPAA solution outlined here.
13.3.1.1 Analysis
The TRAC has enough operational amplifiers to simulate the vehicle suspen-
sion system. In addition, it performs all of the requisite functions: addition,
inversion and integration.
However, integration may only be performed using external components.
This places a limit on the number of concurrent integrations and therefore, on
the flexibility of the device. Nevertheless, these limits are probably sufficiently
high for this project.
The device is programmed on power-up. Therefore, dynamically repro-
gramming the device would require repeated stopping and starting. This
would prove challenging to implement effectively and would result in long
delays during which the device would be performing no useful task.
Coupled with these potential disadvantages, there is one serious problem:
the device has very recently been discontinued. This clearly indicates that the
device could not be used to implement the physics engine.
13.3.2.1 Analysis
These FPAAs are the only ones to include DACs and JTAG, which are useful
but not entirely necessary.
The biggest disadvantage associated with these FPAAs is the configuration
of the operational amplifiers. While suitable for signal conditioning and fil-
tering operations, it is likely that many operational amplifiers would not be
utilised in the physics engine, leading to unnecessary hardware with reduced
functionality. Certainly, six operational amplifiers are insufficient for even the
mass-spring-damper system. Consequently, it appears that these FPAAs are
not suitable for the physics engine.
than eight concurrent operations, but certain configurations will only allow for
less, due to the internal layout of the device.
The device can perform a wide range of functions and the user may de-
fine additional functions. Moreover, integration and differentiation may be
performed without the use of external components. It also includes an ADC,
which simplifies integration with digital components.
Finally, these devices are the first FPAAs to be completely dynamically re-
configurable. This reconfiguration may be performed either from the host com-
puter using an API or from an on-board microcontroller.
13.3.3.1 Analysis
The dynamic reprogrammability of the device is the most desirable feature for
the project, making it the most suitable of the currently available FPAAs. Both
methods of reprogramming the device, from the host computer or from an on-
board microprocessor, would be suitable for implementing this project.
Furthermore, it includes an ADC, which makes an external ADC unneces-
sary. A DAC would still be required. It includes sufficient inputs and outputs
for the vehicle suspension system. It also performs a satisfactorily wide variety
of functions with which to implement all of the necessary physics systems.
The glaring problem is that the device is quite limited in the number of
concurrent operations it can perform. This is sufficient for simple problems like
the mass-spring-damper, but not for more complex problems like the vehicle
suspension system, which requires twenty concurrent operations.
Nevertheless, the dynamic reprogrammability of the device offers a solu-
tion to this problem. The system being simulated could be divided into a num-
ber of constituent parts with each part being simulated sequentially, possibly
using input values from previous simulations. For example, the vehicle sus-
pension system could be first modelled as two mass-spring-damper systems.
The two outputs could then be supplied to a third mass-spring-damper system
with an additional halving operation, to obtain the final output. The obvious
disadvantage with this solution is that it takes longer to perform the required
operation, since in the example, three simulations were required instead of
one. Therefore, some of the advantages associated with the inherent concur-
rency of analogue computers have been lost. The simulation could then be
accelerated using the approach taken by the multiplexed physics engine, de-
scribed in Chapter 11. However, this acceleration leads to loss of accuracy and
the reconfiguration between simulations would still lead to some time being
unutilised.
Another solution to this problem would be to use multiple FPAAs but this
increases the cost of the solution and the complexity of programming the de-
vices.
In summation, these devices would be suitable for implementing the
physics engine. However, there are some disadvantages associated with the
lack of operational amplifiers. This problem will be solved as technology pro-
gresses and more operational amplifiers may be placed on a single device.
13.3.5 Disadvantages
Of course, any FPAA solution has one major disadvantage: only the analogue
part of the design may be implemented on the device.
To rectify this problem, discrete DACs could be placed at the inputs to the
FPAA and discrete ADCs could be placed at the outputs. The digital part of
the design could then be implemented in software on the host computer. This
means that part of the design is no longer hardware-based but software-based,
potentially reducing its overall speed. Since only a small part of the design
would need to be placed in software, this is likely to have only minimal impact.
Another solution would be to use a combination of an FPGA and an FPAA,
with the digital part implemented on the FPGA and the analogue part im-
plemented on the FPAA. Again, discrete DACs and ADCs would be needed.
However, this solution is likely to be of considerable expense, due to the
amount of hardware required.
The ideal solution would be to place both the analogue and digital parts of
the design on the same device, with the DACs and ADCs integrated into the
device. This is the approach taken by FPMAs.
13.4 FPMAs
FPMAs are an amalgamation of FPAAs and FPGAs, consisting of program-
mable analogue elements and programmable digital elements with the DACs
and ADCs necessary to unite the two divisions.
However, FPMAs are currently not commercially available. They only ex-
ist as research prototypes. Additionally, it is questionable as to whether such
devices will ever be commercially viable, as many potential manufacturers cite
a lack of consumer interest.
13.5 Decision
Ultimately, it was decided not to create a hardware implementation of the
physics engine. The only viable hardware for the physics engine was an FPAA,
but all of the commercially available FPAAs had some problems associated
with them. The Anadigm FPAAs appeared to be the most appropriate. FPMAs would have been more suitable still, but they are unavailable. Besides, any solution involving FPAAs would have required redesigning the analogue computer for
the differing architecture, using different software. Consequently, it was de-
cided to concentrate on furthering the VHDL-AMS solution, rather than create
a similar implementation for an alternative architecture.
Chapter 14
PC Interfaces
14.1.1 PCI
PCI is the current standard peripheral bus in desktop PCs, having superseded the slower Industry Standard Architecture (ISA) bus. It allows for data transfer at a rate of 133 MB/s. PCI evolved into PCI-X, which uses a faster clock to achieve a data transfer rate of up to 2,133 MB/s.
PCI offers one possible solution for allowing the CPU and physics engine
to communicate. Its ubiquity makes PCI solutions both low cost and commer-
cially viable. There are, however, faster peripheral buses currently available.
14.1.2 PCIe
PCIe is the next generation of the PCI bus. Although backwards compatible
with PCI, it uses a redesigned architecture to achieve a data transfer rate of
2.5 Gbps. It is expected that faster versions will be available in the future.
Although the majority of PCs do not yet support PCIe, it is available in
many new PCs, having gained widespread support from companies such as
Intel. GPU manufacturers have also supported the technology and are grad-
ually switching to this technology from AGP, due to its increased bandwidth
and speed.
The increased bandwidth and speed is likely to be of benefit to the physics
engine, particularly if it needs to operate at 72 fps (Section 15.4.3). Therefore,
this would be the most desirable way to interconnect the CPU and physics
engine. However, since PCIe is not yet widely deployed, it makes little sense to produce PCIe-only solutions. Consequently, this solution coupled with the PCI solution (Section 14.1.1) appears to be the best option.
14.1.3 Motherboard
The device could also be integrated onto the motherboard. In order to reduce
the manufacturing cost of PCs, many devices formerly provided only as expan-
sion cards are now integrated onto the motherboard. Examples include GPUs,
audio ICs and network ICs. This allows for sharing of certain resources, such
as main memory, which removes the need to place memory on each card.
Lower end desktop PCs usually use an integrated GPU, but higher end
desktop PCs usually use a graphics card. The graphics card offers higher per-
formance since it is not sharing main memory with the CPU. The same argu-
ments would apply to a physics engine. Since the physics engine would be
primarily aimed at high end users, it makes little sense to manufacture a moth-
erboard integrated physics engine for desktop PCs.
Laptops also use motherboard integrated GPUs in order to reduce space.
They typically do not feature high end graphics cards as they are rarely used
for game playing. Accordingly, it makes little sense to make a motherboard
integrated physics engine for laptops either.
However, such devices are typically unsuccessful since they consume desk
space. The advantages gained by an external device offer little more than
niches for the physics engine. Therefore, such a solution is not entirely ap-
propriate.
14.4 Conclusions
The optimal solution is to place the physics engine IC along with a GPU on
graphics cards (Section 14.2). This allows high end users to purchase a card
that performs two functions they are likely to use, saving PC expansion slots
and expense.
The other viable solution is a custom built PCI (Section 14.1.1) or PCIe (Sec-
tion 14.1.2) card. PCIe is the preferable solution due to its higher bandwidth,
but a PCI solution would still be necessary, as PCIe is currently rare.
Chapter 15
Analysis
15.1 Interface
The top-level ADE component has an interface that consists of a large number
of bits. As the calculations in Table 15.1 indicate, it utilises 846 bits or 105.75
bytes of data. The majority of ICs do not have 846 pins. Therefore, a way of
reducing this number is desirable.
The most obvious solution is to input the bits serially and convert them to
parallel on the IC, using a standard serial-to-parallel converter.
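A sketch of such a converter is shown below; it is not part of the project's code, and the entity name and generic are illustrative.

library IEEE;
use IEEE.std_logic_1164.all;

entity serial_to_parallel_sketch is
  generic (width : positive := 846);  -- width of the ADE's parallel interface
  port (clock, reset, serial_in : in  std_ulogic;
        parallel_out            : out std_ulogic_vector(width - 1 downto 0));
end entity serial_to_parallel_sketch;

architecture behavioural of serial_to_parallel_sketch is
  signal shift_register : std_ulogic_vector(width - 1 downto 0);
begin
  process (clock, reset)
  begin
    if reset = '0' then                      -- active low reset, as elsewhere
      shift_register <= (others => '0');
    elsif rising_edge(clock) then
      -- Shift one configuration bit in per clock cycle.
      shift_register <= shift_register(width - 2 downto 0) & serial_in;
    end if;
  end process;
  parallel_out <= shift_register;
end architecture behavioural;

Loading the full interface in this way would take 846 clock cycles, which is the price paid for the reduced pin count.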
Alternatively, the description of a number of physics systems could be
stored on the device. Afterwards, the programmer would only select the re-
quantity of area. To use the minimal area, the area consumed by all of the resistors and the area consumed by all of the capacitors should be equal. This can be achieved by setting the resistors to 31,959,361.354853709459277903125423 Ω and setting the capacitors to 0.000000031289736640752012689654937536124 F. These values are obviously unsuitable for fabrication but serve to illustrate the likely area of the physics engine. The calculated area uses these values.
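Note that, despite their precision, these values still respect the unity weighting chosen in Section 8.5.2, since

R × C ≈ 31,959,361.35 Ω × 31.29 nF ≈ 1 s.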
The operational amplifiers, ADCs and DACs would typically be purchased
as intellectual property (IP) blocks. Therefore, to calculate the area of these components, some recent IP blocks designed by austriamicrosystems were used [41, 42, 43].
The digital control unit would be constructed from standard logic gates. A 2-to-4 decoder consists of two NOT gates and four NAND gates; since a NOT gate can be implemented as a NAND gate with its inputs tied together, this is equivalent to six NAND gates. A NAND IP block was therefore used to determine the area [44]. Registers could be constructed from standard logic gates, but they are typically constructed with some optimisations. Consequently, an IP block for a D-type flip-flop was used to determine the size of the registers [45].
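Purely as a behavioural illustration of the six-NAND decomposition (the function and signal names below are invented for this sketch and do not correspond to the actual design), the decoder can be modelled in C as follows:

/* Single two-input NAND gate. */
static int nand(int a, int b) { return !(a && b); }

/* 2-to-4 decoder built only from NAND gates: two NANDs with their inputs
   tied together act as NOT gates, and four NANDs produce the active-low
   outputs. */
static void decoder_2to4(int a1, int a0, int y[4]) {
    int na0 = nand(a0, a0);   /* NOT a0 */
    int na1 = nand(a1, a1);   /* NOT a1 */
    y[0] = nand(na1, na0);    /* low when a1 a0 == 00 */
    y[1] = nand(na1, a0);     /* low when a1 a0 == 01 */
    y[2] = nand(a1,  na0);    /* low when a1 a0 == 10 */
    y[3] = nand(a1,  a0);     /* low when a1 a0 == 11 */
}

Since only the NAND gate is used, the decoder area can be estimated as six times the area of the NAND IP block.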
In addition, routing typically adds 10% to the size of the IC.
These data are displayed in Table 15.2.
15.3 Design
One of the disadvantages of analogue design is that it is far more labour intensive than digital design. Experienced designers regularly take weeks to size complex analogue cells, and the layout process is similarly labour intensive. In contrast, the digital circuit sizing and layout processes have been highly automated, with only manual fine-tuning required today. This leads to a reduced time-to-market, making digital designs more competitive than analogue designs.
However, due to the current resurgence of analogue circuits in applications such as wireless communications (Chapter 5), there has been renewed interest in automating these techniques for analogue designs. Moreover, the rapid increase in computing power predicted by Moore's Law makes automation continually more viable, as the once extremely long execution times have now been substantially reduced. In addition, distributed computing techniques have been employed so that analogue back-end engineering runs at speeds comparable to its digital equivalent. Consequently, analogue designs now have virtually the same time-to-market as digital designs, and the technology is likely to improve further as time progresses.
15.4 Timing
This section analyses how fast the physics engine could be multiplexed. The relevant issues, the ADC conversion rate and the operational amplifier bandwidth, are considered first. Then, a sample problem is designed and analysed, and the results are compared against a software implementation of the problem.
Another consideration is the bandwidth of the operational amplifiers. It was mentioned in Section 6.4 that ideal operational amplifiers have infinite bandwidth; in other words, they can process an infinite range of input frequencies. In contrast, practical operational amplifiers have a finite, but usually very large, bandwidth. If the analogue computer is executing faster than real-time, this may lead to higher signal frequencies and the possibility of exceeding the operational amplifiers' bandwidth. Current operational amplifiers have bandwidths ranging from a few kilohertz to a few hundred megahertz [48]. Moreover, high bandwidth operational amplifiers do not cost substantially more than their lower bandwidth counterparts. The operational amplifier IP block used in Section 15.2 has a bandwidth of 2.57 MHz [41].
Whether the operational amplifiers' bandwidth limit will be reached depends on the system in question. In the case of the mass-spring-damper example developed in Chapter 8, the maximum frequency is 1.1 Hz. This example should be somewhat representative of the type of problems for which the engine would be used. If fifty of these simulations were to be multiplexed, the maximum frequency would still not exceed 55.5 Hz. These figures are well within the limits of today's operational amplifiers. Consequently, it appears that the bandwidth of the operational amplifiers will also be unproblematic.
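The scaling argument can be made explicit. If $N$ simulations are multiplexed within one real-time frame, each must be executed roughly $N$ times faster than real-time, so every signal frequency is scaled by the same factor (the switching overhead between simulations is neglected in this sketch):

\[
  f_{\max,\text{multiplexed}} \approx N \, f_{\max,\text{real-time}} .
\]

For $N = 50$ and the mass-spring-damper example, this gives a maximum frequency of only a few tens of hertz, several orders of magnitude below the 2.57 MHz bandwidth of the operational amplifier IP block.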
A software implementation of the same problem, built with the Open Dynamics Engine (Figure 15.1), was created for reference purposes, but the graphical component was disabled for the analysis. The system used for testing had an AMD Athlon 64 3000+ (2 GHz) CPU with 512 MB of RAM, making it a relatively powerful system. As its operating system, it used the 32 bit version of Fedora Core 3, which used GNU/Linux kernel 2.6.9-1.667. The 64 bit version was not used since ODE failed to compile with the 64 bit X11 libraries. The average CPU time taken to execute the simulation over 100 runs was 4.053 s, which is much slower than the figure obtained from the project's physics engine and slower than real-time.
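To give a sense of the numerical work that a software engine performs for each suspension element, the following C sketch integrates a single mass-spring-damper using a semi-implicit Euler scheme. It is only an illustration: the parameter values are placeholders and this is not the ODE-based benchmark described above.

#include <stdio.h>

/* Minimal model of one suspension element:  m x'' + c x' + k x = 0 .
   All parameter values are placeholders for illustration only. */
int main(void) {
    const double m  = 250.0;    /* sprung mass per wheel, kg   */
    const double k  = 16000.0;  /* spring constant, N/m        */
    const double c  = 1500.0;   /* damping coefficient, N s/m  */
    const double dt = 1e-4;     /* integration time step, s    */

    double x = 0.05;            /* initial displacement, m */
    double v = 0.0;             /* initial velocity, m/s   */

    for (double t = 0.0; t < 5.0; t += dt) {
        double a = -(c * v + k * x) / m;  /* acceleration from the equation of motion */
        v += a * dt;                      /* semi-implicit Euler: velocity first...   */
        x += v * dt;                      /* ...then position                         */
    }
    printf("displacement after 5 s: %g m\n", x);
    return 0;
}

A software engine must repeat this kind of time-stepping for every simulated object at the chosen frame rate.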
Note, however, that the two systems cannot be compared precisely. For
example, the software physics engine is modelling more than just the vehicle
suspension systems. It is also modelling collision detection and the effects of
gravity, for instance. More importantly, the software engine simulates the flight
of the vehicles through the air, whereas the hardware engine does not. How-
ever, the comparison of the two figures does provide a rough indication of the relative performance of the two systems.
Chapter 16
Conclusions
This chapter provides some conclusions to the project, outlining the knowledge
acquired and potential future work. The chapter concludes with a summary of
what was achieved during the project.
16.3 Summary
The ultimate result of this project is a reconfigurable hybrid computer with a purely digital interface. This computer can serve as a physics engine because it is adept at performing the kind of sophisticated simulations such an engine requires, such as the vehicle suspension system. Moreover, the computer executes in real-time, which is necessary for computer games. Accuracy is high, albeit with some minor deviations that are unimportant for computer games.
In addition, the physics engine is capable of multiplexing three simulations. This allows multiple simulations to run concurrently, which is pivotal for games, as they often contain a multitude of objects that must be simulated at once.
Based on these points, the constructed physics engine would be suitable for use as a hardware equivalent to today's software physics engines, if it were to be synthesised and connected to a PC. Although some minor issues remain, the project forms a foundation for future work and demonstrates the viability of the original concept. Therefore, this project was successful in achieving its goals.
Bibliography
[1] D. H. Eberly, Game Physics, ser. The Morgan Kaufmann Series in Interac-
tive 3D Technology, D. H. Eberly, Ed. San Francisco, California, USA:
Morgan Kaufmann, 2004.
[2] (2005) PhysX. AGEIA. Mountain View, California, USA. [Online].
Available: http://www.ageia.com/technology.html
[3] (2005) Havok. Havok.com. Dublin, Ireland. [Online]. Available: http://www.havok.com/
[4] (2005) Meqon. Meqon Research. Linköping, Sweden. [Online]. Available:
http://www.meqon.com/
[5] (2005) RenderWare Physics. Criterion Software. Guildford, Surrey, UK.
[Online]. Available: http://www.renderware.com/physics.asp
[6] (2005) NovodeX. AGEIA. Mountain View, California, USA. [Online].
Available: http://www.ageia.com/novodex.html
[7] (2005) SD/FAST. PTC. Needham, Massachusetts, USA. [Online]. Avail-
able: http://www.sdfast.com/
[8] R. Smith. (2005, Feb.) Open Dynamics Engine. ODE. Mountain View,
California, USA. [Online]. Available: http://ode.org/
[9] S. McMillan. (2001, July) DynaMechs. DynaMechs. Columbus, Ohio,
USA. [Online]. Available: http://dynamechs.sourceforge.net/
[10] H. Keller. (2001, Feb.) AERO. AERO. Stuttgart, Germany. [Online].
Available: http://www.aero-simulation.de/
[11] D. M. Bourg, Physics for Game Developers, R. Denn, Ed. Sebastopol, California, USA: O'Reilly, 2002.
[12] R. Smith. (2002, Nov.) IGC 2002 slides. ODE. Mountain View, California,
USA. [Online]. Available: http://ode.org/slides/igc02/index.html
[13] "Analogue," in The Oxford English Dictionary, 2nd ed., J. A. Simpson and E. S. C. Weiner, Eds. Oxford, UK: Clarendon Press, 1989, vol. I, pp. 431–432. [Online]. Available: http://dictionary.oed.com/cgi/entry/50007887
[14] P. A. Holst, "A note of history," Simulation, pp. 131–135, Sept. 1971.