
IP Multicast analysis in Market Server

Applications and financial systems

DD221X, Master’s Thesis in Computer Science (30 ECTS credits)


Degree Programme in Computer Science and Engineering, 300 credits
Royal Institute of Technology, 2014

ROBERT HEDIN

Supervisor: Örjan Ekeberg

Examiner: Anders Lansner
Abstract
Software applications may be able to utilize multicast (UDP) transmission
instead of unicast (TCP) when communicating through a network. The
benefits of switching to multicast depend on what kind of software would
be using the technique and on the type of transmission.
This study investigates the benefits of using multicast instead of
unicast transmission for parts of the communication within a financial
system. The financial system contains a server (Market Server),
communicating with a market place, and a set of clients communicating
with the Market Server. The interesting aspect of a financial system is
the requirement to transmit large amounts of data at the lowest possible
latency. A model was created in order to compare performance and latency
when using multicast and unicast. The model was tested under the same
conditions as the previous implementation (unicast), using varying
numbers of clients and transmitting data at different speeds.
The result of this Master’s thesis was that performance and latency
did not become worse when specific message types were transmitted using
multicast. Instead, the software became more scalable and could handle
more clients connected at the same time. The resources required to run
the Market Server decreased in proportion to the number of clients able
to connect, compared to the previous implementation. The results of this
study show that the use of UDP/multicast is more efficient, and my
recommendation is to use this technology in these kinds of transaction
systems.
Referat

An analysis of IP Multicast in
market server applications and financial systems

Client-server applications that distribute identical or similar
information to multiple recipients via TCP/IP (unicast) can in some
cases benefit from switching the communication medium to multicast. This
study examined the benefits of using multicast within parts of a
financial system. The financial system consists of a "Market Server"
that, among other things, acts as a communication layer between clients
and markets (e.g. EUREX, OMX or NASDAQ). Clients connect to a Market
Server in order to communicate indirectly with a market. The information
distributed from a Market Server to the clients is in many cases
identical, which made this system a good candidate for investigating the
advantages and disadvantages of multicast.
What makes this study interesting is that financial systems must be
able to send large amounts of data between a server and clients with as
low latency as possible. A model was therefore created to measure the
difference between using multicast and TCP/IP unicast. The new model
contained the same functionality as the previous implementation, and the
requirements placed on the previous implementation, such as guaranteed
delivery of information, remained. During the tests the model was
evaluated under the same conditions as the previous implementation with
different parameters, e.g. a varying number of clients and varying size
and rate of the information distributed.
The results of this study showed that performance and latency were
approximately the same as for the previous model, while the number of
clients using the system could be increased. The hardware resources used
to run the Market Server decreased in proportion to the number of
clients that could run simultaneously, compared to the previous
implementation.
Contents

1 Introduction
  1.1 Motivation
  1.2 Background
    1.2.1 Transport layer
    1.2.2 Internet layer and transmission methods
    1.2.3 AMS – Arena Market Server
    1.2.4 Front Arena clients
    1.2.5 TNP – Transaction Network Protocol
    1.2.6 MESMA – Multi Exchange, Simulation, Measurement and Analysis platform
  1.3 Previous research
    1.3.1 Area of research
    1.3.2 Problems with multicast
    1.3.3 Online auction
    1.3.4 Multimedia
    1.3.5 PGM (Pragmatic General Multicast)

2 Problem description
  2.1 Problem definition
  2.2 Goals
  2.3 Limitations
  2.4 Requirements
    2.4.1 Test accuracy
    2.4.2 Test reliability
    2.4.3 Model

3 Methodology
  3.1 Model
  3.2 Hypotheses
  3.3 Verification

4 Implementation
  4.1 Overview
  4.2 Multicast enabled sockets
  4.3 Message states
    4.3.1 Receive message state
    4.3.2 Recovery State
  4.4 Recovery mechanism
  4.5 Market Server

5 Experiment
  5.1 Environment
    5.1.1 Hardware
    5.1.2 Software
  5.2 Test suites
    5.2.1 Information transmitted
    5.2.2 Parameters
    5.2.3 Varying number of clients
    5.2.4 Increased amount of transmitted data

6 Results and discussion
  6.1 Results
    6.1.1 Test case using varying number of clients
    6.1.2 Test case with increased amount of transmitted data
  6.2 Discussion
    6.2.1 Increase of clients
    6.2.2 Latency
    6.2.3 Performance
    6.2.4 Data transmitted
    6.2.5 Future work
  6.3 Conclusion

Bibliography
Chapter 1

Introduction

The flow of data transmitted between applications, separated by a network, may be
of significant size and can affect the performance and latency of the application. The
performance of an application is in most cases considered while developing software
but the performance impact from the network is not as highlighted as it perhaps
should be. There are multiple factors that can influence the network performance,
such as the transport protocol or the method of sending data (e.g. multicast or
unicast). Choosing the appropriate transport protocol for the needs of a specific
application can reduce the amount of data transmitted over the network. For exam-
ple, a study has shown that certain applications may benefit from using multicast
communication instead of unicast, depending on the nature of communication [3].
If part of the application transmits the same data to multiple recipients then
multicast may be a good approach. If the application on the other hand transmits
user-specific data, which is highly dependent on each unique user, then usually a
unicast solution is better. It is not as simple as just picking a transport protocol;
there are multiple factors to consider while choosing. Aside from the type of data
transmitted and the number of recipients, there is another important factor to
consider while choosing transport protocol – reliability. The reliability involves the
integrity of the data, the guarantee of data being delivered, and the flow of data
being regulated to the appropriate rate. Different protocols address the aspect of
reliability in their own way; an example of this is the reliability features included in
UDP (User Datagram Protocol) compared to TCP (Transmission Control Protocol)
where UDP is the least reliable protocol. However, in order to use multicast the
application needs to communicate with UDP because multicast is not supported
by TCP. More details about TCP, UDP and Multicast can be found in section
1.2.1-1.2.2.
The purpose of this thesis is to evaluate whether an improvement could be
reached by using multicast instead of unicast in financial instruments applications.
The improvements regard CPU usage, network traffic, and information availability
– number of clients using the application. The previous implementation has compo-
nents emulating multicast through TCP by sending the same information multiple
times to the subscribing clients. The new implementation replaces these compo-
nents with genuine multicast using UDP. The two implementations are compared
to each other to see which one performed best under the circumstances. Multicast
is not flawless and certain prerequisites are required to enable the use of it – all of
which is discussed in this thesis.

1.1 Motivation
SunGard is a company providing software to financial institutions (i.e. investment
banks and hedge funds) for integrated sales, trading, risk management and oper-
ations/distribution across multiple asset classes. This software is built from a
collection of components, such as Arena Market Server (AMS) and PRIME, a
client application. AMS works as a layer between an exchange for financial instru-
ments and the client application, as shown in Figure 1.1. The information transmitted
between these applications is required to be delivered, and it has to be delivered with
as low latency as possible.

Figure 1.1. Front Arena overview

SunGard has developed a tool that can measure the performance of the AMS;
this tool is called MESMA (Multi Exchange, Simulation, Measurement and Anal-
ysis platform). MESMA has the ability to simulate an exchange and the clients
communicating with AMS. SunGard has performed extensive testing with MESMA
to evaluate the performance of AMS and the network traffic. A selection of these
tests was performed in Intel’s lab environment [20]. The benefits of performing
tests at Intel’s lab are the possibility to use high-end hardware with configurations
similar to the production setups used by customers. The tests showed that the
performance of the system (CPU resources) and the latency was highly dependent
on the number of clients and the activity each of the clients conducted. SunGard
has therefore issued a request to explore other communication protocols used be-
tween AMS and the client applications. The communication protocol used at the
time when this thesis started was strictly TCP, and SunGard wanted to see if an
improvement could be reached by using multicast (UDP) instead. The objective of
this thesis is to investigate if the use of multicast could improve the performance
and the availability of financial instrument applications. To verify the hypotheses
of this thesis a quantitative methodology is exercised comparing test results from
the original implementation, based on TCP, and a prototype based on multicast.

1.2 Background
This section describes the foundation of the different protocols, frameworks, test
applications, and core applications of Front Arena that are used throughout this
thesis in order to implement the use of multicast. There are two different naming
conventions to describe the different layers of protocols: the OSI model and the
TCP/IP (DoD) model. Table 1 shows the difference, and TCP/IP is the naming
convention used throughout this thesis.

OSI                      TCP/IP (DoD)       Protocol(s)
Layer 7 (Application),   Application layer  DHCP, DHCPv6, DNS, FTP, HTTP,
Layer 6 (Presentation),                     IMAP, IRC, LDAP, MGCP, NNTP,
Layer 5 (Session)                           BGP, NTP, POP, RPC, RTP, RTSP,
                                            RIP, SIP, SMTP, SNMP, SOCKS,
                                            SSH, Telnet, TLS/SSL, XMPP,
                                            and more
Layer 4 (Transport)      Transport layer    TCP, UDP, PGM, DCCP, SCTP,
                                            RSVP, and more
Layer 3 (Network)        Internet layer     IP (IPv4/IPv6), ICMP, ICMPv6,
                                            ECN, IGMP, IPsec, and more
Layer 2 (Data Link),     Network Interface  ARP/InARP, NDP, OSPF, Tunnels
Layer 1 (Physical)                          (L2TP), PPP, Media access control
                                            (Ethernet, DSL, ISDN, FDDI,
                                            DOCSIS), and more

1.2.1 Transport layer


In the Internet Protocol suite there are multiple layers of communication where each
layer has its own level of abstraction. The layer that handles the communication
between applications is the Application layer, whose name derives from the type
of communication endpoints [1]. The Transport layer provides the Application layer
with process-to-process communication, where a process is a running program. The
Transport layer suite contains many protocols for communication between processes.
The two protocols that concern this thesis are TCP and UDP.
TCP and UDP are among the most frequently used protocols for communica-
tion. Although these two protocols have some similarities, they are far from the
same. What separates them is the way they deal with reliability.
UDP is a connectionless, unreliable transport protocol and does not add anything to
the service of IP (Internet Protocol). TCP on the other hand, is a stream-oriented
protocol meaning that two applications communicating using TCP have to estab-
lish a connection and maintain it during the entire session. The stream-oriented
protocols, such as TCP, have a lot of positive features included such as congestion
control, flow control, and delivery acknowledgement that are not included in UDP
[1].

UDP (User Datagram Protocol)


UDP, like all the other Transport layer protocols, has certain responsibilities. The most
important one is to provide process-to-process communication for the ap-
plication layer; UDP completes this task with the help of port numbers [14]. UDP
does not have features such as congestion control or delivery acknowledgement.
However, there is a low-level feature for error control based on a checksum, which
is calculated before sending the datagram and placed in the 16-bit checksum
field in the UDP header [14]. The checksum is then recalculated at the receiving
end and compared with the checksum field. If the two checksums are different an
error has occurred and the packet is dropped without the sender or recipient being
notified [1].
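To make the checksum mechanism concrete, the following is a minimal sketch of the
ones’-complement Internet checksum that UDP uses (RFC 768/1071). It is illustrative
only; the real UDP checksum also covers an IP pseudo-header, which is omitted here.

    #include <cstdint>
    #include <cstddef>

    // Sum the data as 16-bit big-endian words and return the ones'-complement
    // of the folded sum, as placed in the UDP header's checksum field.
    uint16_t internet_checksum(const uint8_t* data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (static_cast<uint32_t>(data[i]) << 8) | data[i + 1];
        if (len & 1)                      // odd length: pad the last byte with zero
            sum += static_cast<uint32_t>(data[len - 1]) << 8;
        while (sum >> 16)                 // fold carries back into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        return static_cast<uint16_t>(~sum);
    }

The receiver runs the same computation over the received datagram; a non-matching
result means the packet is silently dropped, exactly as described above.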
As mentioned earlier, UDP does not add anything to the service of IP, so why
would a process use this protocol? The answer to this question is the simplicity,
low overhead, and the ability to use multicast, which is not possible through TCP.
Another strong feature of UDP is that the process of sending a datagram requires
little interaction between the recipient and the sender. This is due to the fact that
the communication is connectionless and it does not concern the sender whether
the datagram was delivered or not [2].

TCP (Transmission Control Protocol)


TCP is the most essential protocol in the Transport layer. TCP has gone through
many revisions during the past few decades and is therefore highly established and
stable. As with UDP, TCP offers process-to-process communication for the applica-
tion layer, which is accomplished the same way as with UDP by port numbers. The
difference is that TCP is a stream-oriented protocol that requires an open connec-
tion throughout the entire session [1]. UDP and TCP have separate ways of dealing
with reliability. As explained earlier, UDP only has the checksum for error control
and nothing else. TCP on the other hand provides an array of features to ensure
the safety and regulation of data being transmitted. Because TCP streams bytes,
it can offer byte-oriented flow control. This is a strong feature that keeps data
flowing at the correct pace so that the loss of data is minimized. TCP also
includes features for error control, delivery
acknowledgement, retransmission of lost data, and out-of-order control. The error
control includes mechanisms such as checksum calculation [15].
TCP can guarantee packets to be delivered (unless the connection is lost) and
the flow of these packets to be regulated. There is however a price for this reliability.
These features contribute to an overhead of data being transmitted between the two
applications. For example, if a packet is lost somewhere between the server and a
client, the server has to resend the packet after a given timespan due to missing
acknowledgement. The server sends the same packet for a predefined number of
times or until it is confirmed as delivered by the client [2]. If data is frequently
lost the need to retransmit data increases, which leads to increased traffic on the
network, increased CPU consumption, and higher latency.

1.2.2 Internet layer and transmission methods


Unicast
Unicast is the concept of sending data to only one destination with a unique network
address as depicted in Figure 1.2. This kind of transmission is supported by
both TCP and UDP with the exception that TCP is stream-oriented and the data
transmitted is sent in bytes rather than datagrams. If an application needs to send
the same data to multiple recipients it has to be done individually with a unique
copy of the data. This may strain the network if the number of recipients is high
and the data to be sent is large or sent frequently.

Multicast
Multicast is based upon one-to-many, many-to-many, or one-to-one communication
[17]; perhaps the best-known communication method is one-to-many, depicted
in Figure 1.3.
Multicast is, as mentioned, supported by UDP but not by TCP; this is due to
the fact that it does not suit some of the essential features of TCP. Features such
as:

• Flow control - How could multicast decide the flow of data in order to minimize
the loss of data sent to the recipients? The flow of data is based on feedback
from the recipients, and if the sender were to receive this kind of report from
multiple recipients there is a possibility of feedback implosion [6].

• Retransmission - What would happen in the case that data was not delivered
to a recipient? An option would be to retransmit the lost data on the multicast
channel, but that would be inefficient because not every recipient may need the
data. If one were to choose this approach and the loss of data was a repeating
event, the retransmissions could swamp the network [5].

Figure 1.2. Unicast (Router/Switch)

There are reliable multicast protocols that are under research such as PGM
(Pragmatic General Multicast) which include some of these features; however, PGM
is not a standard protocol and was not tested in this thesis but is discussed in section
1.3.5 [19] [12].
If multicast were applied to the same example as described in the Unicast section
the following scenario would occur. The initial step is for the acquired recipients to
subscribe to the multicast group at which the application sends data through. The
subscription is needed for the router to identify each individual destination address
so that the datagram can be delivered. The recipients can then start to receive
datagrams sent by the application on that specific multicast group. The advantage
of this approach is that the datagram is not being copied until it has arrived at
the intermediate router where two or more recipients do not share the same path.
The use of multicast has been shown to reduce the amount of data traversing the
network with a factor of O(√n), where n is the number of clients, for suitable
applications [4].

Figure 1.3. Multicast using routers, and switches with IGMP snooping enabled
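To make the sending side of this scenario concrete: one sendto() call to the group
address suffices regardless of how many recipients have subscribed. The sketch below
uses BSD-style sockets (the Winsock calls on Windows are nearly identical); the group
address 239.1.1.1 and port 5000 are placeholders, not values from this thesis.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <cstddef>

    void send_to_group(const char* data, size_t len) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);        // plain UDP socket
        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_addr.s_addr = inet_addr("239.1.1.1");   // group in 224.0.0.0/4
        group.sin_port = htons(5000);
        // One send regardless of the number of subscribers; the network copies
        // the datagram only where the recipients' paths diverge.
        sendto(sock, data, len, 0,
               reinterpret_cast<const sockaddr*>(&group), sizeof(group));
    }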
The problem with multicast is still the absence of reliability, meaning that there
is no guarantee of the data transmitted being delivered or delivered in the correct
order. However, if reliability is less important for portions of the system’s
communication than the reduction of bandwidth usage, then the option of using
multicast is at hand and could improve performance there.
Another downside of using multicast is the importance of a correctly configured
network environment (LAN). A typical example of this would be an environment,
built up by switches, where IGMP snooping was not configured or if IGMP snoop-
ing was not included in the switches. By default, switches forward every multicast
packet on all ports (Multicast flooding) because they are not aware of the subscrib-
ing hosts and therefore rely on the recipients to handle the message(s). Multicast
flooding can strain the network and each individual host because it requires the
host to process packets they have not solicited. However, newer switches come with
a feature called IGMP snooping that allows the switch to listen in on the IGMP
conversation (e.g. membership request) between hosts and routers. This feature
enables the switches to maintain a map of hosts and the corresponding multicast
streams, so packets are forwarded only to the required ports. Figure 1.4 illus-
trates the absence of IGMP snooping and Figure 1.3 illustrates the use of IGMP
snooping.

Figure 1.4. Multicast using routers, and switches with IGMP snooping disabled

The task of deciding whether to use multicast or not is difficult. It depends
upon many factors such as what type of data is being transmitted, the number
of recipients, whether the data is unique for individual recipients or the same for
everyone, the quantity of data transmitted, and perhaps the most important factor,
what level of reliability is required. These questions are explored in this thesis and
thoroughly tested.

1.2.3 AMS – Arena Market Server


AMS is the technical platform used for all of the Market Server applications devel-
oped by SunGard. AMS communicates directly with a primary exchange or with
an intermediate AMS. The user communicates with an AMS using one of the Front
Arena clients described in the section 1.2.4.
AMS is delivered in many different packages and the three most common appli-
cations used by the customers are:

• AMAS - Arena Market Access Server

• AIMS - Arena Internal Market Server

• AMSC - Arena Market Server Concentrator


AMAS communicates directly with an exchange such as NASDAQ-INET or EU-
REX. An AIMS is somewhat similar to AMAS except that it does not communicate
with an exchange directly but handles the internal communication between multiple
AMAS instances. AMSC works as a layer between AMAS and the traders in order
to exclude certain information from an AMAS. The purpose of AMSC is to mini-
mize the duplicated data being transmitted and to help distribute TNP multicast.
An AMS setup is depicted in Figure 1.5.

Figure 1.5. AMS environment

An AMS is usually set up with the relationship of one AIMS and multiple AMAS
instances, one for each exchange. It is also possible to install multiple AMS in-
stances/components on a single machine to fully utilize the resources. The commu-
nication between the client applications and AMS is handled using TNP, which is
described in section 1.2.5.

1.2.4 Front Arena clients


For the traders to view and place orders there are three client applications to use.
Some features are shared among all three applications, whereas some are unique to
a specific application. These client applications are:

• OMNI

• PRIME

• SOFT BROKER

The flow of data is depicted in Figure 1.6 where the example illustrates the
event of a trader entering an order.

Figure 1.6. Enter order data flow

The picture shows how the client application, in this case OMNI, places an order
request that in turn is passed on to the exchange. Afterwards the exchange sends
a reply message, which AMS processes and passes back to the client.

1.2.5 TNP – Transaction Network Protocol


TNP is an application layer protocol developed by SunGard that contains all the
functionality needed to communicate between two peers, for example an OMNI and
an AMAS. Figure 1.7 illustrates the TNP connectivity between three clients and a
server. The transport protocol used within TNP has until now been TCP.
TNP offers the capability to send and receive information in the form of TNP-
message(s). TNP-message(s) have a hierarchic structure to enable multiple levels of
message(s) where one message can contain other message(s). The relevant part of
these message(s) is that some of them are transmitted using what is internally called
multicast or MC. However, this multicast technique is an application protocol to
emulate multicast with the exception of using TCP rather than a genuine multicast
transport protocol such as UDP. The multicast aspect of this implementation is
achieved by sending the information to a collection of subscribers, once per network
socket/connection, which emulates genuine multicast. The more connected clients
subscribing on an information flow, the more effort (CPU) is spent on transmitting
data.
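A hedged sketch of this application-level fan-out (the Connection type and its members
are illustrative placeholders, not the actual TNP classes): the same payload is written
once per subscribing connection, which is why the server-side cost grows linearly with
the number of subscribers.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Placeholder for a TNP TCP connection; send() stands in for a socket write.
    struct Connection {
        int id;
        void send(const std::string& bytes) {
            std::printf("send %zu bytes to client %d\n", bytes.size(), id);
        }
    };

    // "MC" over TCP: one unicast copy of the message per subscriber.
    void mc_publish(std::vector<Connection>& subscribers, const std::string& msg) {
        for (Connection& c : subscribers)
            c.send(msg);
    }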
Example of information sent through TNP multicast (TCP):

Figure 1.7. Previous version of TNP

• Public orders - Buy or sell orders from all exchange members, same as Market
prices but with better granularity.

• Private orders - Detailed information like Public orders but generated from buy
or sell order placed inside the domain of a member.

• Market prices - Price, volume, and order depth at which a good or service is
offered by an exchange (sell and buy). Public orders aggregated by price, a
compact view.

• Price details - Information message generated upon a match between two orders
(e.g. last price, opening price, and closing price).

For example, the three clients in Figure 1.7 are subscribing to Market prices
information. The server would construct a list containing the three clients (con-
nections) and send the Market price information to each connection to simulate
multicast.

1.2.6 MESMA – Multi Exchange, Simulation, Measurement and Analysis platform

MESMA is a tool developed by SunGard for measuring the performance of AMS.
This tool has been used to evaluate the necessity of using multicast for parts of the
communication in TNP.
MESMA simulates an exchange where it sends and receives information relevant
to a Front Arena client. MESMA also simulates multiple clients that send requests
and receive updates. The structure of MESMA is depicted in Figure 1.8 in contrast
to a real world setup.

Figure 1.8. MESMA overview

This software replaces the exchange so that the latency of the unit under test, band-
width, throughput, and other aspects, e.g. CPU load, can be evaluated. This
approach allows for a controlled interaction between the server and the client, which
is not possible with the alternative of using recorded market data and playback be-
cause of the inconsistency. The measurements are performed on each individual
transaction during an experiment – there is no average or sampling because the
behavior of the system is different under load.
The time is measured in microseconds, using the Intel CPU instruction RDTSC
(Read Time-Stamp Counter). This enables very accurate measuring of each trans-
action and it does not create any dependencies on any external appliances. Every
transaction is measured by the time it takes to send the transaction and receive its
acknowledgement. Measured times are bucketed, i.e. counted and stored as a pair
– number of occurrences per measured time.
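As an illustration of this measuring scheme (a sketch, not MESMA’s actual code),
each transaction is timed with the time-stamp counter and the elapsed tick count is
used as a bucket key; converting ticks to microseconds requires the machine’s TSC
frequency.

    #include <x86intrin.h>   // __rdtsc on GCC/Clang; use <intrin.h> with MSVC
    #include <cstdint>
    #include <map>

    std::map<uint64_t, uint64_t> buckets;  // measured ticks -> occurrences

    template <typename Fn>
    void measure_transaction(Fn&& send_and_wait_for_ack) {
        uint64_t start = __rdtsc();        // read the time-stamp counter
        send_and_wait_for_ack();           // send transaction, block until its ack
        uint64_t ticks = __rdtsc() - start;
        ++buckets[ticks];                  // store as a (time, count) pair
    }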
What makes MESMA such a powerful tool is the ability to specify parameters
used in each test-case, parameters such as the number of clients, number of order
books, load across each order book, and the duration of the tests. This feature
enables each test to be consistent and repeatable which is a requirement for this
thesis.

1.3 Previous research


1.3.1 Area of research
The previous research that has been evaluated during this thesis revolves around
two major areas: multicasting in multimedia [5] and online auctions [3]. The amount
of research on multicast in financial systems is limited which can be problematic
because there are not many studies to compare with. However, much of the re-
search done within the previously mentioned fields can be applied to financial sys-
tems because they derive from the same problem, transporting user-unspecific data
to multiple recipients. These two research fields offer unique insights into the problem
because they deal with two different aspects of it. Multimedia
covers the problem of transmitting large amounts of data, whereas the problem of
applying multicast in an online auction focuses more on the reliability of trans-
mitting data. The problem, which this thesis concerns, is located somewhere in
between these two research fields where both reliability and the ability to transmit
much data with low latency are important.

1.3.2 Problems with multicast


The problems associated with the use of multicast often relate to reliability issues.
Because multicast is not supported by TCP, the reliability features (if needed) have
to be implemented in the application. The use of multicast may be supported by
the physical and data-link layer but more computational power is needed in the
application layer for multicast to be as reliable as TCP [9]. If these reliability
features are poorly implemented then the use of multicast might not be worth it
due to the risk of high latency. However, if the latency is not the greatest concern
then a solution with less sophisticated reliability features can still be sufficient
because it does not strain the network as much as TCP.
Another issue raised by the multimedia research is the fact that not all routers
support forwarding multicast datagrams. Older routers do not support multicast
packets; however, there is a technique called tunneling that can be used by these
routers to work around this problem. The tunneling protocol is basically an encapsulation
of the packets so multicast-unaware routers can forward multicast packets between
two multicast-aware routers [5]. Routers that do not support multicast go hand
in hand with the problem that network switches have when handling multicast, as
previously mentioned as Multicast flooding. It is not unjustified for people to think
that multicast is an immature and unreliable technology; there is no guarantee that
a network supports multicast or routers/network switches with IGMP snooping
(efficient multicast). Fortunately, more and more routers are being upgraded to
handle multicast packets and more and more network switches ship with IGMP
snooping as an option.

1.3.3 Online auction


The study [3], which compared the use of multicast and unicast in an online
auction, clearly states that the load on the network is lower with the use of multicast.
The tests were performed in an isolated network (free of traffic), meaning that all the
network resources were available for the online auctions. Two different modes were
evaluated, unicast and multicast. Different test cases were then performed using
a varying number of clients (bidders) to see if or when the load on the network
would be different for the two modes. To get an extra perspective of the problem,
the study also included another parameter, simulated network traffic used to mimic
an environment with a lot of traffic. The results show that when the number of
bidders reached 20 the traffic rate decreased rapidly and the packet loss increased for
the unicast mode, meaning that unicast is vulnerable to a high workload. The other
tests using a heavily loaded network showed that the multicast mode only suffered
minor packet loss and the unicast mode performed even worse regarding traffic
rate [3]. This indicates that multicast is a more stable solution even on a heavily
loaded network with respect to traffic rate, reliability, and network load. However,
as mentioned before, only unicast has the possibility to increase the reliability (with
TCP) at a small cost.

1.3.4 Multimedia
There are studies [5] pointing toward the fact that the use of multicast may benefit
certain applications, especially in the field of multimedia. A canonical example of
this took place in March 1992 at the meeting of the Internet Engineering Task
Force (IETF) in San Diego, where a live audio meeting was held [7]. The test
was conducted with 20 participants placed on three different continents spanning
over 16 time zones. The results of this experiment and similar research led to a
better insight of the capabilities and issues involved in using multicast [7] [8]. In
the past, multicast was considered a service with limited use and applicable to few
areas, but research has shown that multicast is a much desired service which is
needed for efficient group applications [?].
The ideal solution according to the study ’The multimedia multicasting problem’
[5] would be for the sender to treat the multicast group as a whole and not on an
individual basis. The issue with this is that either some recipient’s retransmission
request is ignored or the sender wastes resources performing retransmissions to all
the recipients [5].
There is extensive research into finding solutions to these problems. In this the-
sis only a limited set of reliability features is implemented, but they are nevertheless
important as a discussion topic and perhaps future research. Examples of these
solutions are:

• Enable the transmitter and recipients to cooperate in handling lost data. A
recipient who discovers a loss can multicast a retransmission request to the
group and whoever has the missing data can multicast it back to the group
[5] [10].

• Try to solve the problem of flow and congestion control by reducing the need to
send error detection back to the transmitter by using FEC (Forward Error
Correction) [11]. FEC is a technique used for controlling errors in data trans-
mission over unreliable or noisy communication channels. The sender and
recipient reserve enough resources in order to minimize the risk of receiving
more data than can be dealt with [13]; a toy sketch of the idea follows.
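The sketch below illustrates the FEC idea with a single XOR parity packet: a
receiver can rebuild any one lost packet in a group without asking for a
retransmission. Real FEC codes (e.g. Reed-Solomon) are far more capable; this
only shows the principle and assumes all packets in the group have equal size.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // XOR all packets byte-wise into one parity packet sent alongside the group.
    std::vector<uint8_t> xor_parity(const std::vector<std::vector<uint8_t>>& packets) {
        std::vector<uint8_t> parity(packets.at(0).size(), 0);
        for (const auto& p : packets)
            for (size_t i = 0; i < p.size(); ++i)
                parity[i] ^= p[i];
        return parity;
    }

    // A lost packet equals the XOR of the parity with all surviving packets,
    // so xor_parity() over the survivors plus the parity doubles as recovery.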

The conclusion from the study of ’The multimedia multicasting problem’ [5] is
that there is not a single solution to the problems of reliable multicast communi-
cation but rather a combination. The mixture of solutions is highly dependent on
each individual system and its requirements.

1.3.5 PGM (Pragmatic General Multicast)


PGM is a reliable multicast transport protocol [18], able to deliver packets reliably
to multiple recipients. The reliability is achieved using techniques such as NAK
(Negative-Acknowledgement), FEC (Forward Error Correction), and Congestion
Control. NAK is a technique where the recipient examines the sequence of packets
and if there are missing packets a unicast message is sent to the sender requesting
the missing packet(s). FEC is a technique used to repair damaged packets by
encoding each message in a redundant way using error-correction code (ECC). The
receiver is able to detect a limited number of errors in the packet and often able to
correct these errors without the need for retransmission. The Congestion Control
technique is used to limit the amount of data transmitted, as a high transmission rate
could lead to lost data. PGM is an IETF experimental protocol and therefore not
a standard, however, implementations have been made in some operating systems
such as Windows.
PGM is relevant to this thesis because most or all of the reliability features it contains
can be implemented at the application layer, with the data instead being transmitted
using plain UDP. It is interesting to see how well this thesis’s implementation performs
using only parts of the reliability features of PGM [12] [19] [18].

Chapter 2

Problem description

This chapter states the problem at hand as well as what the goals are, what limita-
tions exist, and the requirements that need to be taken into consideration in order
to achieve comprehensive results.

2.1 Problem definition


The purpose of this project is to investigate the benefits and disadvantages of using
UDP multicast communication as a substitute to TCP in SunGard’s Arena Market
Server (AMS). The current layout of the system consists of a given number of
clients that are connected to an AMS (AMAS, AIMS or AMSC). AMS (only AMAS)
is in turn connected to an exchange such as NASDAQ-INET where it receives and
sends information in the form of order books and order placement.
The connection between the exchange and AMS is dependent on the communi-
cation protocol that the exchange server utilizes; some may use multicast and others
use unicast. The communication between AMS and the clients is currently through
TCP, which is problematic in some cases. The recommendation SunGard has re-
garding the number of clients per AMS is 20, which works fine for most customers;
however, if more than 20 clients use an AMS at the same time the application’s
performance deteriorates and latency increases.
A larger number of clients could result in a flooded connection and decreased flow
of data between AMS and the clients [1]. The option of using multicast is therefore
to be investigated to see if the performance is affected. As described earlier in the
section 1.2.5, there are multiple types of message(s) sent between a client and its
corresponding AMS and not all of them are suited for multicast transmission. The
message(s) transmitted using multicast are called MC(multicast)-messages, such as
Market prices, Price details, and Public orders. These messages, as described in the
previous chapter, are not transmitted using IP/Multicast, which may be confusing
due to its name. The name derives from the Application layer multicast and not
the Transport layer, hence the name MC.
This thesis covers the following questions:

• What are the benefits of using multicast instead of unicast (TCP) and, if there
are any, how big an impact will they have?

• What are the negative aspects of using multicast instead of unicast and how can
they be minimized?

• Is it possible to increase the number of clients utilizing the Market Server by
switching to multicast?

2.2 Goals
By far the most important goal of this thesis is to increase knowledge in the field
of the Transport layer, especially the technology of multicast. This thesis investigates
the effects of using multicast for parts of the communication in AMS and what the
benefits/disadvantages are.

2.3 Limitations
A limitation to this thesis is that the experiments performed are based on very
specific software and the results may not apply to similar software. The same goes
for most of the libraries used in this system; all tests and implementations are
based on a Windows environment. An interesting aspect would have been to test the
solution on platform independent software with a wider range of use.

2.4 Requirements
This section deals with the requirements on test accuracy, test reliability, and
the model itself.

2.4.1 Test accuracy


The requirement of the tests conducted in this thesis is that they are accurate and
that variation is minimized. The accuracy was achieved by using, as mentioned
before, the Intel CPU instruction RDTSC (Read Time-Stamp Counter). RDTSC
enables microsecond measurements, necessary to see the time difference of transmis-
sions between the two implementations. To minimize the variation between test runs,
each test suite was performed multiple times and the average was calculated to
increase the accuracy.

2.4.2 Test reliability


MESMA should perform identical tests under the same conditions for both the new
implementation and the previous one. The tests should only be affected by the
parameters provided in each test suite and not by any external factors, with the
exception of the network activity. The influence of external factors impacting the
network activity was decreased by performing the tests in an isolated VLAN (Virtual
LAN).

2.4.3 Model
The requirements of the model are to act identically to the previous implementation
and contain the same functionality. The information transmitted should not, un-
der any circumstances, be modified by the model, with the exception of adding a
sequence number to the messages needed for the reliability features.
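A minimal sketch of the only modification the model is allowed to make, prefixing a
sequence number to an otherwise untouched message; the field layout and names are
assumptions for illustration, not the TNP wire format.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    std::vector<uint8_t> frame_with_seq(uint64_t seq,
                                        const std::vector<uint8_t>& payload) {
        std::vector<uint8_t> out(sizeof(seq) + payload.size());
        std::memcpy(out.data(), &seq, sizeof(seq));  // prepend the sequence number
        if (!payload.empty())                        // payload itself is not modified
            std::memcpy(out.data() + sizeof(seq), payload.data(), payload.size());
        return out;
    }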

Chapter 3

Methodology

The purpose of this chapter is to describe how this study was conducted. The Model
section describes the transition to using multicast and what makes it possible to
compare transmissions using TCP to multicast. The Hypotheses section has the
objective of describing the hypotheses of the model and what the expected outcome
of this study is. The final section of this chapter, Verification, deals with the process
of how to test each hypothesis.

3.1 Model
The objective of this study was to test if the use of multicast could improve per-
formance of financial instrument applications. In order to test the hypotheses in
section 3.2 a model was created. The purpose of the model was to enable the use
of multicast and make it possible to compare the two transmission technologies.
The first step in creating this model was to extend TNP with features needed to
send messages using Multicast. These features are described in the Implementation
chapter. The next step was to ensure that the model behaved like the original
implementation, e.g. transmitting message m from point A to point B with the
original implementation had the same outcome as the model. The model had the
same functionality as the original implementation, meaning that it could perform
tests using either TCP or multicast. When the model was completed MESMA could
compare the two technologies by running tests using the model, switching between
the protocols. The extended version of TNP consisted of new network sockets using
multicast and functionality to manage these sockets; Figure 3.1 illustrates a scaled-
down version of the model.

3.2 Hypotheses
This section describes the hypotheses that were evaluated during this thesis. There
are three factors that are considered in the following four hypotheses. The three
factors are time, computing power (CPU-usage), and the load on the network.


Figure 3.1. Extended TNP

• Hypothesis 1 - The time it takes to send data with the multicast implementation
does not exceed the time it takes to send the same data to a client list (MC)
with the old implementation.

• Hypothesis 2 - The load on the network through which the clients and AMS
communicate decreases with the new solution.

• Hypothesis 3 - The CPU-usage of the new implementation does not exceed the
amount used by the old implementation.

• Hypothesis 4 - The number of clients able to utilize the service of one AMS
increases without exceeding the network load or the CPU usage of the old
implementation.


3.3 Verification
To test the hypotheses, MESMA was set up to run either the original implementation
or the new implementation by a setting specifying which one to use. The tests were
performed on an isolated network (VLAN) where interfering network traffic was
reduced to a minimum.
Hypothesis 1 is the most difficult hypothesis to test because the data is randomly
created between each test case. However, the test framework (MESMA) is created
with this problem in mind and is therefore equipped with the ability to parameterize
these variables so that tests can be replicated. By performing tests with the same
parameters for the two MESMA setups the result could be evaluated and hopefully
the hypothesis could be verified. At the end of each test the latency of the data was
compared and if the latency is equal or lower, compared to the old implementation,
Hypothesis 1 is verified.
Hypothesis 2 is not going to be evaluated by the results of each MESMA test
case but can instead be evaluated by the built-in Performance Monitor included
in Windows. The Performance Monitor registers the load on the network while
running a test. It is important that no other application is running at the same
time because there is no way of isolating the monitoring of the network to a single
application and this could therefore impact the results. If the load on the network
is lower for the new implementation, compared to the old implementation, then
Hypothesis 2 is verified.
Hypothesis 3 is similar to hypothesis 2 because it requires the use of Performance
Monitor in Windows. The monitor is also capable of registering the CPU activity
of the applications currently running. It is therefore important not to run other
applications, as it would impact the result. Hypothesis 3 can be verified if the CPU
usage of the new implementation is lower or equal to the old implementation.
Hypothesis 4 can be evaluated by performing tests with MESMA using a varying
number of clients. To verify the hypothesis, the number of clients used with MESMA
utilizing the new TNP has to be greater than with MESMA utilizing the old TNP,
while at the same time yielding the same result in terms of latency.

Chapter 4

Implementation

In this chapter all the steps needed to implement the use of multicast with UDP
are described. That includes how the model was created and what the end result
of the model would look like. Other subjects that are described in this chapter are
the existing libraries and how those are used and/or extended with functionality
needed to test the hypotheses.

4.1 Overview
Most of the multicast functionality was placed in the TNP library (described in
section 1.2.5). TNP is a library constructed as a communication layer between clients
and server applications (e.g. AMAS and AIMS). The first step towards multicast
communication was to extend the TNP library with three major components. The
first component was network sockets able to communicate with multicast. The
second component was new states to handle these messages and the recovery process.
The third component was a basic recovery mechanism, NAK, where the receiving
client can determine if there is a missing or out-of-order message by comparing the
received message´s sequence number with that of the previously received message.
TNP was not the only component that changed; changes were also made to the
Market Server (MSRV). The Market Server had to be extended with functional-
ity to enable the use of multicast, functionalities such as the initiation-phase and
house-keeping of connections. The initiation-phase is the phase that guides the
client towards using multicast communication instead of TCP, such as what mul-
ticast groups certain information is distributed on and the address/port to find
those multicast groups. The house-keeping task is mainly focused on how to divide
different kinds of information on each multicast group, and how many multicast
groups to be used. The changes made in the Market Server are described in greater
details in section 4.5. Figure 4.1 illustrates the relationship between TNP and the
components using it.
The functionality used to aggregate messages from TNP to the client remains
the same for TCP and multicast, so there were no changes made to the client-side
applications.

Figure 4.1. Communication Layer

4.2 Multicast enabled sockets


In the original version of TNP there was only one type of network socket, a TCP
enabled socket. The way sockets work for different protocols varies depending on
the protocol properties. The socket implementation for multicast is very basic; the
two-way connection is not needed due to UDP being stateless. The following points
illustrate the necessary steps taken to create a socket communication that can send
and receive data using UDP:

1. The transmitter and recipient each create a UDP enabled socket.
2. The transmitter connects the socket to an address and port.
3. The recipient binds the socket to the same address and port.
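
A minimal sketch of these steps from the recipient’s side, using BSD-style sockets
(the Winsock calls on Windows are nearly identical); the group address 239.1.1.1
and port 5000 are illustrative placeholders:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int join_multicast_group() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);   // UDP enabled socket

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);    // receive on any interface
        addr.sin_port = htons(5000);
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        // Join the group: the kernel sends an IGMP membership report, which lets
        // snooping switches forward the stream only to ports with subscribers.
        ip_mreq mreq{};
        mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
        return sock;
    }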

With these points in mind, TNP was extended with the new socket and all the
functionality needed to send and receive data through multicast.
TNP can handle multiple simultaneous asynchronous input/output operations
from many file descriptors (socket, file, etc.) with an API called Input/Output
Completion Port (IOCP). The benefit of using IOCP is resource conservation; in-
stead of dedicating a complete thread to handle I/O from a file descriptor, IOCP
can handle multiple file descriptors with just one thread. Figure 4.2 illustrates
how IOCP generally works serving four clients with four threads handling comple-
tions. IOCP receives a notification, or what is in this context called a completion.
These completions are generated from each file descriptor bound to the IOCP object
from events such as received data, sent data, or established connections. After the
IOCP receives one of these completions it can perform the appropriate action, such
as aggregating the received data to another part of the application.
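For illustration, a minimal sketch of such a worker loop using the Win32 IOCP API
(generic IOCP usage, not the TNP code):

    #include <windows.h>

    DWORD WINAPI worker(LPVOID iocp_handle) {
        HANDLE iocp = static_cast<HANDLE>(iocp_handle);
        for (;;) {
            DWORD bytes = 0;
            ULONG_PTR key = 0;          // per-socket context supplied when binding
            LPOVERLAPPED ov = nullptr;  // per-operation context
            // Block until any file descriptor bound to this IOCP completes an I/O.
            if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
                continue;               // a bound operation failed; handle/log it
            // ...dispatch here: aggregate received data, post the next receive, etc.
        }
        return 0;
    }

    // A socket is bound to the port with:
    //   CreateIoCompletionPort((HANDLE)sock, iocp, (ULONG_PTR)context, 0);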
The amount of work needed to integrate the newly created multicast socket into
IOCP was small. In the eyes of IOCP, whether the completion was a result from
data received by a TCP socket or a UDP socket does not matter; the completion
looks the same. It was therefore an easy assignment to implement the use of other
sockets and have them bind to the same IOCP and work seamlessly. Another great
feature with IOCP is that multiple threads can handle the completions generated
from multiple file descriptors. The completions are stored in a thread-safe queue
from which each thread can poll completions and deal with the given task individually.
TNP contains an extension of the socket class called TNPConnection. TNPCon-
nection can use either TCP or multicast, and contains all the functionality
needed to establish a connection, send/receive data, set socket properties, and much
more.
TNPConnection has placeholders for other connection objects, which basically
means that two TNPConnection objects can be linked. This setup becomes clearer
when addressing the recovery mechanism in section 4.4. In the recovery process,
lost messages are transmitted over TCP and not multicast so that the other clients
are not disrupted by old messages. When entering the retransmission process the
UDP connection asks its linked TCP connection object for assistance to fetch the
lost message(s).
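
A hedged sketch of the linking idea described above; the member names are
illustrative assumptions, not the actual TNPConnection interface.

    #include <cstdint>

    struct TnpConnection {
        bool is_multicast = false;
        TnpConnection* linked_tcp = nullptr;  // client side: UDP -> parent TCP link

        // On a detected gap, recovery runs over the linked TCP connection so the
        // multicast group is not disturbed by retransmitted old messages.
        void request_recovery(uint64_t from_seq, uint64_t to_seq) {
            if (is_multicast && linked_tcp)
                linked_tcp->fetch_lost_messages(from_seq, to_seq);
        }
        void fetch_lost_messages(uint64_t, uint64_t) { /* unicast request to server */ }
    };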
A TNPConnection object works differently depending on whether it is a server-
side or a client-side connection. For the server there is one TCP connection for
each client connected
and multiple UDP connections depending on the number of multicast groups used.
TNPConnections at the server are not linked (as mentioned earlier) because the connec-
tions do not collaborate or complete any tasks together. At the client application
there is one TCP connection and multiple UDP connections depending on the
number of multicast groups used. The difference is that the TCP connection is
linked to all the UDP connections and considered the parent connection. The TCP
connection handles all the communication that is not within the scope of multicast,
such as initializing the use of multicast and fetching recovery messages. Figure 4.3
illustrates the setup of connections before the use of multicast.
Figure 4.4 illustrates the new setup with multicast and how the two different
connections are linked together.
To show how IOCP and TNPConnections are linked together another class has
to be explained, the TNPConnectionManager class. TNPConnectionManager can
handle TNPConnections using TCP and multicast, so it only requires one TNPCon-
nectionManager object for each instance (server or client).

Figure 4.2. IOCP – The setup used in this figure illustrates IOCP running on a
dual-processor box with four worker threads handling completions from four clients.
The client context represents the data included in each I/O operation, such as the
buffer, buffer size, and other connection properties. These properties are needed to
create the next asynchronous I/O operation without duplicating the data, e.g. only
allocating the buffer at the beginning and not for each I/O operation.


TNPConnectionManager uses the previously mentioned IOCP by binding the
socket of each TNPConnection object to its handler. TNPConnectionManager can
be seen in Figure 4.4 surrounding each instance of connection objects.
The design decision to separate the responsibility across the three classes was made
due to the nature of the current application. The current implementation was only
to be extended and not completely remade; therefore the TNPConnectionManager
class had to work for TNPConnection objects using both TCP and multicast. This also
applied to the TNPConnection class, where it had to work with sockets using both
TCP and multicast.

Figure 4.3. Connection setup prior to multicast

4.3 Message states

When processing a message received by the client it is important to keep track of
the properties of the message. The concept of states is not new to this solution
and existed prior to the use of multicast. However, the creation of a recovery
mechanism required two new states. The first state was created to handle the process
of receiving a multicast message; much of the finesse and technique from receiving
a TCP message was reused and extended. A second state was created in order
to handle the process of recovering missing messages. Both of these states are
described in section 4.3.1 and 4.3.2 where Figure 4.5 gives an overview of the two
new states.


Figure 4.4. Connection setup with multicast

4.3.1 Receive message state

The new multicast message was equipped with a sequence number used to keep
track of which message to expect, and for recovery purposes. Details about the
recovery process can be found in section 4.4.
The purpose of the receiving state is to evaluate the message and check for
errors. The first step is to check whether the recipient is in the client list,
meaning that the client is eligible to receive the message. This check is necessary
because there is no way for the Market Server to filter out ineligible users with
UDP. The Market Server appends a client list to the message and each client
evaluates whether or not to aggregate the message. If the client is not eligible to
receive the message, the message is discarded and the current sequence number
incremented. If the client is eligible to receive the message, the sequence number
of the message is evaluated. By subtracting the sequence number of the previously
received message from the sequence number of the current message, the next step
can be determined. The red area in Figure 4.5 illustrates the state of receiving a
multicast message.
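A sketch of this eligibility check is given below; the types and the flow around it are assumptions made for illustration, not the actual TNP code.

    #include <algorithm>
    #include <vector>

    // Sketch of the client-list check: the Market Server appends a list
    // of eligible recipients to each multicast message, and each client
    // decides locally whether to aggregate or discard the message.
    typedef unsigned ClientId;

    bool IsEligible(ClientId self, const std::vector<ClientId>& recipients)
    {
        return std::find(recipients.begin(), recipients.end(), self)
               != recipients.end();
    }
    // If IsEligible returns false the message is discarded and the
    // client's current sequence number is still incremented, so that
    // the discarded message is not mistaken for a gap later on.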


Figure 4.5. State diagram of clients receiving messages

4.3.2 Recovery state


There are three key components that make up the recovery state. The first
component is the part that issues the request to retransmit missing messages. This
task is completed using the linked TCP connection by fetching the message(s) from
the server and aggregating them as normal. The second part is the state of
receiving and storing messages that arrive during the recovery process. It would
be highly inefficient to discard these messages and then issue another recovery
once the first recovery is complete; the messages are therefore stored and dealt
with once the recovery process completes. This leads up to the third component,
the part that handles the stored messages. The messages stored during the recovery
go through the regular process of checking whether or not a message is in the
correct order, and whether the client should receive it. When the linked TCP
connection has aggregated the recovered messages it informs the UDP connection of
its completion. The UDP connection iterates over all the stored messages and
checks the sequence number and the client list. If there is another gap or missing
message, the client remains in the recovery state and issues a new request to
fetch the missing message(s).

All other checks work exactly the same as for the receive message state
(section 4.3.1). When the client has processed all the stored messages in the
container without issuing another recovery request, the client leaves the recovery
state and enters the receive message state. The red area in Figure 4.5 illustrates
the state that handles the process of recovering a multicast message.
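The replay of the stored messages could look roughly like the sketch below; the container choice and all names are assumptions made for illustration, not the actual TNP code.

    #include <set>

    // Sketch of the stored-message replay performed when the linked TCP
    // connection reports that the recovery has completed.
    struct RecoveryQueue {
        unsigned lastDelivered;       // last aggregated sequence number
        std::set<unsigned> stored;    // messages queued during recovery

        // Returns true if the client can leave the recovery state,
        // false if another gap was found and a new request was issued.
        bool Replay() {
            while (!stored.empty()) {
                unsigned seq = *stored.begin();     // smallest queued
                if (seq <= lastDelivered) {
                    stored.erase(stored.begin());   // duplicate: drop
                } else if (seq == lastDelivered + 1) {
                    Deliver(seq);                   // next in order
                    lastDelivered = seq;
                    stored.erase(stored.begin());
                } else {
                    // Another gap: stay in the recovery state.
                    RequestRecovery(lastDelivered + 1, seq - 1);
                    return false;
                }
            }
            return true;                            // back to receive state
        }

        void Deliver(unsigned seq) { /* aggregate to the application */ }
        void RequestRecovery(unsigned first, unsigned last) { /* via TCP */ }
    };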

4.4 Recovery mechanism


The recovery mechanism implemented in this study rests on basic principles. As
stated in section 1.2.5, the TNP-Message was extended with a sequence number.
The use of this sequence number is simple: every time the Market Server issues a
request to send a multicast message, TNP appends the incremented value of a
counter (unique for each multicast group) before transmitting the message. The
transmitted message is stored for a given amount of time for each UDP connection.
On the receiving end (the clients) there is an identical counter with the same
value as the server-side counter (before the transmission). When a client receives
a new message it compares the message with its own counter; if the message
sequence number is not equal to the client's current sequence number + 1, the
message is out of order. The following list shows every outcome and the
appropriate action to take when comparing the sequence numbers (a code sketch
follows the list):

• (Message sequence number – Client sequence number) less than 1 – This is an
old message; discard the message and continue. When a multicast message is routed
there is a possibility that the message travels different paths, resulting in the
same message arriving multiple times at the same recipient; thus the need to check
for old messages is justified.

• (Message sequence number – Client sequence number) equal to 1 – This is an
in-order message; aggregate the message as usual.

• (Message sequence number – Client sequence number) greater than 1 – This
message indicates that there are message(s) missing; request the messages with
sequence numbers from (Client sequence number + 1) to (Message sequence
number – 1) and store the current message until the recovery completes. Store all
messages arriving during the recovery process.
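Expressed as code, the comparison could look like the sketch below; the enum and names are illustrative, not taken from the actual implementation.

    // Sketch of the sequence-number comparison above; clientSeq is the
    // sequence number of the last in-order message.
    enum SeqAction { DISCARD_OLD, AGGREGATE, RECOVER };

    SeqAction Classify(unsigned msgSeq, unsigned clientSeq)
    {
        long diff = (long)msgSeq - (long)clientSeq;
        if (diff < 1)  return DISCARD_OLD;  // duplicate or late copy
        if (diff == 1) return AGGREGATE;    // exactly the expected message
        return RECOVER;  // gap: fetch [clientSeq + 1, msgSeq - 1] over TCP
    }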

When a request to recover message(s) within an interval of sequence numbers has
occurred, the UDP connection asks its parent (TCP connection) to handle the
recovery. What this means is that the parent connection issues a request to the
server to send the missing message(s) over TCP. The UDP connection aggregates the
recovery messages requested from the Market Server and in parallel queues new
messages arriving during the recovery process. The queued messages are aggregated
when the entire set of missing or out-of-order messages has arrived; the recovery
process is then complete and the client enters the normal receive message state.


Figure 4.6 and the following steps describe the recovery process in greater
detail.

Figure 4.6. Recovery process

1. The server sends a multicast message, received by the client.

2. The client evaluates the sequence number and realizes that there is one or a
sequence of missing/out-of-order message(s).

3. The UDP connection asks its parent connection (TCP) to perform a recovery of
the missing message(s).

4. The parent connection issues a request to the server asking for the missing
message(s).

5. The server fetches the missing message(s) from the specific multicast group,
which stores each multicast message for a specific time (see the sketch after this
list).

6. The server sends the message(s) to the client's parent connection through TCP.

7. The message(s) are aggregated to the client.

8. The parent connection informs the UDP connection (child) that the recovery is
completed.

9. The UDP connection aggregates the message(s) received during the recovery
phase.

10. Messages arrive as normal.
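A server-side store as used in step 5 could be sketched as below. Note that the thesis stores messages for a period of time, whereas this simplified, illustrative sketch bounds the store by a message count; all names are assumptions.

    #include <cstddef>
    #include <deque>
    #include <string>
    #include <utility>

    // Sketch of a per-group store of recently sent multicast messages,
    // kept so they can be retransmitted over TCP on request.
    class GroupStore {
    public:
        explicit GroupStore(size_t capacity) : capacity_(capacity) {}

        void OnSent(unsigned seq, const std::string& payload) {
            if (sent_.size() == capacity_)
                sent_.pop_front();               // forget the oldest
            sent_.push_back(std::make_pair(seq, payload));
        }

        // Copies the payload into 'out' if the message is still stored.
        bool Fetch(unsigned seq, std::string& out) const {
            for (size_t i = 0; i < sent_.size(); ++i)
                if (sent_[i].first == seq) { out = sent_[i].second; return true; }
            return false;
        }

    private:
        size_t capacity_;
        std::deque<std::pair<unsigned, std::string> > sent_;
    };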

4.5 Market Server


The previous changes mostly apply to the client application and the client side of
TNP. New functionality was added, and the final part was to glue it together with
the server. The changes implemented at the Market Server were a set of new
instructions, including the process of opening multicast groups, initializing
clients, creating the client list, and handling the event of a client using an old
version of TNP (not multicast enabled). The changes made to TNP (server
functionality) include the process of storing and recovering multicast messages.
When transmitting data over multicast there is always a risk of losing data; the
decision was therefore made to create multiple multicast groups where different
types of message(s) can be sent on their own group. There is no documented test
showing that using multiple groups will decrease lost data, but there is another
factor to this choice: it is easier for the clients to control the type of message
being received. The Market Server creates the multicast groups and waits for a
client to enter a subscription. When the client enters a subscription for
information that the old version sent using TCP, the server instead instructs the
client to listen on the respective multicast group. The client opens the
connection instructed by the server and follows a couple of steps to ensure that
multicasting is possible (PING). After the client has verified that multicast is
possible, the client can start receiving the same information as it previously
received over TCP. Figure 4.7 illustrates the steps taken in order to start
sending data using multicast.

1. The Market Server initializes by opening the requested multicast groups; during
the tests three groups were created – Market prices, Price details, and Public
orders.

2. A client connects to the Market Server.

3. The client sends a request to start a subscription.

4. The Market Server informs the client that the requested subscription can be
reached on multicast group X.

5. The client opens the connection to start receiving from multicast group X (a
sketch of joining a group follows this list).

6. The client sends a message to the Market Server informing it that it has opened
the connection successfully, and includes a request to send a PING message on the
multicast group.


Figure 4.7. Initiation to allow transmission of multicast

7. The Market Server sends the PING message (or a regular multicast message),
interpreted by the client as verification that multicast is possible.

8. Messages are aggregated to the client application.
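Step 5 can be illustrated with the standard Winsock calls for joining a multicast group; the address and port below are placeholders, and error handling is omitted.

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstring>

    // Sketch of a client joining multicast group X (step 5). The group
    // address/port are placeholders; error handling is omitted.
    SOCKET JoinGroup(const char* groupAddr, unsigned short port)
    {
        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in local;
        std::memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(port);
        bind(s, (sockaddr*)&local, sizeof(local));

        // IP_ADD_MEMBERSHIP tells the local stack (and, through IGMP,
        // the network) that this host wants traffic for the group.
        ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr(groupAddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (const char*)&mreq, sizeof(mreq));
        return s;
    }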

If the client on the other hand uses an older version of TNP, which does not have
the features needed to use multicast, the server adds the client to the TCP
multicast list and sends multicast messages as before using TCP. This also applies
in the event that the multicast verification (PING) failed. The Market Server
continuously sends out heartbeats on each multicast group, and if a client does
not receive a heartbeat within a given time span the client considers the
multicast transmission as failed and shuts down; any multicast message updates the
heartbeat expiration timer.
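The client-side heartbeat check could be realized as a simple watchdog like the sketch below; the timeout handling and names are assumptions made for illustration.

    #include <ctime>

    // Sketch of the heartbeat expiration timer on the client side.
    class HeartbeatWatchdog {
    public:
        explicit HeartbeatWatchdog(double timeoutSeconds)
            : timeout_(timeoutSeconds), last_(std::time(NULL)) {}

        // Any multicast message (heartbeat or data) resets the timer.
        void OnMulticastMessage() { last_ = std::time(NULL); }

        // Polled periodically; true means the multicast transmission
        // should be considered failed and the connection shut down.
        bool Expired() const {
            return std::difftime(std::time(NULL), last_) > timeout_;
        }

    private:
        double timeout_;
        std::time_t last_;
    };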

Chapter 5

Experiment

This chapter covers how the new implementation was tested against the old version:
everything from the hardware setup and computer environment to the test cases and
test case parameters.

5.1 Environment
When the tests were conducted, two physical computers and two VMs (Virtual
Machines) were used, all placed inside an isolated VLAN. The purpose of the VLAN
was to eliminate as much network interference from outside sources as possible in
order to achieve the best result. An overview of the environment and the names of
each machine can be seen in Figure 5.1.

• Tokra01 – The first VM, whose purpose was to simulate half of the passive
clients. The passive clients only receive messages and do not enter any orders;
this simulates a client subscribing for information.

• Tokra02 – Has the same purpose as Tokra01, which was to simulate the other half
of the passive clients.

• WhiteSands – Had the task of running the server application (AMAS) and
distributing the data to the clients, both passive and active.

• WorldCup – Had the task of running the emulated exchange (MESMA), forwarding a
continuous flow of information/data to the AMAS running on WhiteSands. WorldCup
also hosted the two active clients used as measuring stations and for placing
orders. The two active clients are needed because the messages are delivered at
different times to different clients when transmitting multicast messages using
TCP; this is because each multicast message is sent individually to each TCP
client, and the number of clients affects the time of arrival.


Figure 5.1. Test environment

5.1.1 Hardware
• Tokra01 – VM, 2 x 2.40 GHz E7-2870 Intel Xeon 2-core, 10 GB 1066 MHz DDR3,
vmxnet3 Ethernet Adapter Gigabit network connection, Windows Server 2008 (64-bit).

• Tokra02 – VM, 2 x 2.40 GHz E7-2870 Intel Xeon 2-core, 10 GB 1066 MHz DDR3,
vmxnet3 Ethernet Adapter Gigabit network connection, Windows Server 2008 (64-bit).

• WhiteSands – Physical server, 2 x 2.93 GHz X5670 Intel Xeon 6-core, 24 GB 1333
MHz DDR3, Intel(R) Gigabit ET Quad Port Server Adapter, Windows Server 2008
(64-bit).

• WorldCup – Physical server, 2 x 2.80 GHz X5560 Intel Xeon 4-core, 24 GB 1333
MHz DDR3, Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client), Windows Server
2008 (64-bit).


5.1.2 Software
The software used to develop the implementation and to run the tests is listed
below; almost all of it was used with standard settings.

• Microsoft Visual Studio 2008 9.0.30729.1 SP

• Microsoft .NET Framework 3.5 SP1

• Microsoft SQL Server 2008 R2 (64-bit) 10.51.2500.0

• Front Arena AMAS-INET 6.9.1

5.2 Test suites


In this section the test suites are described, with details of how the tests were
conducted.

5.2.1 Information transmitted


The types of information transmitted during the tests are Market prices, Public
orders, and Private orders. Every exchange (MESMA exchange) has members that can
place orders; the exchange does not distinguish between Public and Private orders.
The concept of Private and Public orders is defined within each member: each order
placed inside the domain of a member is considered private by that member and
public by all the other members.
The types of messages sent, as mentioned in section 1.2.5, are Market prices,
Public orders, and Price details. Private orders are always transmitted using TCP;
when comparing multicast to TCP, Market prices and Public orders are transmitted
using the protocol specified in the test case (TCP or multicast). MESMA generates
Public orders by emulating an external client entering an order that is accepted
(buy or sell). The number of Private orders to be sent during a test run is
parameterized. Private orders are always placed by the first (active) client; each
accepted Private order generates Market prices messages to inform the other
clients of the update. The passive clients do not respond to the multicast
messages they receive, except in the case that a message is missing and a recovery
is needed.

5.2.2 Parameters
The range of test cases, defined by the parameters, is wide, and each test suite
was constructed with the purpose of testing one of the hypotheses. The parameters
used during the tests for this thesis were:

• Number of passive clients – The number of clients is an important factor when
testing multicast and was needed to verify Hypotheses 2 and 4. Half of the test
clients ran on Tokra01 and the other half on Tokra02.

• Public orders per second – The Public orders are used to generate Market prices,
emulating a real exchange, and the load across the passive and active clients.

• Private orders per second – Private orders are more complex than Public orders
and have a greater impact on the Market Server (more CPU needed). The reason
Private orders are used within MESMA is to get a better approximation of how a
real-world environment would perform/behave. The Private orders are usually 5% of
the total orders, meaning that clients enter 5% of the orders (market
participation).

• Number of runs – The ability to run each test multiple times is not part of
MESMA; however, this was achieved by scripting each test suite to run multiple
times. During the tests conducted in this thesis, five test runs were performed
for each test suite to get an average.

5.2.3 Varying number of clients


To be able to test Hypotheses 1–4, tests were performed using a varying number of
clients. The tests used the two active clients and a number of passive clients
ranging from 0 to 40 with an increment of two (0, 2, 4, and so on up to 40). Each
test ran for 20 seconds, meaning 20 seconds of transmitting messages and measuring
the latency. The load used for the tests was 5000 Public and 250 Private orders
per second (simulating 5% market participation through Private orders). This was
tested for both TCP and multicast for each client count, five times, to minimize
the impact of variation.

5.2.4 Increased amount of transmitted data


Another test suite was conducted with an increased amount of transmitted data for
different numbers of clients. This test suite was needed to add another dimension
to the problem: whether or not the load on the network would make the application
unstable, causing more retransmissions.

Chapter 6

Results and discussion

6.1 Results
The results from each test case described in Chapter 5 are presented in this
section through graphs and tables. How the results were reached and calculated is
also described in this chapter.

6.1.1 Test case using varying number of clients


The results from the test case using a varying number of clients (section 5.2.3)
are presented in Table 6.1. The values are calculated from the test runs comparing
multicast and TCP with respect to how long it takes to send a multicast message
and have it acknowledged. The "# Clients" column is described using (A)ctive and
(P)assive clients. As described in the previous section, the TCP implementation
uses a list of clients when transmitting multicast messages. It was important to
investigate when the first client in the list received the message as well as the
last client; therefore the test consisted of timers for both these clients, and
the result is divided into two categories (first and last) for both protocols. The
test was performed five times for each pair of client count and protocol, and the
average was calculated. The results in Table 6.1 are also displayed in Figures 6.1
and 6.2 for easier interpretation.
The amount of data transmitted by WhiteSands (running the Market Server) and its
CPU usage are displayed in Table 6.2. The table shows the results from tests
conducted using TCP and multicast. The values are calculated the same way as for
the latency, by taking the average over five test runs for each pair of client
count and protocol. The CPU usage and the amount of data transmitted were
measured, as previously mentioned in section 3.3, by capturing both over a given
set of time points. The set of time points spans the length of each test, and the
values for each time point were combined and the average calculated. The results
in Table 6.2 are also displayed in Figures 6.3 and 6.4 for easier interpretation.
Table 6.3 shows the number of retransmissions performed during the test runs
(multicast). The retransmissions are sent in batches, meaning that if there are
multiple messages missing in a sequence, those messages are sent as one batch.
However, the data points count each individual recovered message and not the
number of batches. The results in Table 6.3 are also displayed in Figure 6.5 for
easier interpretation.


# Clients   Avg. first client   Avg. last client    Avg. first client   Avg. last client
 (A + P)    latency in µs       latency in µs       latency in µs       latency in µs
            (multicast)         (multicast)         (TCP)               (TCP)
 2 + 0      311,4               314,0               319,0               320,3
 2 + 2      311,2               313,6               318,0               328,4
 2 + 4      308,0               311,0               319,6               337,0
 2 + 6      310,0               313,0               325,8               348,8
 2 + 8      307,2               310,4               326,2               348,0
 2 + 10     306,2               309,4               328,4               359,6
 2 + 12     309,6               312,6               338,4               374,6
 2 + 14     307,0               309,8               340,0               381,0
 2 + 16     310,8               313,6               332,4               378,6
 2 + 18     309,8               312,8               344,6               395,4
 2 + 20     320,2               323,4               344,6               401,6
 2 + 22     314,4               317,4               351,6               414,2
 2 + 24     313,4               316,4               358,6               428,8
 2 + 26     314,8               317,2               378,2               453,2
 2 + 28     317,6               320,6               386,6               469,4
 2 + 30     313,8               316,6               459,1               539,0
 2 + 32     316,8               319,6               584,2               723,2
 2 + 34     319,6               322,6               704,8               807,8
 2 + 36     317,4               320,4               919,6               1025
 2 + 38     319,0               322,0               2160                2274
 2 + 40     319,4               322,6               3331                4218
Table 6.1. This table shows the results from running tests using a varying number
of clients with 5000 Public orders/s for a period of 20 seconds at 5% market
participation (250 Private orders/s). The results displayed in the table are the
average latency measured from the time it takes to send a multicast message until
an acknowledgement is received (first and last client). The "# Clients" column
indicates the number of (A)ctive and (P)assive clients, where the number of active
clients is fixed at two in all tests.



Figure 6.1. Average latency measured by the first client (TCP and multicast)
using a varying number of clients.

Figure 6.2. Average latency measured by the last client (TCP and UDP/multicast)
using a varying number of clients.


# Clients   Avg. data           Avg. CPU usage      Avg. data           Avg. CPU usage
 (A + P)    transmitted in MB   in % of total       transmitted in MB   in % of total
            (multicast)         (multicast)         (TCP)               (TCP)
 2 + 0      1,58                14,2                2,94                14,5
 2 + 2      1,56                14,0                6,00                16,6
 2 + 4      1,64                16,1                8,73                16,9
 2 + 6      1,66                13,7                11,8                19,3
 2 + 8      1,74                14,2                14,7                19,0
 2 + 10     1,81                15,6                17,7                19,9
 2 + 12     1,84                14,8                20,5                21,0
 2 + 14     1,93                14,8                24,1                21,6
 2 + 16     1,98                14,6                26,4                23,4
 2 + 18     2,00                15,2                30,1                23,4
 2 + 20     2,03                15,0                32,9                25,0
 2 + 22     2,10                16,0                35,2                25,6
 2 + 24     2,23                14,6                37,5                26,1
 2 + 26     2,24                14,2                40,6                26,4
 2 + 28     2,35                15,3                44,0                30,1
 2 + 30     2,40                18,1                46,6                34,3
 2 + 32     2,41                18,9                48,3                29,7
 2 + 34     2,47                15,4                53,1                31,6
 2 + 36     2,62                15,1                55,0                29,5
 2 + 38     2,64                18,7                57,5                33,9
 2 + 40     2,99                15,5                60,7                35,0
Table 6.2. This table shows the results from running tests using a varying number
of clients with 5000 Public orders/s for a period of 20 seconds at 5% market
participation (250 Private orders/s). The results displayed in the table are the
average amount of data (MB) transmitted and the average CPU usage (% of total
capacity) of WhiteSands (running the Market Server) over the five test runs
performed for each set of client sizes.

6.1.2 Test case with increased amount of transmitted data

This test case was conducted using a varying amount of transmitted data while
comparing TCP against multicast. Table 6.4 shows the results from transmitting
10000 Public orders/s over a period of 20 seconds at 5% market participation (500
Private orders/s). The number of clients varies from 10 to 40 (+ 2 active) with an
interval of 10. The results in Table 6.4 are also displayed in Figure 6.6 for
easier interpretation.


Figure 6.3. Average CPU usage of WhiteSands throughout a test, measured for both
TCP and multicast.

Figure 6.4. Average data transmitted by WhiteSands throughout a test, measured
for both TCP and multicast.

6.2 Discussion
6.2.1 Increase of clients
The results presented in Tables 6.1 and 6.4 show that the total number of clients
utilizing one AMS can be higher, without the need for more resources, by using
UDP/multicast.

# Clients (A + P)   Avg. retransmissions
 2 + 0              0,00
 2 + 2              0,00
 2 + 4              2,40
 2 + 6              27,0
 2 + 8              102
 2 + 10             17,0
 2 + 12             108
 2 + 14             6,00
 2 + 16             46,4
 2 + 18             247
 2 + 20             231
 2 + 22             76,4
 2 + 24             325
 2 + 26             479
 2 + 28             286
 2 + 30             290
 2 + 32             239
 2 + 34             782
 2 + 36             1414
 2 + 38             73,0
 2 + 40             5103
Table 6.3. This table shows the results from running tests using a varying number
of clients with 5000 Public orders/s for a period of 20 seconds at 5% market
participation (250 Private orders/s). The results displayed in the table are the
average number of retransmissions over the five test runs performed for each set
of client sizes.

The first explanation that comes to mind is that the network capacity has reached
its limit: the amount of data transported through the Market Server and the
clients has become the bottleneck. However, if one examines the amount of data
transmitted throughout a complete test, take TCP using 2 (A)ctive and 40 (P)assive
clients: the amount of data is 60,7 MB over 15 seconds, which is roughly 4,05 MB/s
or 32,0 Mbps. The capacity of the network used during the tests was 1 Gbps, which
is more than enough to handle the amount of data that 42 clients transmit in a
test with 5000 Public orders/s and 250 Private orders/s. Another explanation for
this behaviour is that the resources at the VMs become the bottleneck and the
congestion control that TCP uses prevents data from being transmitted faster. If a
client does not receive enough resources, the client informs the Market Server, at
the Transport Layer, that it has to decrease the transmission rate. However, for
this to be the case the TCP protocol must require more CPU power to process a
single message. A way to verify this behaviour would be to increase the hardware
resources at the VMs, or to exchange the VMs for dedicated hosts. Section 6.2.5
discusses hardware limitations further.

Figure 6.5. Average number of retransmissions throughout a test, measured when
testing multicast.

# Clients   Avg. first client   Avg. last client    Avg. first client   Avg. last client
 (A + P)    latency in µs       latency in µs       latency in µs       latency in µs
            (multicast)         (multicast)         (TCP)               (TCP)
 2 + 10     310,8               312,4               332,4               361,8
 2 + 20     311,8               313,2               347,2               396,0
 2 + 30     316,6               319,4               403,0               493,2
 2 + 40     317,0               317,8               2913                3055
Table 6.4. This table shows the results from running tests with an increased
amount of messages/data transmitted: 10000 Public orders/s for a period of 20
seconds at 5% market participation (500 Private orders/s). The results displayed
in the table are the average latency over the five test runs performed for each
set of client sizes.

6.2.2 Latency
The results presented in Table 6.1 show that the average latency is lower for the
multicast implementation. The latency was lower in all cases, for both the first
and the last client.


Figure 6.6. Average latency measured by the last client (TCP and UDP/multicast)
with an increased amount of data transmitted.

The reason behind this is that the messages are only transmitted once for
multicast, whereas for TCP the messages have to be sent individually to each
client, which causes a delay between each transmission. The time complexity to
send a given message is O(1) (constant) for the multicast implementation and O(n)
for the TCP implementation, where n is the number of recipients.
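As a worked illustration (with the number of recipients chosen to match the largest test), the total number of send operations for m messages and n recipients is:

    % Illustrative count of send operations for m messages, n recipients.
    \[
      T_{\mathrm{tcp}} = m \cdot n, \qquad T_{\mathrm{multicast}} = m .
    \]
    % With n = 40 passive clients, TCP performs 40 serial sends per
    % message, which is consistent with the growing last-client latency
    % in Table 6.1, while multicast always performs a single send.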
An interesting note regarding the latency is the number of retransmissions
presented in Table 6.3. What caused the retransmissions is difficult to determine,
as the transmission occurred over a local network (VLAN) where no losses are
expected. An explanation for this behaviour might be the receive buffer, which at
some points during the tests becomes full, so that an arriving message is simply
discarded. The impact on the latency due to retransmissions does not seem to be
too great. However, if the reason behind this behaviour is a lack of resources at
the receiving end, then this problem could be more severe when running the client
application on hardware with fewer resources. The number of retransmissions is
irregular, which could be explained by retransmissions causing more
retransmissions, since a retransmission requires more resources from both the
server and the client.
The same result was reached when performing tests using an increased amount of
transmitted data, as presented in Table 6.4. Hypothesis 1 stated that the time to
send data with the multicast implementation does not exceed the time it takes to
send the same data with the old implementation. The results presented in both
Table 6.1 and Table 6.4 clearly indicate that this hypothesis is true.

6.2.3 Performance
The performance was tested by comparing the average CPU usage of the WhiteSands
server running an AMS. The results presented in Table 6.2 show that the multicast
implementation performed better than the TCP implementation. The difference is not
great, which can be explained by the fact that the work needed to send a message
to each individual client (TCP implementation) is small. Moreover, when the
multicast implementation has to perform retransmissions it adds some extra work
and increases the CPU usage; hence the small difference might be the result of the
extra work required to send each message individually (TCP) levelling out against
the extra work required to handle retransmissions (multicast). The results
presented in Table 6.2 support the claim that Hypothesis 3 is true and that the
CPU usage has not exceeded that of the TCP implementation.

6.2.4 Data transmitted


The results presented in Table 6.2 show that the average amount of data
transmitted from the WhiteSands server is lower for the multicast implementation.
The reason behind this is, as explained earlier, the fact that each message only
has to be sent once. For the TCP implementation the amount of data transmitted
increases linearly with the number of clients/recipients, because each connection
is handled separately. Hypothesis 2 can be verified from the test results
presented in Table 6.2, as the data passing over the network has not increased
with the new implementation. An issue with multicast presented in section 1.3.2
was how network routers/switches handle UDP datagrams depending on whether or not
they support IGMP snooping. The network routers/switches used in the tests
conducted in this study supported IGMP snooping, so the impact of multicast
flooding could not be investigated.

6.2.5 Future work


This section addresses future work that is of interest and could improve the area
of research covered by this Master's thesis.

Increased hardware resources


The test results presented in Table 6.1 indicate that the performance of the TCP
implementation decreases at around 30–40 clients. The reason behind this behaviour
might be a lack of hardware resources, and tests performed on better servers could
perhaps improve the results of the TCP implementation as well as of multicast.
Another negative aspect of the hardware resources is that all the clients are
running

on virtual machines. When multiple virtual machines work in parallel the resources
could be distributed unevenly or unfairly, which may cause unpredictable
behaviour. To eliminate the risk of unpredictable behaviour, the use of dedicated
servers could improve the performance and latency of both implementations.
Increasing the hardware resources could also eliminate some of the retransmissions
occurring in the multicast implementation. A qualified guess regarding the number
of retransmissions is that they can be traced to the receive buffer. If more
hardware resources were given to the VMs hosting the clients, the time to process
a message would be reduced and the probability of ending up with a full buffer
would also decrease. This would of course also apply to the TCP implementation,
leading to better results in both cases. An easy way to test this issue would be
to decrease the receive buffer size and see what impact it has on the number of
recoveries.
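The proposed experiment maps directly onto a standard socket option; a minimal sketch is given below, where the 8 KB value is an arbitrary example.

    #include <winsock2.h>

    // Sketch of the proposed experiment: shrink the socket receive
    // buffer and observe the effect on the number of recoveries.
    void ShrinkReceiveBuffer(SOCKET s)
    {
        int size = 8 * 1024;  // arbitrary, deliberately small value
        setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char*)&size, sizeof(size));
    }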

Increased reliability
The implementation in this Master's thesis only contained a very basic recovery
mechanism. This aspect could easily be extended with more advanced mechanisms such
as Congestion Control and FEC. A solution containing more of the reliability
features present in PGM would be interesting to see, as would finding the point at
which the reliability mechanisms increase the latency enough that other protocols
become more suitable. The results presented in Table 6.3 show that recovery of
messages occurs, and the number of recoveries could easily be reduced by
implementing a Congestion Control mechanism. However, the reason behind the low
latency in the multicast implementation is the small overhead that comes with UDP.
If all the reliability features of TCP were included, the low latency would
perhaps be lost. There is a reason why minimal-overhead protocols such as UDP
exist, and that reason is low latency.

Other areas to apply multicast


This Master's thesis only covers the use of multicast in a specific field of
application: financial instruments. An interesting aspect would be to apply the
same solution to other fields to see if the same results hold there.

Disable IGMP snooping


The tests conducted in this study were performed on a network where the
routers/switches supported IGMP snooping. An interesting aspect would be to
perform the same tests on a network where the routers/switches did not support
IGMP snooping and instead had to use multicast flooding.

Comparing PGM
PGM is not a standard at the moment, and a reason behind that might be that there
is no demand for it. Perhaps PGM is not as efficient or as low-latency as UDP, and
those who really demand low latency rather stick to UDP with simple or no
reliability because that is more efficient and better supported. However, a
thorough examination of PGM and its capabilities, applied to the same area as this
Master's thesis, lies within reasonable boundaries for future work. Perhaps it is
not suitable for this specific area, but there might be others where reliability
is of higher interest.

6.3 Conclusion
This thesis aimed to test the benefits of using multicast (UDP) instead of unicast
(TCP) in financial systems. The negative aspects of using multicast instead of
unicast were also dealt with in this research. This thesis also aimed to test
whether it was possible to increase the number of clients utilizing the server
application by switching to multicast. This was achieved by creating a model able
to send data with multicast, including a low-level reliability mechanism.
The model was then used to alter the architecture of the Market Server and TNP.
In this new architecture the Market Server instructed the clients to open the
network sockets/connections needed to enable data to be sent using multicast. TNP
had to include new states for receiving data in order to allow for the possibility
that a message did not arrive or arrived in the wrong order. This was achieved
using a sequence number attached to each message and compared on arrival. At the
other end of this problem, the TNP server side had to store the most recently
transmitted messages to ensure that when a request for a missing message occurred,
it could be retransmitted.
To answer the first question – What are the benefits of using multicast instead of
unicast (TCP) and, if there are any, how big of an impact will they have?
The results from the tests comparing the latency showed that the multicast
implementation was faster than the implementation using TCP. The CPU usage and the
amount of data transmitted were also lower for the multicast implementation for
all numbers of clients utilizing the Market Server.
To answer the second question – What are the negative aspects of using multicast
instead of unicast and how can they be minimized?
The negative aspects of using multicast instead of unicast are not shown in the
results but are described in section 1.2.2. The first issue is availability:
multicast is not necessarily supported in all networks, and additional
configuration may be needed in order to support it. If network routers/switches do
not use or support IGMP snooping, hosts connected to the network may end up
receiving unnecessary data. The last issue was not discussed in section 1.2.2 and
concerns security: anyone can listen to multicast data, so applications have to be
aware of this and avoid transmitting sensitive data using multicast.
To answer the last question – Is it possible to increase the number of clients
utilizing the Market Server by switching to multicast?
The results presented in Tables 6.1, 6.2 and 6.4 all show that increasing the
number of clients utilizing the server application has a lower impact on the
performance and the resources required for the multicast implementation. The TCP
implementation is more sensitive to an increasing number of clients, as can
especially be seen in Figure 6.6 when running tests with 30 and 40 clients at the
same time.

Abbreviations and Terms

AIMS – Arena Internal Market Server

AMAS – Arena Market Access Server

AMS – Collective name for AIMS, AMAS, and AMSC

AMSC – Arena Market Server Concentrator

Checksum – The checksum is calculated by taking the sum of predefined sections,
depending on the protocol, and then inverting the bits. For UDP these sections are
the pseudo-header, the UDP header, and the data coming from the application

Datagram – A self-contained, independent entity of data carrying sufficient
information to be routed from the source to the destination computer

Exchange – An organized market which offers services for stock brokers and
traders to trade stocks, bonds, and other securities

FEC – Forward error correction

Front Arena – Company producing financial instrument software

I/O Completion Port – An API able to perform multiple asynchronous input/output
operations simultaneously

Market Server – General term for AIMS, AMAS, and AMSC

MESMA – Multi Exchange, Simulation, Measurement and Analysis platform

NAK or NACK – Negative-acknowledgement character

Network socket – An endpoint of an inter-process communication flow across a
computer network

OSI model – A model that uses abstract layers to characterize and standardize the
internal functions of a communication system

Performance Monitor – Tool used to capture a computer's resources such as CPU
usage, disk I/O, and network traffic

PGM – Pragmatic General Multicast

RDTSC – Read Time-Stamp Counter

TCP – Transmission Control Protocol

TCP/IP (DoD) model – Another model, similar to the OSI model, that uses abstract
layers to characterize and standardize the internal functions of a communication
system

TNP – Transaction Network Protocol

UDP – User Datagram Protocol
