The document presents the proceedings of the 7th Conference on Cloud Computing and Big Data (JCC&BD 2019) held in La Plata, Argentina, from June 24-28, 2019. It includes revised selected papers that cover various topics such as cloud computing, data analytics, and machine learning, with a focus on the interaction between academia and industry. The conference received 31 submissions, with 12 papers accepted for publication after a peer-review process.

Marcelo Naiouf
Franco Chichizola
Enzo Rucci (Eds.)

Communications in Computer and Information Science 1050

Cloud Computing
and Big Data
7th Conference, JCC&BD 2019
La Plata, Buenos Aires, Argentina, June 24–28, 2019
Revised Selected Papers
Communications
in Computer and Information Science 1050
Commenced Publication in 2007
Founding and Former Series Editors:
Phoebe Chen, Alfredo Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu,
Krishna M. Sivalingam, Dominik Ślęzak, Takashi Washio, and Xiaokang Yang

Editorial Board Members


Simone Diniz Junqueira Barbosa
Pontifical Catholic University of Rio de Janeiro (PUC-Rio),
Rio de Janeiro, Brazil
Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Ashish Ghosh
Indian Statistical Institute, Kolkata, India
Igor Kotenko
St. Petersburg Institute for Informatics and Automation of the Russian
Academy of Sciences, St. Petersburg, Russia
Junsong Yuan
University at Buffalo, The State University of New York, Buffalo, NY, USA
Lizhu Zhou
Tsinghua University, Beijing, China
More information about this series at http://www.springer.com/series/7899
Marcelo Naiouf · Franco Chichizola · Enzo Rucci (Eds.)

Cloud Computing
and Big Data
7th Conference, JCC&BD 2019
La Plata, Buenos Aires, Argentina, June 24–28, 2019
Revised Selected Papers

Editors

Marcelo Naiouf
III-LIDI, Facultad de Informática
Universidad Nacional de La Plata
La Plata, Argentina

Franco Chichizola
III-LIDI, Facultad de Informática
Universidad Nacional de La Plata
La Plata, Argentina

Enzo Rucci
III-LIDI, Facultad de Informática
Universidad Nacional de La Plata
La Plata, Argentina

ISSN 1865-0929 ISSN 1865-0937 (electronic)


Communications in Computer and Information Science
ISBN 978-3-030-27712-3 ISBN 978-3-030-27713-0 (eBook)
https://doi.org/10.1007/978-3-030-27713-0

© Springer Nature Switzerland AG 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

Welcome to the proceedings of the 7th Conference on Cloud Computing &
Big Data (JCC&BD 2019), held in La Plata, Argentina, during June 24–28, 2019.
JCC&BD 2019 was organized by the School of Computer Science of the National
University of La Plata (UNLP).
Since 2013, the Conference on Cloud Computing & Big Data (JCC&BD) has
been an annual meeting where ideas, projects, scientific results, and applications
in the cloud computing and Big Data areas are exchanged and disseminated.
The conference focuses on topics that foster interaction between academia,
industry, and other interested parties.
JCC&BD 2019 covered the following topics: cloud, edge, fog, accelerator, green,
and mobile computing; cloud infrastructure and virtualization; data analytics,
data intelligence, and data visualization; machine and deep learning; and special
topics related to emerging technologies. In addition, special activities were also
carried out, including one plenary lecture, one discussion panel, and two
post-graduate courses.
In this edition, the conference received 31 submissions. All submissions were
peer-reviewed by three referees (single-blind review) and evaluated on the basis
of technical quality, relevance, significance, and clarity. Following the
recommendations of the reviewers, 12 papers were selected for this book (39%
acceptance rate). We hope readers will find these contributions useful and
inspiring for their future research.
Special thanks to all the people who contributed to the conference's success: the
Program and Organizing Committees, reviewers, speakers, authors, and all
conference attendees. Finally, we would like to thank Springer for their support
in publishing this book.

June 2019

Marcelo Naiouf
Franco Chichizola
Enzo Rucci
Organization

General Chair
Armando De Giusti Universidad Nacional de La Plata-CONICET, Argentina

Program Committee Chairs


Marcelo Naiouf Universidad Nacional de La Plata, Argentina
Franco Chichizola Universidad Nacional de La Plata, Argentina
Enzo Rucci Universidad Nacional de La Plata, Argentina

Program Committee
María José Abásolo Universidad Nacional de La Plata-CIC, Argentina
José Aguilar Universidad de Los Andes, Venezuela
Jorge Ardenghi Universidad Nacional del Sur, Argentina
Javier Balladini Universidad Nacional del Comahue, Argentina
Oscar Bria Universidad Nacional de La Plata-INVAP, Argentina
Silvia Castro Universidad Nacional del Sur, Argentina
Laura De Giusti Universidad Nacional de La Plata-CIC, Argentina
Mónica Denham Universidad Nacional de Río Negro-CONICET, Argentina
Javier Diaz Universidad Nacional de La Plata, Argentina
Ramón Doallo Universidade da Coruña, Spain
Marcelo Errecalde Universidad Nacional de San Luis, Argentina
Elsa Estevez Universidad Nacional del Sur-CONICET, Argentina
Aurelio Fernandez Bariviera Universitat Rovira i Virgili, Spain
Fernando Emmanuel Frati Universidad Nacional de Chilecito, Argentina
Carlos Garcia Garino Universidad Nacional de Cuyo, Argentina
Adriana Angélica Gaudiani Universidad Nacional de General Sarmiento, Argentina
Graciela Verónica Gil Costa Universidad Nacional de San Luis-CONICET, Argentina
Roberto Guerrero Universidad Nacional de San Luis, Argentina
Waldo Hasperué Universidad Nacional de La Plata-CIC, Argentina
Francisco Daniel Igual Peña Universidad Complutense de Madrid, Spain
Laura Lanzarini Universidad Nacional de La Plata, Argentina
Guillermo Leguizamón Universidad Nacional de San Luis, Argentina
Emilio Luque Fadón Universidad Autónoma de Barcelona, Spain
Mauricio Marín Universidad de Santiago de Chile, Chile


Luis Marrone Universidad Nacional de La Plata, Argentina
Katzalin Olcoz Herrero Universidad Complutense de Madrid, Spain
José Angel Olivas Varela Universidad de Castilla-La Mancha, Spain
Xoan Pardo Universidade da Coruña, Spain
María Fabiana Piccoli Universidad Nacional de San Luis, Argentina
Luis Piñuel Universidad Complutense de Madrid, Spain
Adrian Pousa Universidad Nacional de La Plata, Argentina
Marcela Printista Universidad Nacional de San Luis, Argentina
Dolores Isabel Rexachs del Rosario Universidad Autónoma de Barcelona, Spain
Nelson Rodríguez Universidad Nacional de San Juan, Argentina
Juan Carlos Saez Alcaide Universidad Complutense de Madrid, Spain
Victoria Sanz Universidad Nacional de La Plata, Argentina
Remo Suppi Universidad Autónoma de Barcelona, Spain
Francisco Tirado Fernández Universidad Complutense de Madrid, Spain
Juan Touriño Dominguez Universidade da Coruña, Spain
Gonzalo Zarza Globant, Argentina

Sponsors

Sistema Nacional de Computación de Alto Desempeño
Agencia Nacional de Promoción Científica y Tecnológica
Red de Universidades Nacionales con Carreras de Informática
Contents

Cloud Computing and HPC

Detecting Time-Fragmented Cache Attacks Against AES Using
Performance Monitoring Counters . . . . . . . . . . . . . . . . . . . . . . 3
Iván Prada, Francisco D. Igual, and Katzalin Olcoz

Hybrid Elastic ARM&Cloud HPC Collaborative Platform
for Generic Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
David Petrocelli, Armando De Giusti, and Marcelo Naiouf

Benchmark Based on Application Signature to Analyze and Predict
Their Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Felipe Tirado, Alvaro Wong, Dolores Rexachs, and Emilio Luque

Evaluating Performance of Web Applications in (Cloud)
Virtualized Environments . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Fernando G. Tinetti and Christian Rodríguez

Intelligent Distributed System for Energy Efficient Control . . . . . . . . 51
Martín Pi Puig, Juan Manuel Paniego, Santiago Medina,
Sebastián Rodríguez Eguren, Leandro Libutti, Julieta Lanciotti,
Joaquin De Antueno, Cesar Estrebou, Franco Chichizola,
and Laura De Giusti

Heap-Based Algorithms to Accelerate Fingerprint Matching
on Parallel Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Ricardo J. Barrientos, Ruber Hernández-García, Kevin Ortega,
Emilio Luque, and Daniel Peralta

Big Data and Data Intelligence

An Analysis of Local and Global Solutions to Address Big Data
Imbalanced Classification: A Case Study with SMOTE Preprocessing . . . . . 75
María José Basgall, Waldo Hasperué, Marcelo Naiouf,
Alberto Fernández, and Francisco Herrera

Data Analytics for the Cryptocurrencies Behavior . . . . . . . . . . . . . 86
Eduardo Sánchez, Jose A. Olivas, and Francisco P. Romero

Measuring (in)variances in Convolutional Networks . . . . . . . . . . . . . 98
Facundo Quiroga, Jordina Torrents-Barrena, Laura Lanzarini,
and Domenec Puig

Database NewSQL Performance Evaluation for Big Data
in the Public Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
María Murazzo, Pablo Gómez, Nelson Rodríguez, and Diego Medel

Mobile Computing

A Study of Non-functional Requirements in Apps for Mobile Devices . . . . . 125
Leonardo Corbalán, Pablo Thomas, Lisandro Delía, Germán Cáseres,
Juan Fernández Sosa, Fernando Tesone, and Patricia Pesado

Mobile and Wearable Computing in Patagonian Wilderness . . . . . . . . . . 137
Samuel Almonacid, María R. Klagges, Pablo Navarro,
Leonardo Morales, Bruno Pazos, Alexandra Contreras Puigbó,
and Diego Firmenich

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


Cloud Computing and HPC
Detecting Time-Fragmented Cache
Attacks Against AES Using Performance
Monitoring Counters

Iván Prada, Francisco D. Igual, and Katzalin Olcoz

Departamento de Arquitectura de Computadores y Automática,
Universidad Complutense de Madrid, 28040 Madrid, Spain
{ivprada,figual,katzalin}@ucm.es

Abstract. Cache timing attacks use shared caches in multi-core processors as
side channels to extract information from victim processes. These attacks are
particularly dangerous in cloud infrastructures, in which the deployed
countermeasures cause collateral effects in terms of performance loss and
increase in energy consumption. We propose to monitor the victim process using
an independent monitoring (detector) process, which continuously measures
selected Performance Monitoring Counters (PMCs) to detect the presence of an
attack. Ad-hoc countermeasures can then be applied only when such a risky
situation arises. In our case, the victim process is the Advanced Encryption
Standard (AES) encryption algorithm and the attack is performed by means of
random encryption requests. We demonstrate that PMCs are a feasible tool to
detect the attack and that sampling PMCs at high frequencies is worse than
sampling at lower frequencies in terms of detection capabilities, particularly
when the attack is fragmented in time to try to hide it from detection.

Keywords: Cache attacks · Flush+Reload · AES ·
Performance Monitoring Counters

1 Introduction

In January 2018, Horn [9] from Google Project Zero and a group of researchers
led by Paul Kocher independently disclosed three vulnerabilities, named Spectre
(variants 1 and 2) and Meltdown. They discovered that data cache timing could
be used to extract information about memory contents using speculative
execution. Since that moment, new variants of these transient execution attacks
have been disclosed, such as Foreshadow or NetSpectre, to name just two of
them [5].
These attacks exploit speculative and out-of-order execution in
high-performance microarchitectures together with the fact that in modern
multi-core architectures some resources are shared across cores. Hence, a
malicious process being executed in one core of the system can extract
information from a victim executed in a different core. The resource most
commonly used as a side channel to extract information is the shared cache [2].

© Springer Nature Switzerland AG 2019
M. Naiouf et al. (Eds.): JCC&BD 2019, CCIS 1050, pp. 3–15, 2019.
https://doi.org/10.1007/978-3-030-27713-0_1
This problem is particularly important in cloud environments, where not only do
multiple users share a multi-core server, but multiple virtual machines can
also co-reside on the same core due to consolidation in order to save energy.
Moreover, the use of simultaneous multithreading techniques, such as Intel's
Hyperthreading technology, allows two or more logical cores per physical core,
increasing the degree of resource sharing between users.
There has been a proliferation of ad-hoc defenses, mainly microcode and
software patches for the operating system and virtual machine monitor. Besides,
Intel announced hardware mitigations in its Cascade Lake processors, trying to
reduce the performance loss due to the countermeasures for some of the
attacks [11].
However, the impact of countermeasures on performance is still non-negligible
and, according to [5], varies from 0% to almost 75%. Thus, in most situations,
security comes at the expense of lower performance and higher energy
consumption (due to disabling consolidation and hyperthreading).
In this paper, we propose a new attack detection tool based on the deployment
of a process running in the same core as the victim process it protects, which
detects situations in which an attack is being performed. Following this idea,
countermeasures are only taken when the risk level justifies the cost.
The contribution of the paper is two-fold:

– We implement and describe the attack, and design and implement a detector
for it based on Performance Monitoring Counters (PMCs), evaluating its
detection capabilities at different sampling frequencies and showing that high
sampling frequencies (100 µs) are noisier than lower ones (10 to 100 ms).
– We show that splitting the attack into small pieces and distributing those
pieces in time decreases detection capability in a different way for the
different detection sampling frequencies. Only low sampling frequencies
(periods of 10 ms or longer) are still able to detect the time-fragmented
attack.

The rest of the paper is structured as follows: Sect. 2 reviews the most
relevant works in the field; Sect. 3 outlines the main concepts needed for the
correct understanding of the attack and detection strategy. Then, the attack
implemented, detection using PMCs, and the time-fragmented attack are presented
in Sects. 4, 5 and 6, respectively. Finally, conclusions are presented in
Sect. 7.

2 Related Work
Detailed surveys on microarchitectural timing attacks in general [2,8] and
cache timing attacks in particular [12] can be found in the literature.
Besides, Canella et al. [5] perform a systematic evaluation of transient
execution attacks.
Time-driven attacks against the shared and inclusive Last Level Cache (LLC)
are mainly based on Flush&Reload [15] and its variants. For instance, Briongos
et al. [4] extract the key from the AES T-table-based encryption algorithm
using improvements over the original attack.
Recently, Performance Monitoring Counters have been used to detect the attack.
Chiappetta et al. [6] monitor both victim and attacker, while CloudRadar [16]
monitors all the virtual machines running in the system. CacheShield [3] only
monitors the victim process to detect attacks on both AES and RSA algorithms.
None of them considered trying to hide the attack by dividing it into small
pieces distributed in time. Our approach is similar to CacheShield [3] in terms
of functionality, but we perform a more detailed study of how the specific
timing of the attack affects the detection capability.

3 Background Concepts
For a correct understanding of the attacks and techniques described hereafter,
further details on two architectural concepts with direct impact on the attacks
are required: cache inclusion policies and memory de-duplication as a specific case
of shared memory. Then, the basics of the Flush&Reload attack are outlined.

3.1 Shared Caches and Inclusion Policies in Modern Multi-cores


Modern multi-core processors feature multi-level caches in which levels can be
classified as shared or private across cores, and hierarchies as inclusive,
non-inclusive or exclusive, depending on whether the content of a cache level
is present in lower cache levels. Of special interest for us is the combination
of shared/inclusive cache levels, such as the LLC in modern Intel multi-cores;
in this scenario, a process executed on a specific core can produce side
effects on independent processes executed on a different core. This phenomenon
can be exploited to perform cache-timing attacks. Supplementary techniques,
such as Intel's Cache Allocation Technology (CAT [13]), can be leveraged to
isolate specific LLC ways in order to boost performance (reducing contention),
but also to mitigate the effects of potential attacks in this type of
processors and situations.

3.2 Shared Memory and Memory De-duplication


Modern operating systems, such as Linux, make intensive use of shared memory
across processes to improve memory usage efficiency. Some situations (e.g.
parent-child process hierarchies generated through fork()) are easily
trackable, but sharing memory pages across independent processes requires
sophisticated ad-hoc techniques. This is a very common scenario in
multi-virtual-machine (multi-VM) deployments sharing the same physical
resources, for example.
Memory de-duplication is a specific shared-memory technique, designed to reduce
the memory footprint in scenarios in which a hypervisor shares memory pages
with the same contents across different virtual machines, but with impact also
on non-virtualized environments comprising random non-related processes. In the
Linux implementation (KSM, Kernel Samepage Merging), a kernel thread
periodically checks every page in registered memory sections and calculates a
hash of its contents. This hash is then used to search for other pages with
identical contents. Upon success, pages are considered identical and merged,
saving memory space. Processes that reference the original pages are updated to
point to the merged one. Only after a write operation from one of the VMs (or
processes) does the sharing finish, and the corresponding page is copied via
COW (Copy-on-Write).
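The hash-then-compare merging step can be sketched as follows. This is a simplification of KSM's actual stable/unstable tree machinery: the function names and the single hash table are ours, not the kernel's.

```python
import hashlib

def deduplicate(pages):
    """Merge pages with identical contents, KSM-style (simplified sketch).

    `pages` maps a page id to its contents (bytes). Returns a mapping
    from each page id to the id of the canonical (merged) copy.
    """
    canonical_by_hash = {}   # content hash -> id of the first page seen
    mapping = {}
    for pid, contents in pages.items():
        h = hashlib.sha256(contents).digest()
        owner = canonical_by_hash.get(h)
        # KSM also compares the raw contents before merging, since a
        # hash match alone is not proof of identity; we do the same.
        if owner is not None and pages[owner] == contents:
            mapping[pid] = owner          # share the already-merged copy
        else:
            canonical_by_hash[h] = pid
            mapping[pid] = pid            # page keeps its own frame
    return mapping

def write_page(pages, mapping, pid, new_contents):
    """Copy-on-write: a write breaks the sharing for this page only."""
    mapping[pid] = pid
    pages[pid] = new_contents
```

Note that a write by one sharer only un-shares that page; the merged read-only page stays shared for as long as nobody writes to it, which is precisely the property the attack described below relies on.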

3.3 Flush&Reload

The Flush&Reload attack was first introduced in [15] and has been used as a
baseline by later works such as [10], among others. It takes advantage of the
combination of inclusive shared caches and memory de-duplication. The basics of
the attack are as follows: the attacker runs in a core which shares the last
level cache with the victim, and manages to share some page with it through
memory de-duplication. The page can either contain shared data (e.g. the tables
used by the AES encryption algorithm) or shared instructions (for the attack
against RSA). In the first phase of the attack (Flush), the attacker evicts the
shared blocks from its own private cache, causing the eviction of those data
from the shared cache and all the other caches. In the second phase, the victim
performs some random work, bringing some of the shared data into the cache
again. In the last phase (Reload), the attacker accesses every shared datum,
measuring the time it takes, and guesses which data have been used by the
victim (cache hits) and which ones were not (cache misses). From this
information, the attacker extracts relevant data, such as the AES key.
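The three phases can be modeled with a toy simulation (ours, purely illustrative): the LLC is reduced to a set of line identifiers, and the reload timing measurement is replaced by a membership test (hit = fast access).

```python
# Toy model of Flush&Reload. The names, the single global "llc" set and
# the 16-line table size are illustrative assumptions, not the real attack.

llc = set()

def flush(lines):
    """Phase 1: the attacker evicts the shared lines from every cache level."""
    llc.difference_update(lines)

def victim_encrypt(accessed):
    """Phase 2: the victim touches some table lines, pulling them into the LLC."""
    llc.update(accessed)

def reload(lines):
    """Phase 3: the attacker re-reads each line; a fast access (hit) reveals
    that the victim used the line between flush and reload."""
    return {line: (line in llc) for line in lines}

table_lines = range(16)        # one T-table spans 16 cache lines here
flush(table_lines)
victim_encrypt({3, 7, 11})     # secret-dependent accesses by the victim
hits = reload(table_lines)
used = sorted(l for l, hit in hits.items() if hit)
```

In the real attack the membership test is a timed memory access (hit vs. miss latency) and the flush is performed with an instruction such as `clflush`; everything else follows this same three-phase structure.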

4 Implementation of the AES Attack


4.1 Experimental Setup

The experimental setup was deployed on a dual-socket server featuring two Intel
Xeon Gold 6138 chips with 20 cores each (hyperthreading was disabled), running
at 2 GHz. The memory hierarchy comprises 96 GB of DDR4 RAM, 28 MB of unified
L3 cache per chip (11-way associative), 1 MB of unified L2 cache per core
(16-way associative) and 32 KB of L1 cache per core (8-way associative). The
cache line size is 64 bytes. The L1 TLB comprises 64 entries (4-way
associative) with a page size of 4 KB.
From the software perspective, we employed a Debian GNU/Linux distribution with
kernel 4.9.51-1 and GCC 6.3.0. PAPI version 5.5.1.0 [14], built on top of the
Linux perf_event subsystem, was employed to extract performance counter
information. OpenSSL version 1.1.1b was used to implement the cryptographic
algorithm, compiled with the no-asm flag when using T-tables.

4.2 AES Algorithm

The AES algorithm is iterative, so obtaining each encryption needs the
execution of several rounds. In [1,7], the authors develop the underlying
theory of polynomials with coefficients in GF(2^8) (Galois Field of order 256),
which is the base for the extraction of transformation values of a single
round. The round transformation consists of four steps for the first rounds
(SubByte, ShiftRows, MixColumns and AddRoundKey) and three for the last round
(all but MixColumns). The number of rounds depends on the length of the key;
in our case, for a 128-bit key, we need 10 rounds. As stated by Daemen and
Rijmen in [7], the round transformation of AES can be optimized with 4 look-up
tables (T_i, i ∈ 0…3) that contain the pre-calculated values for each of the
potential inputs. This way, the encryption round is simplified to a few XOR
operations and takes the form:

    S^k_{i,j} = T_i[s^k_{i,j}] ⊕ RoundKey^k_{i,j}                    (1)

for the main rounds, and for the last round:

    S_{i,j} = T_{(i+2)%4}[s^{10}_{i,j}] ⊕ RoundKey^{10}_{i,j}        (2)

with S_{i,j} the encrypted char, s^k_{i,j} the previous state char,
k ∈ 1…9 the k-th round, and s^1 the original message (s^0) XORed with
RoundKey^0.

4.3 Implementation of the Attack

The basis of the attack is simple: using the T-table optimization to extract
the last round key (LRK) of AES. In Sect. 2, we exposed previous algorithms for
extraction of the AES key. We use the approach of [4] to break the OpenSSL
1.1.1b AES-128 implementation (the library has to be compiled with the no-asm
flag, so that it uses the T-table implementation). The attack begins by forcing
the de-duplication of library pages (see Sect. 3.2). This step is mandatory so
that victim and attacker can share pages of the dynamic library, hence allowing
the observation of the memory addresses assigned to the AES tables. In order to
obtain the origin of the dynamic library, we proceed by opening the library and
performing a memory projection (through mmap). Proceeding this way, the KSM
daemon will detect a match between the contents of the mapped file and the
loaded dynamic library, and will force the de-duplication. We have
experimentally observed a delay of around 300 encryptions before the
de-duplication of pages is triggered. At that point, the attack can commence.
The start addresses of each table are obtained by decompiling the library and
determining the offset of each table with respect to its start address.
As seen in Sect. 2, there are different ways to extract the key based on the
information left by the last round of encryption. In this work, we check
whether a cache line¹ resides in L3 upon completion of the encryption process.

¹ A cache line – 64 bytes in our target architecture – can store 16 elements
of a table, provided each element is stored as a 4-byte unsigned integer.

These measurements have been carried out empirically by a Flush&Reload
technique (see Sect. 3.3) for each one of the four tables. In the following,
T_j is the corresponding line of the observed table; the attack proceeds by
first performing a flush operation of different lines of the table, followed by
a random encryption request. The response to this request is then stored (S[i]
stores the encrypted text of the i-th encryption), together with the
information that will be necessary to perform the attack: a matrix X is created
and X_{ij} is set to 1 if line T_j was in L3 after completing the i-th
encryption, and 0 otherwise.
Once these data are obtained, we proceed by searching for the most probable
characters belonging to the last round key, following the pseudo-code depicted
in Listing 1.1, which returns, for each position of the last round key, those
characters with the lowest probability. Hence, we will select:

    LastRoundKey_{i,j} = min_{t ∈ 0,…,num_encrypt} LRK_{i,j}[t]      (3)

Once the characters of the last round key have been obtained, the last step is
just an inversion of the code used by AES to obtain the last round key, and
hence the initial key of the server.
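That final inversion step can be illustrated with a short sketch (ours, not the paper's code): the AES-128 key schedule is invertible, so knowing the round-10 key yields the initial key. Function names are ours; the S-box is generated from its GF(2^8) definition rather than hard-coded.

```python
# Recover the AES-128 initial key from the last round key by running the
# key schedule backwards. Illustrative sketch; names are ours.

def _gmul(a, b):                        # multiplication in GF(2^8)
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return p

def _make_sbox():                       # S-box: GF(2^8) inverse + affine map
    inv = {0: 0}
    for x in range(1, 256):
        for y in range(1, 256):
            if _gmul(x, y) == 1:
                inv[x] = y
                break
    rotl = lambda b, n: ((b << n) | (b >> (8 - n))) & 0xFF
    return [inv[x] ^ rotl(inv[x], 1) ^ rotl(inv[x], 2)
            ^ rotl(inv[x], 3) ^ rotl(inv[x], 4) ^ 0x63 for x in range(256)]

SBOX = _make_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def _g(word, rcon):                     # RotWord + SubWord + Rcon
    w = [SBOX[b] for b in word[1:] + word[:1]]
    return [w[0] ^ rcon] + w[1:]

def expand_key(key):                    # forward AES-128 key schedule
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = _g(w[i - 1], RCON[i // 4 - 1]) if i % 4 == 0 else w[i - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return w

def recover_key(last_round_key):        # invert the schedule from round 10
    w = [None] * 40 + [list(last_round_key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(43, 3, -1):          # w[i] = w[i-4] ^ f(w[i-1])  =>
        t = _g(w[i - 1], RCON[i // 4 - 1]) if i % 4 == 0 else w[i - 1]
        w[i - 4] = [a ^ b for a, b in zip(w[i], t)]
    return bytes(sum(w[:4], []))
```

For the FIPS-197 example key 2b7e1516…09cf4f3c, the schedule ends in the round-10 key d014f9a8…b6630ca6, and running `recover_key` on those 16 bytes returns the original key.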

Listing 1.1. Pseudo-code to obtain Last Round Key candidates.

for t in 0, …, num_encrypt
    for i in 0, 1, 2, 3
        if X[(i+2)%4][t] == 0
            for j in 0, 1, 2, 3
                for l in 0, …, line_elems
                    LRK_{i,j}[S^t_{i,j} ⊕ T_{(i+2)%4}[l]]++
                end for
            end for
        end if
    end for
end for
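A direct transcription of the counting loop above, with synthetic stand-ins for the measured data (the T-table contents, ciphertexts S and observation matrix X below are illustrative placeholders, not real measurements):

```python
# Listing 1.1 transcribed to Python. X[(i+2)%4][t] == 0 means table line
# T_(i+2)%4 was NOT in L3 after the t-th encryption; S[t][i][j] is the
# corresponding ciphertext byte. All input values here are synthetic.

num_encrypt = 4
line_elems = 16            # 16 four-byte table entries per 64-byte line

T = [list(range(256)) for _ in range(4)]                     # toy T-tables
S = [[[(t + 16 * i + j) & 0xFF for j in range(4)]
      for i in range(4)] for t in range(num_encrypt)]        # toy ciphertexts
X = [[t % 2 for t in range(num_encrypt)] for _ in range(4)]  # toy observations

# LRK[i][j][c] counts how often candidate byte c is compatible with the
# observation that a table line was left unused by the encryption.
LRK = [[[0] * 256 for _ in range(4)] for _ in range(4)]

for t in range(num_encrypt):
    for i in range(4):
        if X[(i + 2) % 4][t] == 0:
            for j in range(4):
                for l in range(line_elems):
                    LRK[i][j][S[t][i][j] ^ T[(i + 2) % 4][l]] += 1

# Eq. (3): keep, for each key position, the candidate with the lowest count.
candidates = [[min(range(256), key=LRK[i][j].__getitem__)
               for j in range(4)] for i in range(4)]
```

With real measurements, each unused line rules out the candidate bytes it would have produced, so the true key byte accumulates the lowest count.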

5 Attack Detection Using PMCs

Cache timing attacks cause an anomalously high number of L3 misses, due to the
flush and reload activity; hence, measuring L3 misses is a straightforward
mechanism to detect them. As explained in Sect. 2, there have been some works
in this field and most of them use L3 misses.
In addition to L3 cache misses, we chose the total number of load instructions
executed by the victim as a way to estimate the number of encryptions being
performed, so that the ratio between both counters provides a metric that is
constant for different levels of load in the victim. Thus, our detection metric
is the number of L3 cache misses (in thousands) per load instruction
(1000 * L3 misses / LD instructions).
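The metric and a simple thresholding rule can be sketched as follows. The sample counter values and the 0.5 threshold are illustrative assumptions; the paper only states that the metric is close to 1 under attack and 10 to 100 times lower otherwise.

```python
# Detection metric from the text: thousands of L3 misses per load instruction.

def miss_ratio(l3_misses, ld_instructions):
    return 1000 * l3_misses / ld_instructions

def under_attack(samples, threshold=0.5):
    """Flag an attack if the metric stays above `threshold` after the
    initial cold-miss window (first sample discarded here).
    `samples` is a list of (l3_misses, ld_instructions) pairs."""
    return all(miss_ratio(m, ld) > threshold for m, ld in samples[1:])

# Synthetic counter traces (illustrative values only):
attack_trace = [(9_000, 1_000_000), (950, 1_000_000), (1_020, 1_000_000)]
quiet_trace  = [(9_000, 1_000_000), (30, 1_000_000),  (12, 1_000_000)]
```

Both traces start with the same cold-miss spike; only the attack trace keeps the metric near 1 afterwards, which is the signature the detector looks for.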


Fig. 1. Results obtained from performance counters at different sampling rates. Each
one of the four rows reports the results obtained for the L3 cache misses (above) and
number of load instructions (below) in the victim under attack (left) and with no
attack (right). The rows correspond to the different sampling rates analyzed: 100 µs
(first row), 1 ms (second row), 10 ms (third row) and 100 ms (last row).

Figure 1 reports the values of the chosen PMCs for the victim both in the
presence and in the absence of an attack. The experiment was repeated at
different sampling frequencies, to study their effect on the detection
capability. Figure 2 shows the values of the proposed metric for the results in
Fig. 1. The first observation is that the selected metric is an effective
mechanism to detect the attack; the values under attack are close to 1, while
the values without attack are 10 to 100 times lower. In this situation, the
attack is detected if, after the initial cold misses (the first 100 ms in our
experiments), the value remains close to 1.
A second conclusion from Fig. 2 is that sampling PMCs at 100 µs leads to
noisier results for the no-attack experiment. Given that this sampling rate
also produces a higher overhead, we will not use that sampling frequency in the
following.

6 Analysis of a Time-Fragmented Attack

In this section, we propose a complete set of experiments in order to determine
whether dividing the attack into discrete pieces and spreading their execution
over time can disguise the attack and defeat our detector.
We proceed by dividing the 50,000 encryptions needed for the attack into
equally-sized groups (or “packets” in the following) of encryptions. We have
evaluated packets of decreasing sizes, namely 5,000, 500, 50 and 5 encryptions.
Furthermore, in order to analyze the effect of increasing the gap between
packets, we refer to the separation between two consecutive packets as the
“interval”. In our experiments, we vary the interval between packets from 10 µs
to 100 ms. For each combination of packet size and interval, we used the sampling
rates of the previous section except the highest one: 1 ms, 10 ms and 100 ms.
The most interesting results are obtained for small packets and large intervals,
as expected. Figure 3 shows the results when the attack is divided into 100
packets of 500 encryptions, and the time interval between two consecutive packets
is 10 ms. The sampling rate is either 1 ms or 10 ms. The metrics obtained from
the 10 ms samples are close to the usual value for the attack, but the results for
1 ms samples switch between the values corresponding to an attack (close to 1)
and the no-attack values (close to 0). As expected, at the high-resolution
frequency some samples find no difference between attack and no attack, because
they fall in the interval of time between packets of the attack. By contrast,
the low-resolution samples always capture the “big picture”.
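The intuition above — short samples can fall entirely in the gap between attack packets, while long samples always overlap some attack activity — can be checked with a toy timeline model. This is only a sketch under assumed durations, not a reproduction of the measurements:

```python
def samples_overlapping_attack(packet_ms, interval_ms, n_packets, sample_ms):
    """Fraction of fixed-length sample windows that overlap at least one
    attack packet, on a timeline of back-to-back packet+interval cycles."""
    period = packet_ms + interval_ms
    total = period * n_packets
    n_samples = int(total // sample_ms)
    hits = 0
    for i in range(n_samples):
        start, end = i * sample_ms, (i + 1) * sample_ms
        # packet k occupies the window [k*period, k*period + packet_ms)
        k0, k1 = int(start // period), int(end // period)
        if any(max(start, k * period) < min(end, k * period + packet_ms)
               for k in range(k0, k1 + 1)):
            hits += 1
    return hits / n_samples

# With 1 ms packets spaced 10 ms apart, most 1 ms samples miss the attack,
# while a sample as long as one packet period always overlaps a packet.
short = samples_overlapping_attack(packet_ms=1, interval_ms=10,
                                   n_packets=100, sample_ms=1)
long_ = samples_overlapping_attack(packet_ms=1, interval_ms=10,
                                   n_packets=100, sample_ms=11)
```

In this toy model only about 1 in 11 of the short samples sees any attack activity, whereas every long sample does — matching the qualitative behavior observed in the experiments.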

Fig. 2. Metric evaluation for attack detection at different sampling rates. Each row
displays the proposed metric (1000 × L3 cache misses per load instruction) in the victim
under attack (left-side column) and without attack (right-side column). The four rows
correspond to the four sampling rates analyzed: 100 µs (first row), 1 ms (second row),
10 ms (third row) and 100 ms (last row).

Figure 4 reports an equivalent evaluation for packets 10 times smaller, with
the aim of reducing the time in which the attack can be detected. In this case,
the difference between the results obtained at different sampling rates is more
evident. The time interval between two consecutive packets is 10 ms and the
sampling rates are 1, 10 and 100 ms. For the 1 ms sampling rate, on the one hand,
the no-attack experiment shows a higher number of L3 misses due to the separation
between packets: during those intervals some cache lines are evicted by the
normal functioning of the system. On the other hand, the experiment with attack
also switches from low to high values of the metric, as in the previous experiment.
This fact can be observed in Fig. 5, which is an augmented view of the results
for the attack with 1 ms sampling. It confirms that the attack can be more easily
hidden from the high-resolution samples than from the lower-resolution ones.
Finally, the packet size is decreased to 5 encryptions. In this experiment, when
the interval between packets is longer than 1 ms the attack stops working. The
results for a 1 ms interval are shown in Fig. 6 and they confirm that our detection
metric is able to detect the attack with a sampling rate of 10 ms or larger.

Fig. 3. Detection metric for packets of 500 encryptions with interval of 10 ms. There
is an attack in the left column and no attack in the right one. The sampling rate is
1 ms (top) and 10 ms (bottom).

Fig. 4. Detection metric for packets of 50 encryptions with interval of 10 ms. There is
an attack in the left column and no attack in the right one. The sampling rate is 1 ms
(top), 10 ms (middle), and 100 ms (bottom).

Fig. 5. Augmented view of the detection metric for the attack with packets of 50
encryptions, interval of 10 ms and sampling rate of 1 ms.

Fig. 6. Detection metric for packets of 5 encryptions with interval of 1 ms. There is
an attack in the left column and no attack in the right one. The sampling rate is 1 ms
(above) and 10 ms (below).

7 Conclusions

In this paper, we proposed a mechanism to protect victim processes running in
multi-core servers (either native or inside a VM) against cache timing attacks
by adding to the server a new detector process that monitors only the PMCs
associated with the victim process. To that end, we implemented a cache timing
attack against the table-based AES encryption algorithm. We used 1000 × L3 cache
misses per load instruction as a detection metric and achieved detection of the
attack for all the different sampling rates, although sampling at high frequency
performs worse than at lower frequencies.
We have tried to hide the attack by dividing it into small parts and interleaving
time slots with and without attack activity. In that scenario, sampling PMCs at
high frequency makes detection of the attack more difficult. Again, lower-frequency
monitoring (10 ms and 100 ms) results in higher detection capability.

Acknowledgements. This work is supported by the EU FEDER and the Spanish
MINECO under grant number TIN2015-65277-R and by Spanish CM under grant
S2018/TCS-4423. We would like to thank Samira Briongos and Pedro Malagón for
their helpful comments on some details of the attack implementation.
Hybrid Elastic ARM&Cloud HPC
Collaborative Platform for Generic Tasks

David Petrocelli1,2(&), Armando De Giusti3,4, and Marcelo Naiouf3

1 Computer Science School, La Plata National University, 50 and 120,
La Plata, Argentina
[email protected]
2 Lujan National University, 5 and 7 Routes, Luján, Argentina
3 Instituto de Investigación en Informática LIDI (III-LIDI),
Computer Science School, La Plata National University - CIC-PBA,
50 and 120, Buenos Aires, Argentina
{degiusti,mnaiouf}@lidi.info.unlp.edu.ar
4 CONICET - National Council of Scientific and Technical Research,
Buenos Aires, Argentina

Abstract. Compute-heavy workloads are currently run on hybrid HPC structures
using x86 CPUs and GPUs from Intel, AMD, or NVidia, which have
extremely high energy and financial costs. However, thanks to the incredible
progress made on CPUs and GPUs based on the ARM architecture and their
ubiquity in today’s mobile devices, it’s possible to conceive of a low-cost
solution for our world’s data processing needs.
Every year ARM-based mobile devices become more powerful and efficient, and
come in ever smaller packages with ever growing storage. At the same time,
smartphones waste these capabilities at night while they’re charging. This
represents billions of idle devices whose processing power is not being utilized.
For that reason, the objective of this paper is to evaluate and develop a hybrid,
distributed, scalable, and redundant platform that allows for the utilization of
these idle devices through a cloud-based administration service. The system
would allow for massive improvements in terms of efficiency and cost for
compute-heavy workloads. During the evaluation phase, we were able to
establish savings in power and cost significant enough to justify exploring it as a
serious alternative to traditional computing architectures.

Keywords: Smartphone · Distributed Computing · Cloud Computing ·
Mobile Computing · Collaborative Computing · Android · ARM · HPC

1 Introduction

Since their inception, x86 CPUs (Intel/AMD) and their corresponding GPUs
(Intel/AMD/NVidia) were developed to solve complex problems without taking into
account their energy consumption. It was only in the 21st century that, with the
exponential growth of data centers, the semiconductor giants began to worry and
seriously tackle their chips’ TDP, optimize their architectures for higher IPC, and lower
their power consumption [1–3]. If we analyze the current context, the power and
cooling bills are two of the biggest costs for data centers [4, 5]. It is for this reason
that energy consumption and efficiency are two key issues when designing any
computing system nowadays, and it is acknowledged that compute benchmarks must
not be achieved by brute force but rather by optimizing resources and architectures,
keeping in mind the high financial and environmental cost of the energy required by
large data centers.

© Springer Nature Switzerland AG 2019
M. Naiouf et al. (Eds.): JCC&BD 2019, CCIS 1050, pp. 16–27, 2019.
https://doi.org/10.1007/978-3-030-27713-0_2
Unlike x86 processors, ARM-based chips were conceived with power efficiency
as a key priority from their inception, since they target mobile and micro devices,
which are for the most part battery-powered. Although the raw power of
ARM chips is lower than that of their x86 counterparts, they are much more
power-efficient [6–8]. At the same time, ARM-based devices vastly outnumber
traditional x86 computers, so while their individual computing power might be
lower, there is a very large install base with long idle periods while charging that,
if properly managed, could become massive distributed data centers consuming
only a fraction of the energy for the same computing power as their traditional
counterparts. This principle of distributing work during idle time has been used
before on x86 by collaborative initiatives such as SETI@Home and Folding@Home,
which use the computing power of computers that people leave turned on during
idle periods to construct large virtualized data centers. Recently, some researchers
have adapted this collaborative model to work on mobile devices [9–12].
After analyzing the methodology, structures, and results obtained in those case
studies, our team built and evaluated a collaborative platform for HPC based on
ARM-based mobile devices, following in the footsteps of the previous study [8].
The platform receives data fragments and instructions from its clients via the
cloud portal, generates tasks, and distributes them to the worker nodes (mobile
devices) for them to apply their massively parallelized computing power towards
the function requested by the client. Once the task is complete, all worker nodes
return their processed fragments to the central nodes and the final processed data
is stored until the client requests it. In this paper in particular, the implemented
task was video compression using the FFMpeg library, which allows for different
profiles and compression configurations to be requested by the client, as defined
by the previous study [8]. The platform was evaluated through a series of
performance and power-usage metrics on both ARM and x86 chips. This allowed
us to make a comparative analysis between the architectures
and demonstrate that it is completely feasible to offload compute-heavy workloads
to ARM architectures. It also allowed us to compare the power usage for the same
task on ARM-based chips and on their x86 counterparts. An analysis of a cloud-based
x86 architecture (IaaS) was also performed with the objective of estimating the
costs that could be recovered if those tasks were migrated to the collaborative
computing platform.
In order to clearly describe the different aspects of the work, the rest of the paper
is organized in the following manner: Sect. 2 details the implemented protocol, the
technologies used, and the functions developed for it. Section 3 describes the
testing model and metrics used. Based on that section, as well as the data collected
during the experiment, Sect. 4 presents an analysis of the results. Finally, the last
section focuses on the conclusions and future work to further explore the topic.

2 Distributed Architecture for HPC on ARM

The developed architecture implements a hybrid model composed of (a) cloud-based
resources and (b) mobile devices. Figure 1 presents a functional diagram.
The cloud resources (a) carry out the following functions:
• RESTful web server for task reception and delivery
• Admin and task management system
• Database storing statistics (time used per task and power usage)
The mobile devices (b) process the task fragments they receive in a collaborative
and distributed fashion. Each node has access to the application (apk) that allows it to
access the network and includes the video compression logic.

Fig. 1. Functional diagram of the collaborating computing network

2.1 Cloud Nodes

RESTful Web Servers for Task Reception and Delivery. A RESTful web app was
developed using the Spring Java framework so that both clients and nodes can
communicate with the web servers in the network. A RESTController entity was
developed to establish the connection with the distributed queuing system
(RabbitMQ) and the statistics database (MariaDB), and to translate the HTTP
requests from users into specific functions as determined by the roles they possess.
All communication between clients, worker nodes, and servers is done through HTTP
messages encoded in the JSON format, as are the task objects (Messages).
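As a rough illustration of such a JSON-encoded task object, the snippet below round-trips a message through the JSON format; all field names are invented for the example and do not reflect the platform's actual schema:

```python
import json

# Hypothetical task message; field names are illustrative, not the real schema.
task = {
    "taskId": "f3a1-0042",
    "type": "video-compression",
    "codec": "h264",
    "profile": "main",
    "fragmentSeconds": 3,
    "sourceUrl": "https://example.org/fragments/0042.mp4",
}

message = json.dumps(task)      # body sent over HTTP to a worker
restored = json.loads(message)  # what the worker decodes on arrival
assert restored == task
```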
Admin and Task Management System. In order to build an admin and task
management system that is persistent, redundant, and fault-tolerant, the RabbitMQ
middleware was used as a message-queue-based persistence module and was integrated
with the RESTful Java platform. With this tool, we implemented persistent FIFO
queues, configured to offer high cluster availability. The messages are stored in
the exchange format (JSON).
At the same time, the REST Java server implements a manual configuration model
for the content of the general message queue from which worker nodes obtain their
tasks. In the event of a server error, client-side issues, or an execution timeout,
the admin thread moves the task from pending back to available and places it back
on the queue so that another worker can process it.
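The available/pending bookkeeping described above can be sketched as a simplified in-memory model. The real platform relies on RabbitMQ for this, so the class below only illustrates the requeue-on-timeout behaviour:

```python
import time
from collections import deque

class TaskQueue:
    """Toy in-memory model of the available/pending task states."""

    def __init__(self, timeout_s):
        self.available = deque()   # tasks waiting for a worker
        self.pending = {}          # task_id -> (task, time it was taken)
        self.timeout_s = timeout_s

    def put(self, task_id, task):
        self.available.append((task_id, task))

    def take(self):
        """A worker takes a task; it stays pending until completed."""
        task_id, task = self.available.popleft()
        self.pending[task_id] = (task, time.monotonic())
        return task_id, task

    def complete(self, task_id):
        del self.pending[task_id]

    def requeue_stale(self, now=None):
        """Admin thread: move timed-out pending tasks back to available."""
        now = time.monotonic() if now is None else now
        for task_id, (task, taken) in list(self.pending.items()):
            if now - taken > self.timeout_s:
                del self.pending[task_id]
                self.available.append((task_id, task))

q = TaskQueue(timeout_s=0.5)
q.put("t1", {"type": "video-compression"})
q.take()                                     # worker dies before completing
q.requeue_stale(now=time.monotonic() + 1.0)  # simulate the timeout expiring
assert len(q.available) == 1 and not q.pending
```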
Database with Statistics Data Storage (Time and Power Usage). The web server
also includes a link to the MariaDB database through a CRUD schema using the DAO
design pattern. The database registers each executed task, which worker node
executed it, the type of task, the processing time (milliseconds), and the power
usage (watts). This allows us to access that information in order to evaluate the
different architectures and derive their efficiency from a comparative analysis of
their performance and power usage.
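Since each record stores the processing time in milliseconds and the power in watts, the per-task energy follows directly; the figures below are made-up examples, not measurements from the paper:

```python
def energy_joules(time_ms, power_w):
    """Energy consumed by one task: watts x seconds."""
    return power_w * (time_ms / 1000.0)

# Hypothetical records for the same task on two architectures
arm = energy_joules(time_ms=40_000, power_w=4.0)    # 160 J
x86 = energy_joules(time_ms=12_000, power_w=65.0)   # 780 J

# The x86 node is faster, but the ARM node uses far less energy per task
assert arm < x86
```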

2.2 Mobile Worker Nodes


The application for the mobile nodes was developed in the native Android
environment (Android Studio) and an apk was generated from a Java codebase. The
application is essentially a series of modules that allow compute-heavy tasks to
be executed without negatively impacting the OS.
The application was developed using the MVC pattern, based on activities with Java
on Android Studio. A given activity defines a view that allows for interaction with
the user and visualization of the state changes made by the controller, and in turn
spawns an independent worker thread in charge of the compute-heavy workload.
This thread continuously iterates the following process: first it connects to
the REST server and downloads a task that needs to be processed; then it takes the
parameters from the JSON message and applies them to the compression of the source
video with the FFMpeg library, which is integrated in the app, as defined in the
message. Once the video has been compressed, it generates a new JSON message and
sends it along with the compressed video back to the server. Once the cycle is
finished, it attempts to obtain another message from the queue and repeats the
aforementioned process.
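The worker thread's cycle can be sketched as a loop over injected fetch/compress/upload steps. This is a simplified skeleton with stubbed I/O, not the actual Android code, and the task fields are illustrative:

```python
def worker_loop(fetch_task, compress, upload, max_tasks=None):
    """Skeleton of the worker cycle: fetch a task message, run the
    compression it describes, upload the result, and repeat.
    fetch_task() returns a task dict, or None when the queue is empty."""
    done = 0
    while max_tasks is None or done < max_tasks:
        task = fetch_task()              # GET from the REST server (stubbed here)
        if task is None:
            break
        result = compress(task["sourceUrl"], task["profile"])  # FFmpeg in the real app
        upload(task["taskId"], result)   # POST result JSON + compressed video
        done += 1
    return done

# Stub run: a single fake task, no real network or FFmpeg involved.
tasks = [{"taskId": "t1", "sourceUrl": "frag.mp4", "profile": "main"}]
uploaded = {}
n = worker_loop(
    fetch_task=lambda: tasks.pop() if tasks else None,
    compress=lambda src, profile: f"{src}.{profile}.out",
    upload=uploaded.__setitem__,
)
assert n == 1 and uploaded == {"t1": "frag.mp4.main.out"}
```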
It should be noted that, in order to evaluate and compare the different platforms,
a worker node was developed in Java for the x86 architecture with the same
functions as those implemented in the Android app, but without a GUI since the
management is done through the command line.
The codebase and all the tools used in this project are available on GitHub at the
following URL: https://github.com/dpetrocelli/distributedProcessingThesis.

3 Test Model and Metrics

With the goal of evaluating the performance of our prototype, we ran compute tasks
composed of video compression with different sets of parameters on both types of
architectures: ARM-based devices running Android and Intel x86-based devices. We
aim to obtain the performance and power usage numbers for executing this task on
these devices. For this purpose, the following steps were taken: (3.1) Selection of
source videos; (3.2) Configuration of compression profiles and duration of video
fragments; (3.3) Definition of the testing devices (ARM/x86); (3.4) Definition of the
metrics to be measured; (3.5) Construction of the custom power usage meters.

3.1 Selection of Source Videos


In order to generate a test that could saturate devices of both architectures, we
analyzed previous works and configurations [13, 14] and used them as references
for this test. Based on that, we selected three source videos with characteristics
that were relevant for both platforms (codecs used, bitrate, compression level,
frame size, aspect ratio, bits per pixel, etc.). An overview of the most important
properties of each video is given in Table 1.

Table 1. Principal characteristics of the source videos used for compression tests

3.2 Configuration of Compression Profiles and Video Fragment Duration


The chosen codec, AVC h.264 [15] (x264 library on FFMpeg), is widely supported
on current devices and video streaming platforms such as YouTube, Vimeo, Netflix,
etc. The x264 encoding profiles selected to run the tests described in this work
are detailed in Table 2.

Table 2. Description of the properties of compression profiles used in FFMpeg

Four profiles were selected for the two higher compression levels available in the
x264 codec (main, high); these range from simple, low-compression profiles to
complex, high-compression ones. As such, the encoding and compression performed
will match the most common configurations found on video streaming platforms,
which allow the public to access content in different qualities in order to match
their device compatibility and connection bandwidth [16, 17].
In addition, each video is also split into 3-second and 1-second fragments for each
compression level. These are the recommended values for video streaming
services [18, 19].
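A segmented x264 encoding command in this spirit can be assembled as follows; the excerpt does not list the exact flags the authors used, so the preset and bitrate values here are placeholders:

```python
def x264_command(src, profile, preset, bitrate_k, segment_s, out_pattern):
    """Build an FFmpeg argument list for x264 encoding with fixed-length
    segments (flag values here are illustrative placeholders)."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-profile:v", profile,            # "main" or "high"
        "-preset", preset,
        "-b:v", f"{bitrate_k}k",
        "-f", "segment",
        "-segment_time", str(segment_s),  # 3 s or 1 s fragments
        out_pattern,
    ]

cmd = x264_command("source.mp4", "high", "medium", 2500, 3, "out_%03d.mp4")
```

FFmpeg's segment muxer (`-f segment -segment_time N`) splits the output into fixed-length fragments, matching the 3 s and 1 s fragment durations used in the tests.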

3.3 Definition of Testing Equipment (ARM/X86)


In order to evaluate the performance demands of the compression tasks, a mix of
x86 and ARM-based resources was used; these were chosen based on their compute