Module 4

Aneka is a cloud application platform designed for developing cloud computing applications, offering a service-oriented architecture with various services categorized into Fabric, Foundation, and Application Services. It supports dynamic resource management, application execution, and user management, while providing a uniform interface through its Platform Abstraction Layer. Aneka facilitates building cloud environments via infrastructure and logical organization, enabling scalability and efficient resource allocation for distributed computing.

Module - 4

Aneka: Cloud Application Platform, Framework Overview,
Anatomy of the Aneka Container, From the Ground Up: Platform Abstraction Layer,
Fabric Services, Foundation Services, Application Services, Building Aneka Clouds,
Infrastructure Organization, Logical Organization, Private Cloud Deployment Mode,
Public Cloud Deployment Mode, Hybrid Cloud Deployment Mode, Cloud
Programming and Management, Aneka SDK, Management Tools.
Aneka: Cloud Application Platform
What is Aneka?
• Aneka is a software platform for developing cloud computing applications.
• Aneka is a pure PaaS solution for cloud computing.
• Aneka is a cloud middleware product that can be deployed on a heterogeneous set of resources: a network of computers, a multicore server, data centers, virtual cloud infrastructures, or a mixture of these.
• Aneka implements a service-oriented architecture (SOA); services are the fundamental components of an Aneka Cloud.
• Services operate at the container level and, except for the Platform Abstraction Layer, provide developers, users, and administrators with all the features offered by the framework.
Aneka's Capabilities
Aneka Framework Overview with Diagram (Fig. 5.2)
A collection of interconnected containers constitutes the Aneka Cloud: a
single domain in which services are made available to users and developers.
• The container features three different classes of services:
o Fabric Services
o Foundation Services
o Application Services
• Fabric Services take care of infrastructure management for the Aneka Cloud.
• Foundation Services take care of supporting services for the Aneka Cloud.
• Application Services take care of application management and execution.
Various Services offered by Aneka Cloud Platform are:
Elasticity and scaling: By means of the dynamic provisioning service, Aneka supports
dynamic upsizing and downsizing of the infrastructure available to applications.
Runtime management: The runtime machinery is responsible for keeping the infrastructure up
and running and serves as a hosting environment for services.
Resource management: Aneka is an elastic infrastructure in which resources are added and
removed dynamically according to application needs and user requirements.
Application management: A specific subset of services is devoted to managing applications.
These services include scheduling, execution, monitoring, and storage management.
User management: Aneka is a multitenant distributed environment in which multiple
applications, potentially belonging to different users, are executed. The framework provides
an extensible user system via which it is possible to define users, groups, and permissions.
QoS / SLA management and billing: Within a cloud environment, application execution is
metered and billed. Aneka provides a collection of services that coordinate to account for
the resource usage of each application and to bill the owning user accordingly.
Anatomy of the Aneka container
• The Aneka container is the building block of Aneka Clouds and
represents the runtime machinery available to services and applications.
• The container is the unit of deployment in Aneka Clouds; it is a
lightweight software layer designed to host services and interact with the
underlying operating system and hardware.
• The services hosted by the Aneka container fall into three major categories:
• Fabric Services
• Foundation Services
• Application Services
• This service stack resides on top of the Platform Abstraction Layer (PAL)
(refer to Fig. 5.2), which represents the interface to the underlying operating
system and hardware.
• The PAL provides a uniform view of the software and hardware environment in
which the container is running.
From the Ground Up: Platform Abstraction Layer
• In a cloud environment, each operating system has a different file system
organization and stores system information differently.
• The Platform Abstraction Layer (PAL) addresses this heterogeneity across
operating systems and provides the container with a uniform interface for
accessing the relevant information; thus the rest of the container can operate
without modification on any supported platform.
• The PAL provides the following features:
• Uniform and platform-independent implementation interface for accessing the hosting platform
• Uniform access to extended and additional properties of the hosting platform
• Uniform and platform-independent access to remote nodes
• Uniform and platform-independent management interfaces
• The PAL is a small layer of software that comprises a detection engine, which
automatically configures the container at boot time with the platform-specific
component needed to access this information, and an implementation of the
abstraction layer for the Windows, Linux, and Mac OS X operating systems.
• The PAL exposes the following data about the hosting node:
• Number of cores, frequency, and CPU usage
• Memory size and usage
• Aggregate available disk space
• Network addresses and devices attached to the node
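The idea of a detection engine behind one uniform interface can be sketched as follows. This is an illustrative Python sketch, not the real Aneka PAL (which is a .NET component); the class and method names are hypothetical.

```python
import os
import platform

class PlatformAbstractionLayer:
    """Illustrative PAL: detect the host platform once at startup and
    expose node properties through one uniform, OS-independent interface."""

    def __init__(self):
        # Detection engine: pick the platform-specific backend at boot time.
        self.os_name = platform.system()   # e.g. 'Windows', 'Linux', 'Darwin'

    def node_info(self):
        # Uniform view of the hosting environment, whatever the OS.
        return {
            "os": self.os_name,
            "architecture": platform.machine(),
            "cores": os.cpu_count(),
        }

pal = PlatformAbstractionLayer()
info = pal.node_info()
```

Callers such as the container or the Heartbeat Service would read `node_info()` without ever branching on the operating system themselves.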
Fabric services
• Fabric Services define the lowest level of the software stack representing
the Aneka Container.
• They provide access to the Resource-provisioning subsystem and to the
Monitoring facilities implemented in Aneka.
• Resource-provisioning services are in charge of dynamically providing new
nodes on demand by relying on virtualization technologies
• Monitoring services allow for hardware profiling and implement a basic
monitoring infrastructure that can be used by all the services installed in
the container
• The two services of Fabric class are:
o Profiling and monitoring
o Resource management
Profiling and monitoring
• Profiling and monitoring facilities are mostly exposed through the following services:
o Heartbeat
o Monitoring
o Reporting
✓ The Heartbeat Service makes the information collected through the PAL available.
✓ The Monitoring and Reporting Services implement a generic infrastructure for monitoring
the activity of any service in the Aneka Cloud.
• Heartbeat Functions in detail
• The Heartbeat Service periodically collects the dynamic performance information
about the node and publishes this information to the membership service in the
Aneka Cloud
• It collects basic information about memory, disk space, CPU, and operating system
• These data are collected by the index node of the Cloud, which makes them available
to the reservation and scheduling services so that they can optimize for a
heterogeneous infrastructure.
• Heartbeat works with a specific component, called Node Resolver, which is in
charge of collecting these data.
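The collect-and-publish cycle described above can be sketched as follows. This is a toy Python illustration of the mechanism, not the Aneka API; the resolver's return values are made-up sample numbers.

```python
class MembershipService:
    """Toy stand-in for the index node's membership service: it simply
    stores the latest heartbeat record received from each node."""
    def __init__(self):
        self.records = {}

    def publish(self, node_id, data):
        self.records[node_id] = data


class HeartbeatService:
    """Illustrative heartbeat: a node resolver collects the dynamic
    performance data and each beat publishes it to the membership service."""
    def __init__(self, node_id, resolver, membership):
        self.node_id = node_id
        self.resolver = resolver      # component in charge of collecting data
        self.membership = membership

    def beat(self):                   # in Aneka this runs periodically
        self.membership.publish(self.node_id, self.resolver())


def node_resolver():
    # Stand-in for PAL-backed collection of memory, disk, CPU and OS data.
    return {"cpu_usage": 0.12, "free_memory_mb": 2048, "free_disk_mb": 51200}


index = MembershipService()
HeartbeatService("worker-01", node_resolver, index).beat()
```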
Reporting & Monitoring Functions in detail
• The Reporting Service manages the store for monitored data and makes
them accessible to other services for analysis purposes.
• On each node, an instance of the Monitoring Service acts as a gateway to
the Reporting Service and forwards to it all the monitored data that has
been collected on the node
Many built-in services use this channel to provide information; the
important built-in services are:
• The Membership Catalogue service tracks the performance information of nodes.
• The Execution Service monitors several time intervals for the execution of jobs.
• The Scheduling Service tracks the state transitions of jobs.
• The Storage Service monitors and obtains information about data transfer such as
upload and download times, file names, and sizes.
• The Resource Provisioning Service tracks the provisioning and life-time information
of virtual nodes
Resource management
• Aneka provides a collection of services that are in charge of managing resources.
• These are:
o Index Service (or Membership Catalogue)
o Resource Service
o Resource Provisioning Service
Membership Catalogue features
• The Membership Catalogue is Aneka’s fundamental component for resource
management
• It keeps track of the basic node information for all the nodes that are connected or
disconnected.
• It implements the basic services of a directory service, in which services can be
searched using attributes such as names and nodes.
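The directory-service behavior can be sketched as a register-and-search structure. This is an illustrative Python sketch; the attribute names used here are hypothetical, not the Membership Catalogue's actual schema.

```python
class MembershipCatalogue:
    """Illustrative directory service: tracks entries for connected and
    disconnected nodes and supports attribute-based search."""
    def __init__(self):
        self.entries = []

    def register(self, **attributes):
        self.entries.append(attributes)

    def search(self, **criteria):
        # Return every entry whose attributes match all given criteria.
        return [e for e in self.entries
                if all(e.get(k) == v for k, v in criteria.items())]


catalogue = MembershipCatalogue()
catalogue.register(service="StorageService", node="node-02", connected=True)
catalogue.register(service="ExecutionService", node="node-03", connected=False)
hits = catalogue.search(service="StorageService")
```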
Resource Provisioning Service Features
• The resource provisioning infrastructure built into Aneka is mainly concentrated in
the Resource Provisioning Service.
• It includes all the operations that are needed for provisioning virtual instances
(Providing virtual instances as needed by users).
• The implementation of the service is based on the idea of resource pools.
• A resource pool abstracts the interaction with a specific IaaS provider by exposing a
common interface so that all the pools can be managed uniformly.
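The resource-pool idea, one common interface hiding each IaaS provider's specific API, can be sketched as follows. This is an illustrative Python sketch under assumed names; real pools would call provider APIs instead of returning placeholder ids.

```python
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    """Common interface that abstracts the interaction with a specific
    IaaS provider, so all pools can be managed uniformly."""
    @abstractmethod
    def provision(self, count):
        ...

class EC2Pool(ResourcePool):
    def provision(self, count):
        # A real pool would call the provider's API here.
        return [f"ec2-instance-{i}" for i in range(count)]

class PrivatePool(ResourcePool):
    def provision(self, count):
        return [f"local-vm-{i}" for i in range(count)]

# The provisioning service treats every pool the same way.
pools = [EC2Pool(), PrivatePool()]
nodes = [vm for pool in pools for vm in pool.provision(2)]
```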
Foundation services
• Foundation Services are related to the logical management of the
distributed system built on top of the infrastructure and provide
supporting services for the execution of distributed applications.
• These services cover:
• Storage management for applications
• Accounting, billing and resource pricing
• Resource reservation
Storage management
Aneka offers two different facilities for storage management:
• A centralized file storage, which is mostly used for the execution of compute-
intensive applications,
• A distributed file system, which is more suitable for the execution of data-
intensive applications.
• The requirements of the two types of applications are rather
different.
• Compute intensive applications mostly require powerful processors
and do not have high demands in terms of storage, which in many cases
is used to store small files that are easily transferred from one node to
another. Here, a centralized storage node is sufficient to store data
• Centralized storage is implemented through and managed by Aneka’s Storage
Service
• The protocols for centralized storage management are implemented through the
concept of a file channel, which consists of a File Channel Controller and File
Channel Handlers.
• The File Channel Controller is a server component, whereas a File Channel Handler is a
client component that allows browsing, downloading, and uploading of files.
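The controller/handler split can be sketched as follows. This is a toy Python illustration with an in-memory store, not Aneka's actual file-channel protocol; the class and method names are assumptions.

```python
class FileChannelController:
    """Server-side component: owns the central store on the storage node."""
    def __init__(self):
        self.store = {}

class FileChannelHandler:
    """Client-side component: lets applications browse, upload and
    download files held by the controller."""
    def __init__(self, controller):
        self.controller = controller

    def upload(self, name, data):
        self.controller.store[name] = data

    def download(self, name):
        return self.controller.store[name]

    def browse(self):
        return sorted(self.controller.store)


controller = FileChannelController()
handler = FileChannelHandler(controller)
handler.upload("input.txt", b"task parameters")
```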
• In contrast, data-intensive applications are characterized by large data files
(gigabytes, terabytes, or even petabytes), while the processing power required by
tasks is comparatively modest.
• A distributed file system stores data by using all the nodes belonging
to the cloud.
• Data-intensive applications are therefore supported by means of a distributed file
system; the Google File System is a well-known example.
• Typical characteristics of the Google File System are:
• Files are huge by traditional standards (multi-gigabytes).
• Files are modified by appending new data rather than rewriting existing data.
• There are two kinds of major workloads: large streaming reads and small random reads.
• It is more important to have a sustained bandwidth than a low latency.
Accounting, billing, and resource pricing
• Accounting Services keep track of the status of applications in the Aneka Cloud.
The information collected for accounting is primarily related to infrastructure
usage and application execution
• Billing is another important feature of accounting.
• Billing is important since Aneka is a multitenant cloud programming platform in which the
execution of applications can involve provisioning additional resources from commercial
providers.
• Aneka Billing Service provides detailed information about each user’s usage of resources, with
the associated costs.
• Resource pricing assigns prices to the different types of resources/nodes
provided to subscribers: powerful resources are priced higher, and less capable
resources are priced lower.
• Two internal services used by accounting and billing are the Accounting Service and
the Reporting Service.
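The metering-and-billing idea can be sketched with per-class prices applied to usage records. This is an illustrative Python sketch; the node classes, rates, and record format are made up for the example.

```python
# Hypothetical hourly prices: powerful node classes cost more than basic ones.
PRICE_PER_HOUR = {"large": 0.40, "small": 0.10}

def bill(usage_records):
    """Sum each user's cost from (user, node_class, hours) metering records."""
    totals = {}
    for user, node_class, hours in usage_records:
        totals[user] = totals.get(user, 0.0) + PRICE_PER_HOUR[node_class] * hours
    return totals

# Example: alice used a large node for 10 h and a small one for 4 h.
invoices = bill([("alice", "large", 10), ("alice", "small", 4), ("bob", "small", 20)])
```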
Resource reservation
• Resource Reservation supports the execution of distributed
applications and allows for reserving resources for exclusive use by
specific applications.
• Two types of services are used to build resource reservation:
• The Resource Reservation
• The Allocation Service
• The Resource Reservation service keeps track of all the reserved time slots in the
Aneka Cloud and provides a unified view of the system.
• The Allocation Service is installed on each node that features execution
services and manages the database of information regarding the
allocated slots on the local node.
• The different Reservation Service implementations supported by the Aneka
Cloud are:
• Basic Reservation: Features the basic capability to reserve execution slots on
nodes and implements the alternate offers protocol, which provides alternative
options in case the initial reservation requests cannot be satisfied.
• Libra Reservation: Represents a variation of the previous implementation that
features the ability to price nodes differently according to their hardware
capabilities.
• Relay Reservation: This implementation is useful in integration scenarios in
which Aneka operates in an intercloud environment.
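The alternate offers protocol mentioned for Basic Reservation can be sketched as follows: when the requested slot is taken, the service answers with an alternative instead of a plain failure. This is an illustrative Python sketch, not the actual protocol; slot values and the next-free-slot policy are assumptions.

```python
class BasicReservation:
    """Illustrative alternate-offers protocol over a set of execution slots."""
    def __init__(self, slots):
        self.free = sorted(slots)          # available execution time slots

    def reserve(self, wanted):
        if wanted in self.free:
            self.free.remove(wanted)
            return ("granted", wanted)
        # Alternate offer: earliest free slot after the requested one, if any.
        later = [s for s in self.free if s > wanted]
        return ("offer", later[0]) if later else ("rejected", None)


svc = BasicReservation(slots=[9, 10, 11])
first = svc.reserve(10)    # slot 10 is available
second = svc.reserve(10)   # slot 10 is taken now; expect an alternative
```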
Application services
• Application Services manage the execution of applications and
constitute a layer that differentiates according to the specific
programming model used for developing distributed applications on top
of Aneka
• Two types of services are:
1. The Scheduling Service :
• Scheduling Services are in charge of planning the execution of distributed
applications on top of Aneka and governing the allocation of jobs composing an
application to nodes.
• Common tasks that are performed by the scheduling component are the
following:
• Job to node mapping
• Rescheduling of failed jobs
• Job status monitoring
• Application status monitoring
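Two of these tasks, job-to-node mapping and rescheduling of failed jobs, can be sketched as follows. This is a deliberately simple round-robin illustration in Python, not Aneka's actual scheduling algorithm; all names are hypothetical.

```python
import itertools

class SchedulingService:
    """Illustrative scheduler: round-robin job-to-node mapping plus
    rescheduling of failed jobs onto a different node."""
    def __init__(self, nodes):
        self.nodes = itertools.cycle(nodes)
        self.assignments = {}

    def map_job(self, job):
        node = next(self.nodes)
        self.assignments[job] = node
        return node

    def reschedule(self, job, failed_node):
        # Move the failed job along the rotation, skipping the failed node.
        node = next(self.nodes)
        if node == failed_node:
            node = next(self.nodes)
        self.assignments[job] = node
        return node


sched = SchedulingService(["node-A", "node-B"])
sched.map_job("job-1")                    # lands on node-A
sched.map_job("job-2")                    # lands on node-B
moved = sched.reschedule("job-1", "node-A")   # job-1 failed on node-A
```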
2. The Execution Service
• Execution Services control the execution of single jobs that compose applications. They
are in-charge of setting up the runtime environment hosting the execution of jobs.
• Some of the common operations that apply across the whole range of supported models are:
• Unpacking the jobs received from the scheduler
• Retrieval of input files required for job execution
• Sandboxed execution of jobs
• Submission of output files at the end of execution
• Execution failure management (i.e., capturing sufficient contextual information
to identify the nature of the failure)
• Performance monitoring
• Packing jobs and sending them back to the scheduler
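The pipeline of operations above can be sketched end to end. This is an illustrative Python sketch only: the job format is invented, the "sandbox" is a plain function call, and storage is an in-memory dictionary.

```python
def execute_job(packed_job, storage, scheduler_outbox):
    """Illustrative execution-service pipeline following the steps above:
    unpack, fetch inputs, run, store outputs, handle failures, report back."""
    job = dict(packed_job)                                     # 1. unpack
    inputs = {f: storage.get(f) for f in job["input_files"]}   # 2. retrieve inputs
    try:
        result = job["run"](inputs)                            # 3. execution (sandbox stand-in)
        storage[job["output_file"]] = result                   # 4. submit output files
        status = "completed"
    except Exception as exc:                                   # 5. failure management
        status, result = "failed", repr(exc)
    scheduler_outbox.append({"job": job["id"], "status": status})  # 7. report to scheduler
    return status


storage = {"numbers.txt": [1, 2, 3]}
outbox = []
job = {"id": "j1", "input_files": ["numbers.txt"],
       "output_file": "sum.txt", "run": lambda ins: sum(ins["numbers.txt"])}
state = execute_job(job, storage, outbox)
```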


• The various Programming Models Supported by Execution Services of Aneka Cloud
are:
1. Task Model. This model provides support for independent "bag of tasks"
applications and many-task computing. An application is modeled as a
collection of tasks that are independent of each other and whose execution can be
sequenced in any order.
2. Thread Model. This model provides an extension to the classical multithreaded
programming to a distributed infrastructure and uses the abstraction of Thread to
wrap a method that is executed remotely.
3. MapReduce Model. This is an implementation of MapReduce, as proposed by
Google, on top of Aneka.
4. Parameter Sweep Model. This model is a specialized form of Task Model for
applications that can be described by a template task whose instances are created by
generating different combinations of parameters.
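The Parameter Sweep Model's template-plus-combinations idea can be sketched directly. This is an illustrative Python sketch; the template string and parameter names are invented for the example.

```python
import itertools

def parameter_sweep(template, **parameter_ranges):
    """Illustrative Parameter Sweep Model: instantiate one task per
    combination of parameter values applied to a template task."""
    names = sorted(parameter_ranges)
    tasks = []
    for values in itertools.product(*(parameter_ranges[n] for n in names)):
        tasks.append(template.format(**dict(zip(names, values))))
    return tasks

# 3 angles x 2 zoom levels -> 6 task instances from one template.
tasks = parameter_sweep("render --angle {angle} --zoom {zoom}",
                        angle=[0, 90, 180], zoom=[1, 2])
```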
Building Aneka clouds
• An Aneka Cloud can be described from two viewpoints:
1. Infrastructure Organization
2. Logical Organization
• The infrastructure organization of an Aneka Cloud is given in Figure 5.3:
• The working mechanism of this model:
• It contains the Aneka Repository, Administrative Console, Aneka containers, and node
managers as major components.
• The Management Console manages multiple repositories and selects the one that best suits
the specific deployment.
• A repository provides storage for all the libraries required to lay out and install the basic
Aneka platform; images of the required software are installed into particular Aneka containers
through node managers, using protocols such as FTP and HTTP.
• A number of node managers and Aneka containers are deployed across the cloud platform to
provision the necessary services. The Aneka node manager is also known as the Aneka daemon.
• The collection of resulting containers identifies the final Aneka Cloud.
Core Components
• Aneka Repository: This acts as a central storage and management point for Aneka.
It holds resources, applications, and configurations. It can be accessed via standard
protocols like HTTP and File Share. Updates to the cloud infrastructure, such as new
applications or policies, are pushed from or managed through this repository.
• Management Console: This is the user interface or control panel through which
administrators and users interact with the Aneka cloud. From here, they can deploy
applications, monitor resources, manage users, and perform other administrative
tasks. It is connected to the cloud, allowing it to send commands and receive
information.
• Aneka Containers: These are the fundamental units for running applications within
the Aneka cloud. Think of them as isolated environments where applications and
their dependencies are packaged and executed. They are managed by the Node
Manager.
• Node Manager: Each physical or virtual machine participating in the Aneka cloud
(referred to as a "node") has a Node Manager. Its responsibilities include:
• Managing and coordinating Aneka Containers on its respective node.
• Communicating with the rest of the cloud infrastructure (e.g., the Management Console, other
Node Managers) to receive tasks, report status, and share resources.
• Allocating resources to Aneka Containers.
In essence, Aneka provides a framework for:
• Distributed Computing: Applications can be broken down and run across
multiple interconnected machines.
• Resource Management: It efficiently allocates and manages computing
resources (CPU, memory, storage) for running applications.
• Scalability: The architecture allows for easy addition or removal of nodes to
scale computing power up or down as needed.
• Application Deployment and Orchestration: It simplifies the deployment and
management of applications in a cloud environment through the use of
containers and node managers.
Logical organization
The logical organization of Aneka Clouds can be very diverse, since it strongly depends
on the configuration selected for each of the container instances belonging to the
Cloud.
Figure 5.4 portrays a scenario with a master-worker configuration and separate
nodes for storage.
The master node comprises the following services:
o Index Service (master copy)
o Heartbeat Service
o Logging Service
o Reservation Service
o Resource Provisioning Service
o Accounting Service
o Reporting and Monitoring Service
o Scheduling Services for the supported programming models
Here the Logging, Heartbeat, and Monitoring Services are considered
mandatory services in all the block diagrams.
• Similarly, the worker node comprises the following services:
o Index Service
o Execution Service
o Allocation Service
o The mandatory services (Logging, Heartbeat, and Monitoring)
• The storage node comprises:
o Index Service
o Storage Service
o The mandatory services (Logging, Heartbeat, and Monitoring)
• In addition, all nodes are registered with the master node and
transparently refer to any failover partner in the case of a
high-availability configuration.
1. Master Node:

•Primary Role: The Master Node is the central control point of the Aneka cloud. It holds the "master"
copies of several critical services, indicated by "Index (master)".
•Key Services on Master Node:
•Scheduling: Manages the distribution of tasks and workloads to available worker nodes.
•Accounting: Tracks resource usage and potentially billing information.
•Reporting: Generates reports on system status, performance, and resource utilization.
•Reservation: Handles the reservation of resources for future tasks or specific users.
•Provisioning: Manages the creation and configuration of new resources (e.g., virtual
machines, containers).
•Mandatory: Logging service, Heartbeat service and Monitoring service are core services that are
essential for the operation of the Aneka cloud.
•Failover Mechanism: If the primary Master Node fails, a backup can take over its role to ensure
continuous operation of the cloud. This implies a hot standby or active-passive setup where another
node (possibly another Master Node in a ready state or a designated failover server) can assume
the master role.
2. Storage Nodes:
• Primary Role: Storage Nodes are responsible for storing data within the Aneka
cloud. They hold "slave" indexes related to storage.
• Key Services on Storage Node:
• Storage: Provides the actual storage capacity for data and files.
• Mandatory: These are essential services for data storage and retrieval.
• Relationship with Master Node: They are connected to the Master Node, implying
that the Master Node coordinates data access and management with the Storage
Nodes.
• Redundancy: The diagram shows multiple Storage Nodes, suggesting a distributed
storage system that offers redundancy and scalability for data.
3. Worker Nodes:
•Primary Role: Worker Nodes are where the actual computational tasks and applications run. They
hold "slave" indexes related to execution and resource allocation.
•Key Services on Worker Node:
•Execution: Executes the applications and tasks assigned by the Master Node.
•Allocation: Manages the allocation of local resources (CPU, memory) to the running tasks.
•Mandatory: These are core services required for task execution on the worker node.
•Relationship with Master Node: They are directly connected to the Master Node, which
dispatches tasks to them for execution.
•Scalability: The presence of multiple Worker Nodes (indicated by the dashed line representing
more nodes) highlights the scalability of the Aneka cloud. You can add more worker nodes to
increase the processing capacity.
Overall Flow and Interactions:
1. The Master Node acts as the brain, receiving requests (e.g., from a management
console, though not shown in this diagram) and determining how to fulfill them.
2. It uses its Scheduling service to distribute tasks to available Worker Nodes.
3. Worker Nodes execute these tasks, using their Allocation service to manage
their local resources.
4. Data needed by tasks running on Worker Nodes, or results generated by them,
are stored and retrieved from Storage Nodes, coordinated by the Master Node.
5. All critical services on the Master Node are designed with Failover to ensure high
availability of the cloud.
Aneka Cloud Deployment Models
• All the general cloud deployment models like Private cloud deployment mode, Public cloud
deployment mode and Hybrid Cloud deployment mode are applicable to Aneka Clouds also.
Private cloud deployment mode
• A private deployment mode is mostly constituted by local physical resources and
infrastructure management software providing access to a local pool of nodes, which might
be virtualized.
• Figure 5.5 shows a common deployment for a private Aneka Cloud. This deployment is
acceptable for a scenario in which the workload of the system is predictable and a local
virtual machine manager can easily address excess capacity demand.
• Most of the Aneka nodes are physical machines with a long lifetime and a static
configuration; they generally do not need to be reconfigured often. The different nature of
the machines harnessed in a private environment allows for specific policies on resource
management and usage that can be accomplished by means of the Reservation Service.
• For example, desktop machines that are used during the day for office automation can be
exploited outside the standard working hours to execute distributed applications. Workstation
clusters might have some specific legacy software that is required for supporting the
execution of applications and should be executed with special requirements.
Note: In the master node: Resource Provisioning, Application Management & Scheduling and Resource Reservation are
the primary services.
Public cloud deployment mode
• Public Cloud deployment mode features the installation of Aneka master and worker
nodes over a completely virtualized infrastructure that is hosted on the infrastructure
of one or more resource providers such as Amazon EC2 or GoGrid.
• Figure 5.6 provides an overview of this scenario.
• The deployment is generally contained within the infrastructure boundaries of a single
IaaS provider. The reasons for this are to minimize the data transfer between different
providers, which is generally priced at a higher cost, and to have better network
performance. In this scenario it is possible to deploy an Aneka Cloud composed of
only one node and to completely leverage dynamic provisioning to elastically scale
the infrastructure on demand. A fundamental role is played by the Resource
Provisioning Service, which can be configured with different images and templates to
instantiate.
• Other important services that have to be included in the master node are the
Accounting and Reporting Services. These provide details about resource utilization
by users and applications and are fundamental in a multitenant Cloud where users
are billed according to their consumption of Cloud capabilities.
Note: Reporting, Billing, Accounting, Resource Provisioning and Application Management & Scheduling are the
primary services in master node
Hybrid cloud deployment mode
• The hybrid deployment model constitutes the most common
deployment of Aneka. In many cases, there is an existing computing
infrastructure that can be leveraged to address the computing needs
of applications. This infrastructure will constitute the static
deployment of Aneka that can be elastically scaled on demand when
additional resources are required.
• An overview of this deployment is presented in Figure 5.7. This
scenario constitutes the most complete deployment for Aneka that is
able to leverage all the capabilities of the framework:
• Dynamic Resource Provisioning
• Resource Reservation
• Workload Partitioning (Scheduling)
• Accounting, Monitoring, and Reporting
• In a hybrid scenario, heterogeneous resources can be used for different
purposes. As discussed in the case of a private cloud deployment,
desktop machines can be reserved for low-priority workload outside the
common working hours. The majority of the applications will be executed
on workstations and clusters, which are the nodes that are constantly
connected to the Aneka Cloud. Any additional computing capability
demand can be primarily addressed by the local virtualization facilities,
and if more computing power is required, it is possible to leverage
external IaaS providers.
Cloud programming and management
• Aneka's core purpose is to be a scalable middleware for distributed
applications.
• It offers a comprehensive SDK for developers (APIs) and powerful
Management Console tools for administrators, simplifying application
development and management.
• Application development and management constitute the two major
features that are exposed to developers and system administrators.
• Aneka provides developers with a comprehensive and extensible set of
APIs and administrators with powerful and intuitive management tools.
• The APIs for development are mostly concentrated in the Aneka SDK;
management tools are exposed through the Management Console
Aneka SDK
Aneka's SDK offers APIs for three main development areas:
1. Application Development: Utilizing existing Aneka features and middleware services.
2. New Programming Models: Implementing custom programming paradigms.
3. New Services: Integrating new functionalities into the Aneka Cloud.
• The SDK supports these through its Application Model (for applications and new
programming models) and Service Model (for general service infrastructure).
Application Model
The Aneka Application Model provides a fundamental abstraction for developing
and executing distributed applications in the Cloud.
This means developers don't have to deal with the low-level complexities of
distributed systems directly. Instead, they interact with higher-level programming
interfaces that make it easier to write cloud applications.
Abstraction for Developers: It offers a simplified view for developers, allowing
them to focus on the application logic rather than the underlying complexities of
distributed execution in the cloud.
Key Components and Their Roles:
1. Programming Model:
• Abstraction (Developer View): This identifies the view developers have. It defines
the set of APIs (Application Programming Interfaces) that are common to all
programming models supported by Aneka. This allows developers to represent and
program their distributed applications.
• Runtime Support (Execution View): This identifies the underlying mechanisms
that support the execution of these programs on top of Aneka. The programming
model is specialized based on the needs and features of each specific
programming model (e.g., Task, Thread, MapReduce).
2. ApplicationBase <M> Class:
•This is a crucial concept. Every distributed application running on Aneka is an
instance of ApplicationBase<M>.
•M in ApplicationBase<M> is a type identifier that specifies the particular type of
"Application Manager" used to control and manage that application. This indicates
a close relationship between the application's definition and its execution
manager.
•Developer's View: The ApplicationBase class (and its specializations) constitutes
the developer's view of a distributed application on Aneka. It provides the
interfaces and constructs developers use to define their cloud applications.
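The way `ApplicationBase<M>` ties an application to its manager type can be sketched with generics. The real class is part of Aneka's .NET SDK; this is only an illustrative Python analogue with invented method names.

```python
from typing import Generic, TypeVar

class ApplicationManager:
    """Stand-in for the internal component that controls and monitors execution."""
    def submit(self, application):
        return f"{type(self).__name__} scheduling {application.name}"

class TaskManager(ApplicationManager):
    """Hypothetical manager specialized for the Task Model."""

M = TypeVar("M", bound=ApplicationManager)

class ApplicationBase(Generic[M]):
    """Developer's view of a distributed application; the type parameter M
    binds it to the manager that controls its execution."""
    def __init__(self, name: str, manager: M):
        self.name = name
        self.manager = manager

    def submit(self):
        # The developer calls submit(); the bound manager does the real work.
        return self.manager.submit(self)


app: ApplicationBase[TaskManager] = ApplicationBase("bag-of-tasks", TaskManager())
message = app.submit()
```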
3. Application Managers:
•Internal Components: Unlike the ApplicationBase which is for developers, Application
Managers are internal components of Aneka.
•Control and Monitoring: Their primary role is to interact with Aneka Clouds to
control and monitor the execution of the applications. They handle tasks like
scheduling, resource allocation, fault tolerance, and progress tracking behind the
scenes.
•Specialization: Application Managers are the "first element of specialization of the
model" and vary according to the specific programming model used. This means
there isn't one generic Application Manager; instead, there's likely a TaskManager,
ThreadManager, MapReduceManager, etc., each optimized for its respective
programming model.
•Applications as a Set of Tasks: Regardless of the specific programming
model, an Aneka distributed application is fundamentally viewed as a collection
of tasks. The collective execution of these tasks defines the overall application
execution on the Cloud.
•Two Main Application Categories: Aneka further categorizes applications into
two types:
1. Applications whose tasks are generated by the user, and
2. Applications whose tasks are generated by the runtime infrastructure.

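The binding between ApplicationBase<M> and its Application Manager described above can be sketched with generics. All names and signatures below are illustrative stand-ins, loosely mirroring Aneka's C#/.NET style, not its actual API:

```java
// Hypothetical sketch of the ApplicationBase<M> pattern (names illustrative).
interface IApplicationManager {
    void submitApplication();   // interacts with the Aneka Cloud
    String queryStatus();       // monitors execution progress
}

// Every distributed application is parameterized by its manager type M,
// binding the developer-facing definition to its execution manager.
class ApplicationBase<M extends IApplicationManager> {
    private final M manager;

    ApplicationBase(M manager) { this.manager = manager; }

    M getManager() { return manager; }

    // The developer starts the application; control passes to the manager,
    // which handles scheduling, monitoring, and so on behind the scenes.
    void start() { manager.submitApplication(); }
}
```

The type parameter expresses the "close relationship" the text mentions: each application class is compiled against exactly one manager type.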
Category One: Tasks Generated by the Users
•Description: This is described as the "most common" category. Applications in this category
are composed of units of work (tasks) that are submitted by the user. The runtime infrastructure
(Aneka) is responsible for managing and executing these tasks.
•Supported Programming Models: This category serves as a reference for several
programming models supported by Aneka, including:
•Task Model: For applications broken down into independent tasks.
•Thread Model: For applications using Aneka's distributed thread concept.
•Parameter Sweep Model: For applications that run the same code multiple times with
different input parameters.
•WorkUnit Class: Applications belonging to this category are a collection of WorkUnit objects.
•Each WorkUnit can have its own input and output files.
•The transfer of these files is transparently managed by the runtime (meaning the
developer doesn't have to explicitly handle file transfers between nodes).
•Specific WorkUnit Types:
•For the Task Model, the specific WorkUnit class is AnekaTask.
•For the Thread Model, the specific WorkUnit class is AnekaThread.
•Inheritance and Manager Interface:
•All applications falling into this category inherit from or are instances of
AnekaApplication<W, M>.
•W here represents the specific type of WorkUnit class being used (e.g., AnekaTask
or AnekaThread).
•M identifies the type of Application Manager, which implements the
IManualApplicationManager interface. This implies that these managers provide
methods for manual control over tasks/threads (e.g., adding, starting, monitoring).
Category Two: Units of Work Generated by the Runtime Infrastructure (Rather Than the
User)
•Description: This category covers scenarios where the runtime infrastructure itself generates
the units of work, not the user directly. This is typically for higher-level programming models
where the user defines the overall problem, and Aneka internally breaks it down into executable
units.
•Key Example: MapReduce: The primary example given is MapReduce.
•In MapReduce, the application units (Map and Reduce phases) are not defined by a common
WorkUnit class.
•Instead, developers define their distributed applications in terms of two functions: map and
reduce.
•The MapReduceApplication class provides a specific interface for developers to specify the
Mapper<K,V> and Reducer<K,V> types (where K and V represent key-value pairs for
input/output) and the required input files.
•No Common Base Types for WorkUnit: Because the runtime generates the units of work, there
isn't a single common WorkUnit class like in the first category. Different programming models in this
category might have different requirements and expose different interfaces.
•Manager Interface: For this category, applications are instances of ApplicationBase<M>, where M
implements IAutoApplicationManager. This suggests that the managers for these types of
applications handle the units of work more automatically, given the higher-level abstraction.
It clearly differentiates between:
•"Manual" models: Where developers have fine-grained control over individual tasks or threads.
•"Auto" models: Where developers work at a higher level of abstraction, and the Aneka runtime
handles the internal decomposition and execution of work units automatically (like in MapReduce).
This distinction highlights Aneka's flexibility in accommodating different types of distributed
applications.
1. Manual Category:
•Description: This refers to applications where the developer explicitly
defines and submits the individual "units of work" (tasks or threads) to Aneka.
•Base Application Type:
•AnekaApplication<W,M>: This is the application class that developers
interact with. W signifies the specific type of WorkUnit (e.g., AnekaTask,
AnekaThread), and M refers to the manager type.
•ManualApplicationManager<W>: This indicates that the manager used for
this category implements an interface for manual control over the work units
(e.g., IManualApplicationManager as mentioned in the text).
•Work Units?: Yes. This category explicitly deals with discrete WorkUnit
objects (like AnekaTask or AnekaThread).
•Programming Models: This category supports:
•Task Model: For independent, concurrent tasks.
•Thread Model: For distributed execution of thread-like constructs.
•Parameter Sweep Model: For running the same task with different
inputs.
2. Auto Category:
•Description: In this category, the Aneka runtime environment automatically
generates the underlying units of work based on a higher-level application
definition provided by the user. The user doesn't explicitly define individual
tasks.
•Base Application Type:
•ApplicationBase<M>: This is the general base application type. M refers to a
manager type that implements an interface for automatic management (e.g.,
IAutoApplicationManager as mentioned in the text).
•Work Units?: No. Because the runtime generates the units internally, there
isn't a direct "Work Unit" class that the developer interacts with in the same
way as the "Manual" category.
•Programming Models:
•MapReduce: This is the primary example given, where the user defines
map and reduce functions, and Aneka handles the distribution and
execution of the Map and Reduce phases.
Service Model
The Aneka Service Model defines the fundamental requirements for implementing and hosting
services within an Aneka Cloud environment.
1. The IService Interface and Container:
•IService Interface: The core requirement for any service hosted in an Aneka container is that it
must be compliant with the IService interface. This interface defines the contract that all Aneka
services must adhere to.
•Container: The "Container" is the runtime environment provided by Aneka that hosts these
services. It provides the necessary infrastructure for services to run and interact within the cloud.
2. Core Functionalities of an IService (and services hosted in a container):
The IService interface (and by extension, services hosted in the container) provides capabilities
related to:
•Name and status: Services have identifiable names and report their current operational status.
•Control operations: Standard life cycle methods are provided for managing the service:
•Start(): To begin service operation.
•Stop(): To halt service operation.
•Pause(): To temporarily suspend service activity.
•Continue(): To resume a paused service.
•Message handling: Services handle communication through the HandleMessage() method. This is
where the core logic for processing incoming requests and messages resides.
Types of Services and Their Interactions:
•Specific Services: Aneka includes specific services that directly interact with end-users.
Examples given are:
•Resource Provisioning Services: Responsible for allocating and managing cloud
resources.
•Resource Reservation Services: For reserving specific resources in advance.
•Service Life Cycle and Message Processing:
•Control Operations vs. Core Logic: The Start, Stop, Pause, and Continue operations are
primarily used by the container to manage the service's life cycle.
•Core Logic in HandleMessage(): The actual business logic and message-processing
functionalities of the service are contained within the HandleMessage() method.
•Request-Driven: Each operation requested to a service is triggered by a specific message.
•Callback for Results: Results from these operations are communicated back to the caller
through messages (a callback mechanism).
Service Life Cycle (As described in Figure 5.9)
It describes the different states a service instance can transition through:
•Unknown or Initialized State: A service instance begins in one of these states. This occurs when the
service is created by invoking its constructor during the configuration of the container.
•Start() Method: Once the container starts, it iteratively calls the Start() method on each service.
•Starting State: As a result of Start(), the service enters a Starting state during its setup process.
•Running State: After successful setup, the service transitions to the Running state. This is the normal
operational state where the service is active and can process messages.
•Error Handling: If an exception occurs during the Starting or Running phase, the service is expected
to fall back to the Unknown state, signaling an error.
•Pause() and Continue() for Pausing:
•Calling Pause() moves the service into a Pausing state, eventually reaching a Paused state.
•Calling Continue() moves the service into a Resuming state, restoring its activity to the Running
state.
•Note: Not all services necessarily support pausing/continuing. The current Aneka framework
implementation might not feature any service with these capabilities.
•Stop() for Shutdown:
•When the container shuts down, or Stop() is called, services move first into a Stopping state.
•Stopped State: Eventually, they reach the Stopped state, where all resources initially allocated
have been released.
Simplifying Service Implementation:
•ServiceBase Class: Aneka provides a ServiceBase class as a base for simplifying service
implementation. This means developers can extend ServiceBase rather than directly implementing
IService from scratch.

•Implementation of Basic Properties: It handles the implementation of the fundamental
properties exposed by the IService interface (e.g., Name, Status).

•Control Operations with Logging & State Control: It provides a proper implementation for the
control operations (Start, Stop, Pause, Continue) along with:
•Logging capabilities: To record events and debugging information.
•State control: To manage the transitions between service states (Unknown, Starting, Running,
Paused, Stopped).

•Built-in Infrastructure for Service-Specific Clients: ServiceBase includes infrastructure
that helps in automatically generating or supporting clients specifically tailored to interact
with a particular service. This simplifies client-side development for consuming Aneka services.

•Support for Service Monitoring: It provides built-in mechanisms to monitor the service's health
and performance.
MANAGEMENT TOOLS
Aneka is a pure PaaS implementation and requires virtual or physical hardware on which to be
deployed. Aneka's management layer also includes capabilities for managing services and
applications running in the Aneka Cloud.
• Infrastructure management
Aneka leverages virtual and physical hardware in order to deploy Aneka Clouds. Virtual hardware is
generally managed by means of the Resource Provisioning Service, which acquires resources on demand
according to the needs of applications, while physical hardware is directly managed by the Administrative
Console by leveraging the Aneka management API of the PAL.
• Platform management
The creation of Clouds is orchestrated by deploying a collection of services on the physical infrastructure
that allows the installation and management of containers. A collection of connected containers defines
the platform on top of which applications are executed. The features available for platform management are
mostly concerned with the logical organization and structure of Aneka Clouds.
• Application management
Applications represent the user's contribution to the Cloud. This is an important feature in a cloud computing
scenario in which users are billed for their resource usage. Aneka exposes capabilities for giving summary
and detailed information about application execution and resource utilization.
