Distributed Computing Systems
Velibor Božić, General Hospital Koprivnica
March 2023
Distributed computing systems take several forms, including the client-server, peer-to-peer, grid, and cloud models discussed below, as well as the following:
Edge computing. This involves processing data on devices at the edge of a network, such as
IoT devices, instead of sending it to a central server or cloud for processing.
Fog computing. This is similar to edge computing, but involves a network of devices that
work together to process data, rather than individual devices acting independently.
Distributed databases. This involves distributing data across multiple nodes in a network,
providing redundancy and improving performance.
Each of these types of distributed computing systems has its own strengths and
weaknesses, and is suited to different applications and use cases.
Client-server architecture
Client-server architecture is a distributed computing model where a central server provides
services to multiple clients over a network. In this architecture, the client is a user or an
application that requests resources or services from the server. The server, on the other
hand, provides the requested services or resources to the client.
Here are the key features of client-server architecture:
Client: A client is a user or an application that requests resources or services from the
server. Clients can be desktop applications, web browsers, mobile apps, or any other
device that can connect to the network.
Server: A server is a central computer or a group of computers that provide services or
resources to the clients. Servers can be physical or virtual machines that have the
capability to store and process data, and perform complex computations.
Network: A network is a connection that links the client and the server, allowing them to
communicate and exchange data. The network can be a local area network (LAN), wide
area network (WAN), or the internet.
Protocol: A protocol is a set of rules and standards that define how clients and servers
communicate with each other over the network. Some common protocols used in client-
server architecture include HTTP, TCP/IP, and FTP.
Scalability: Client-server architecture is highly scalable, as more clients or server
capacity can be added to the network as needed with minimal disruption to the rest of
the system.
Security: Security is a critical concern in client-server architecture, as the server must
ensure that only authorized clients can access the resources or services that it provides.
Various security measures, such as authentication and encryption, are implemented
to protect the system from unauthorized access. Client-server architecture is widely used in
a variety of applications, including web applications, database systems, email servers, and
file servers.
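To make the request/response pattern concrete, here is a minimal sketch in Python using
only the standard library. The address, port, and echo behaviour are illustrative
assumptions rather than part of any particular protocol, and a real server would loop
over many clients instead of serving just one.

import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # illustrative address and port
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                           # signal that the server is accepting
        conn, _ = srv.accept()                # wait for one client
        with conn:
            request = conn.recv(1024)         # read the client's request
            conn.sendall(b"echo: " + request) # serve the response

def client():
    ready.wait()                              # don't connect before the server is up
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello server")          # request a service
        print(cli.recv(1024).decode())        # -> echo: hello server

t = threading.Thread(target=server)
t.start()
client()
t.join()

The same pattern underlies higher-level protocols such as HTTP, where the request and
response carry structured headers and bodies.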
Peer-to-peer architecture
Peer-to-peer (P2P) architecture is a distributed computing model in which multiple
computers, or peers, are connected to each other to share resources and data without the
need for a central server. In a P2P architecture, all peers are equal and can act as both a
client and a server.
Here are the key features of peer-to-peer architecture:
Peers: Peers are computers or devices that are connected to each other over a network,
such as the internet. Peers in a P2P network can share resources, such as processing
power, storage space, and bandwidth, with each other.
Network: A P2P network is a decentralized network where peers communicate with each
other directly, without the need for a central server or any other intermediary.
Protocol: P2P networks typically use protocols that enable peers to discover and
communicate with each other. Some common P2P protocols include BitTorrent,
Gnutella, and Napster.
Scalability: P2P architecture is highly scalable, as the addition of more peers to the
network increases its capacity and performance.
Security: Security is a challenge in P2P networks, as they are vulnerable to various
attacks, such as distributed denial of service (DDoS) attacks and malware infections. P2P
networks often employ various security measures, such as encryption and
authentication, to protect against such attacks.
Peer-to-peer architecture is widely used in various applications, including file sharing,
content distribution, and decentralized communication networks. Some popular
examples of P2P applications include BitTorrent for file sharing, Skype for voice and
video communication, and Bitcoin for cryptocurrency transactions.
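The dual client/server role of a peer can be shown with a toy, entirely in-process
overlay. The flooding search below is Gnutella-like in spirit only; real P2P protocols
add networking, message TTLs, and timeouts, and the class, peer, and file names here are
invented for illustration.

# Toy P2P overlay: every peer can both answer lookups (server role)
# and issue them to its neighbours (client role).
class Peer:
    def __init__(self, name):
        self.name = name
        self.files = set()
        self.neighbours = []

    def connect(self, other):
        self.neighbours.append(other)
        other.neighbours.append(self)

    def search(self, filename, visited=None):
        """Flood the query through the overlay until some peer has the file."""
        visited = visited or set()
        if self.name in visited:
            return None
        visited.add(self.name)
        if filename in self.files:        # acting as a server
            return self.name
        for peer in self.neighbours:      # acting as a client
            hit = peer.search(filename, visited)
            if hit:
                return hit
        return None

a, b, c = Peer("a"), Peer("b"), Peer("c")
a.connect(b); b.connect(c)
c.files.add("song.mp3")
print(a.search("song.mp3"))               # -> "c"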
Grid computing
Grid computing is a distributed computing model in which multiple computers or
resources work together to solve a large problem or perform a complex task. In a grid
computing environment, resources such as computing power, storage, and applications are
distributed across multiple locations and connected to a network. Grid computing is often
used in scientific research, engineering, and data-intensive applications.
Here are the key features of grid computing:
Resources: Grid computing relies on a network of resources, including computers,
storage devices, and software applications, that work together to solve a problem. These
resources can be located anywhere in the world and can be added or removed from the
network as needed.
Middleware: Middleware is software that enables the resources to communicate and
work together. It provides a layer of abstraction between the resources and the
applications that use them, allowing them to operate seamlessly across different
platforms and operating systems.
Network: A grid computing network is a collection of resources that are connected to
each other over a network. These networks can be local or global and can span across
multiple organizations or institutions.
Task scheduling: Grid computing uses task scheduling algorithms to distribute the
workload across the network of resources. These algorithms ensure that the resources
are utilized efficiently and that the workload is balanced across the network.
Scalability: Grid computing is highly scalable, as it allows the addition of more resources
to the network as needed, without affecting the overall performance of the system.
Security: Security is a critical concern in grid computing, as the resources may be
distributed across multiple locations and organizations. Various security measures, such
as encryption and authentication, are implemented to protect the system from
unauthorized access.
Grid computing is widely used in various applications, including scientific research,
weather forecasting, financial modeling, and drug discovery. Well-known examples include
the Worldwide LHC Computing Grid, which processes data from particle collisions at CERN's
Large Hadron Collider, and Folding@home, which uses a distributed network of volunteer
computers to study protein folding and other molecular dynamics.
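To illustrate the task-scheduling idea, here is a schematic greedy scheduler that assigns
each task to the currently least-loaded resource. Task names, costs, and node names are
invented; real grid middleware also weighs data locality, queue policies, and resource
heterogeneity.

import heapq

def schedule(tasks, resources):
    """Greedy least-loaded assignment; returns {resource: [tasks]}."""
    heap = [(0.0, name) for name in resources]   # (accumulated load, resource)
    heapq.heapify(heap)
    plan = {name: [] for name in resources}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):  # biggest tasks first
        load, name = heapq.heappop(heap)          # take the least-loaded node
        plan[name].append(task)
        heapq.heappush(heap, (load + cost, name)) # account for the new work
    return plan

tasks = [("render", 8.0), ("simulate", 5.0), ("index", 2.0), ("sum", 1.0)]
print(schedule(tasks, ["node-1", "node-2"]))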
Cloud computing
Cloud computing is a distributed computing model in which users access computing
resources, such as servers, storage, applications, and services, over the internet. In a cloud
computing environment, users do not need to own or maintain their own computing
infrastructure, but instead can use shared resources that are provided by a third-party
service provider.
Here are the key features of cloud computing:
On-demand self-service: Users can provision computing resources, such as servers,
storage, and applications, on demand, without the need for human interaction with the
service provider.
Broad network access: Users can access cloud computing resources over the internet,
from any device with an internet connection.
Resource pooling: Cloud computing resources are pooled together, and users share the
same physical infrastructure. Resources can be dynamically allocated to meet the needs
of users, without requiring users to know the location or details of the underlying
infrastructure.
Rapid elasticity: Cloud computing resources can be rapidly scaled up or down to meet
changing demands. This enables users to adjust the amount of resources they use
without long provisioning delays or downtime.
Measured service: Cloud computing resources are monitored and metered, and users are
only charged for the amount of resources they consume.
Service models: Cloud computing services can be divided into three main service models:
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service
(SaaS). These service models provide different levels of control and management over
the underlying infrastructure.
Cloud computing is widely used in various applications, including data storage and
processing, application hosting, web hosting, and software development. Some popular
examples of cloud computing include Amazon Web Services (AWS), Microsoft Azure, and
Google Cloud Platform (GCP).
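Rapid elasticity is typically driven by a simple control rule. The following hypothetical
autoscaler sizes a server pool from observed utilisation; the target utilisation and pool
bounds are invented and do not correspond to any provider's API.

# Hypothetical autoscaling rule: grow or shrink the pool so that average
# utilisation moves toward the target. All numbers are illustrative.
def desired_instances(current, utilisation, target=0.6, lo=1, hi=20):
    wanted = round(current * utilisation / target) or lo
    return max(lo, min(hi, wanted))   # clamp to the allowed pool size

print(desired_instances(current=4, utilisation=0.9))   # -> 6 (scale out)
print(desired_instances(current=4, utilisation=0.3))   # -> 2 (scale in)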
Edge computing
Edge computing is a distributed computing model in which data processing is
performed at the edge of the network, closer to where the data is generated, rather than in
a centralized cloud or data center. In an edge computing environment, data is processed and
analyzed locally, on devices such as routers, gateways, and IoT devices, before being sent to
a central location for further processing or storage.
Here are the key features of edge computing:
Local processing: Edge computing enables data to be processed locally, on devices
located closer to where the data is generated. This reduces latency and bandwidth
usage, and can improve application performance and responsiveness.
Decentralized architecture: Edge computing is a decentralized architecture that
distributes computing resources closer to where they are needed. This can reduce the
load on centralized cloud or data center resources and improve overall system scalability
and resiliency.
Edge devices: Edge computing relies on a variety of edge devices, such as routers,
gateways, and IoT devices, that are capable of processing and analyzing data at the edge
of the network. These devices are typically low-power and resource-constrained, and
may operate in harsh or remote environments.
Data security and privacy: Edge computing can improve data security and privacy by
keeping sensitive data closer to its source, and reducing the need to transmit sensitive
data over the network.
Real-time processing: Edge computing enables real-time data processing and decision-
making, by reducing latency and enabling data to be processed closer to where it is
generated.
Edge computing is widely used in various applications, including industrial automation,
smart cities, healthcare, and autonomous vehicles. Some popular examples of edge
computing include autonomous vehicles that use edge devices to process sensor data in
real-time, and smart cities that use edge devices to monitor traffic and optimize traffic
flow.
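The local-processing idea can be sketched as aggregating raw readings on the device and
shipping only a compact summary (plus any locally detected alarm) upstream. The field
names and the alarm threshold below are illustrative assumptions.

import statistics

def summarise(window):
    """Reduce a window of raw readings to the statistics the cloud needs."""
    return {
        "count": len(window),
        "mean": statistics.fmean(window),
        "max": max(window),
        "alarm": any(v > 90.0 for v in window),  # decided locally, in real time
    }

readings = [71.2, 70.8, 93.5, 72.0]   # e.g. one second of temperature samples
print(summarise(readings))            # one small record instead of many samples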
Fog computing
Fog computing is a distributed computing model that extends the capabilities of
cloud computing to the edge of the network. It enables data processing and storage to be
distributed across a network of devices, from the edge to the cloud, and allows applications
and services to be executed at the most appropriate location in the network.
Here are the key features of fog computing:
Distributed architecture: Fog computing is a distributed architecture that distributes
computing resources across the network, from the edge to the cloud. This allows data
processing and storage to be performed closer to where it is generated, and reduces the
need to transmit data over the network.
Heterogeneous devices: Fog computing relies on a variety of devices, such as routers,
gateways, edge servers, and IoT devices, that are capable of processing and analyzing
data at the edge of the network. These devices are typically low-power and resource-
constrained, and may operate in harsh or remote environments.
Proximity: Fog computing takes advantage of the proximity of devices to the source of
data, to reduce latency and improve application performance. Data is processed and
analyzed at the edge of the network, and only relevant data is sent to the cloud for
further processing and analysis.
Resource management: Fog computing requires efficient resource management to
ensure that computing resources are available where they are needed, and that
applications and services are executed at the most appropriate location in the network.
This involves monitoring resource usage, optimizing resource allocation, and ensuring
service level agreements are met.
Security: Fog computing requires robust security mechanisms to protect data and
devices in the network. This includes encryption, authentication, access control, and
intrusion detection and prevention.
Fog computing is widely used in various applications, including industrial automation,
smart cities, healthcare, and smart homes. Some popular examples of fog computing include
smart homes that use edge devices to control lighting, heating, and security systems, and
industrial automation systems that use edge devices to monitor and control manufacturing
processes.
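The placement decision at the heart of fog computing can be sketched as choosing the tier
closest to the data source that satisfies a job's latency bound and capacity needs. The
tiers, latencies, and capacities below are invented for illustration.

TIERS = [  # ordered from closest to farthest from the data source
    {"name": "edge-gateway", "latency_ms": 5,   "free_cpus": 2},
    {"name": "fog-node",     "latency_ms": 20,  "free_cpus": 8},
    {"name": "cloud",        "latency_ms": 120, "free_cpus": 1000},
]

def place(job_cpus, max_latency_ms):
    """Run the job at the nearest tier that meets its constraints."""
    for tier in TIERS:
        if tier["latency_ms"] <= max_latency_ms and tier["free_cpus"] >= job_cpus:
            return tier["name"]
    return None  # no tier satisfies the job's requirements

print(place(job_cpus=4, max_latency_ms=50))    # -> "fog-node"
print(place(job_cpus=1, max_latency_ms=500))   # -> "edge-gateway"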
Distributed databases
A distributed database is a database system that stores data across multiple physical
locations, and enables data to be accessed and manipulated by multiple users and
applications simultaneously. Distributed databases are designed to improve data availability,
reliability, and scalability, and are used in various applications, such as e-commerce, banking,
healthcare, and telecommunications.
Here are some key features of distributed databases:
Data distribution: Distributed databases distribute data across multiple physical
locations, and replicate data to ensure availability and fault tolerance. Data can be
replicated either synchronously or asynchronously, depending on the application
requirements.
Transparency: Distributed databases provide a high level of transparency to users and
applications, so that they can access and manipulate data without being aware of its
physical location or replication status. This is achieved through a variety of mechanisms,
such as distributed query processing, distributed transaction management, and
distributed locking and concurrency control.
Consistency: Distributed databases provide consistency guarantees to ensure that data is
consistent across all replicas. This is achieved through a variety of mechanisms, such as
two-phase commit, distributed snapshotting, and conflict resolution algorithms.
Scalability: Distributed databases are designed to scale horizontally by adding more
nodes to the system as the workload increases. This allows the system to handle
increasing amounts of data and users, and enables the system to provide high availability
and performance.
Security: Distributed databases provide robust security mechanisms to protect data from
unauthorized access and manipulation. This includes encryption, access control,
authentication, and auditing.
Some popular examples of distributed databases include Apache Cassandra,
MongoDB, and Amazon DynamoDB. These databases are widely used in various applications,
such as e-commerce, social media, gaming, and financial services.
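The data-distribution idea can be sketched with hash-based placement plus replication,
loosely in the spirit of the partitioning used by systems such as Cassandra and DynamoDB,
though far simpler: the node names are invented, and virtual nodes, rebalancing, and
consistency protocols are omitted.

import hashlib

NODES = ["db-1", "db-2", "db-3", "db-4"]   # illustrative node names

def replicas(key, n=2):
    """Pick a primary node by hashing the key, plus n-1 successors as replicas."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    primary = h % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(n)]

print(replicas("user:42"))     # the two nodes that hold this row
print(replicas("order:7", 3))  # three-way replication for a critical table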
PROPORTIONAL GROWTH
Proportional growth is a type of growth in which the rate of increase is directly
proportional to the current size of the object or system. In other words, the larger the object
or system, the faster it grows. This type of growth is also known as exponential growth, and
is often described by an exponential function.
Proportional growth can be seen in many natural and man-made systems, such as
populations of organisms, investments, and technology. For example, the growth of a
population of bacteria is proportional to the number of bacteria already present, as each
bacterium can reproduce and create more bacteria. Similarly, the growth of an investment
can be proportional to the amount of money invested, as the interest earned on the
investment increases with the size of the investment.
Proportional growth can be described mathematically using an exponential function of
the form:
y = a * e^(b * x)
where y is the size of the object or system after time x, a is the initial size, and b is
the continuous growth rate. The constant e is the base of the natural logarithm; growth
that compounds continuously at rate b takes this form.
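A short worked example of this formula, assuming an initial population of 100 bacteria
(a) and a continuous growth rate of 20% per hour (b):

import math

def grow(a, b, x):
    return a * math.exp(b * x)   # y = a * e^(b * x)

print(round(grow(100, 0.20, 1)))    # ~122 bacteria after one hour
print(round(grow(100, 0.20, 24)))   # ~12151 bacteria after one day
doubling_time = math.log(2) / 0.20  # time for the population to double
print(round(doubling_time, 2))      # ~3.47 hours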
Proportional growth can lead to rapid increases in size or quantity, which can have
both positive and negative consequences. For example, the rapid growth of a population can
lead to overcrowding, resource depletion, and environmental degradation. Similarly, the
rapid growth of a technology can lead to increased productivity and efficiency, but can also
result in job losses and social upheaval.
ADAPTABILITY, INSENSITIVITY TO ERRORS, AND QUALITY MAINTENANCE
PROCEDURES IN THE CONTEXT OF DISTRIBUTED SYSTEMS
Adaptability, insensitivity to errors, and quality maintenance procedures are
important considerations in the design and implementation of distributed systems. Here's
how each of these factors plays a role:
Adaptability. Distributed systems need to be adaptable to changing environments and
requirements. This means that they must be designed to handle changes in the number of
nodes, network topologies, and workload characteristics. For example, a distributed system
may need to automatically adjust its resource allocation based on changes in network traffic,
or migrate services to new nodes in response to failures or maintenance activities.
Insensitivity to errors. Distributed systems must be able to operate reliably in the face of
errors and failures. This means that they need to be designed with fault-tolerant
mechanisms that can detect and recover from errors, without disrupting the overall system
operation. For example, a distributed database may need to use replication and consensus
protocols to ensure that data is consistent across nodes, even in the event of node failures
or network partitions.
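One of the simplest building blocks for such fault tolerance is retrying a failed remote
call with exponential backoff. In the sketch below the transient failure is simulated
rather than produced by a real network call.

import random
import time

def call_with_retries(op, attempts=5, base_delay=0.1):
    """Retry a flaky operation, doubling the wait after each failure."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                              # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # back off: 0.1s, 0.2s, 0.4s...

def flaky_op():
    if random.random() < 0.5:                      # simulate a transient failure
        raise ConnectionError("node unreachable")
    return "ok"

print(call_with_retries(flaky_op))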
Quality maintenance procedures. Distributed systems must be designed with quality
maintenance procedures to ensure that they operate reliably and efficiently over time. This
includes regular testing, monitoring, and maintenance activities to identify and fix issues
before they become critical. It also includes the use of performance metrics and analysis
tools to optimize system performance and identify potential issues.
The success of a distributed system depends on its ability to be adaptable, insensitive
to errors, and maintain quality over time. By considering these factors in the design and
implementation of a distributed system, developers can ensure that the system operates
reliably and efficiently, even in the face of changing environments and requirements.
VARIOUS EXPANDABLE SYSTEMS IN THE CONTEXT OF DISTRIBUTED SYSTEMS
In the context of distributed systems, there are various expandable systems that can
be used to improve scalability, availability, and performance. Here are some examples:
Load balancers: Load balancers distribute incoming network traffic across multiple nodes
in a distributed system, improving availability and reducing the load on individual nodes.
Load balancers can be either hardware- or software-based, and can be configured to use
various load balancing algorithms, such as round-robin or least connections (see the
sketch at the end of this section).
Clustered file systems: Clustered file systems allow multiple nodes in a distributed
system to share a common file system, improving scalability and data access
performance. Clustered file systems can be either block-based or object-based, and can
be configured to use various data access protocols, such as NFS or SMB.
Distributed databases: Distributed databases store data across multiple nodes in a
distributed system, improving availability, scalability, and performance. Distributed
databases can be either SQL or NoSQL-based, and can use various data replication and
consistency mechanisms, such as sharding or consensus protocols.
Content delivery networks (CDNs): CDNs distribute static and dynamic content across
multiple edge nodes in a distributed system, improving content delivery performance
and reducing the load on origin servers. CDNs can be commercial or open source, and
can use various caching and routing mechanisms to optimize content delivery.
Microservices architecture: Microservices architecture decomposes a monolithic
application into small, independent services that can be deployed and scaled
independently in a distributed system. Microservices architecture improves flexibility,
agility, and scalability, and can use various deployment and orchestration tools, such as
Docker or Kubernetes.
These expandable systems can be combined and configured in various ways to
achieve different scalability, availability, and performance goals in a distributed system. The
choice of expandable systems depends on the specific requirements and constraints of the
application, as well as the available resources and expertise.
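For concreteness, here are minimal sketches of the two load-balancing policies named
above, round-robin and least connections. Backend names and connection counts are
invented, and a real balancer would track live connection state rather than a static
dictionary.

import itertools

backends = ["app-1", "app-2", "app-3"]

# Round-robin: hand out backends in a fixed rotation.
rr = itertools.cycle(backends)
print([next(rr) for _ in range(5)])    # app-1, app-2, app-3, app-1, app-2

# Least connections: send each request to the currently least-busy backend.
active = {"app-1": 12, "app-2": 3, "app-3": 7}   # invented connection counts

def least_connections():
    choice = min(active, key=active.get)
    active[choice] += 1                # the new request now occupies that backend
    return choice

print(least_connections())             # -> app-2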
SERVICE-BASED ARCHITECTURES
Service-based architectures are an approach to designing distributed systems that
rely on loosely coupled, independently deployable services that communicate through
standardized protocols. In a service-based architecture, each service provides a specific
function or capability, and can be developed, tested, deployed, and scaled independently of
other services in the system. Here are some key characteristics of service-based
architectures:
Service discovery. Services in a service-based architecture can be discovered dynamically at
runtime, using various mechanisms such as service registries, service meshes, or DNS-based
discovery. This allows services to be added or removed from the system without requiring
manual configuration changes.
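A toy, in-process registry illustrates the register/discover cycle; production systems
use dedicated components such as Consul, etcd, ZooKeeper, or DNS rather than a
dictionary, and the service names and addresses below are invented.

registry = {}   # service name -> list of live instance addresses

def register(name, address):
    registry.setdefault(name, []).append(address)

def deregister(name, address):
    registry[name].remove(address)

def discover(name):
    instances = registry.get(name)
    if not instances:
        raise LookupError(f"no live instance of {name!r}")
    return instances[0]   # a real client would also load-balance here

register("payments", "10.0.0.5:8080")
register("payments", "10.0.0.6:8080")
print(discover("payments"))            # -> 10.0.0.5:8080
deregister("payments", "10.0.0.5:8080")
print(discover("payments"))            # -> 10.0.0.6:8080, no reconfiguration needed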
Service composition. Services in a service-based architecture can be combined to form
composite applications, using various composition mechanisms such as service
choreography or orchestration. Service composition allows complex applications to be built
from smaller, more manageable services.
Standardized protocols. Services in a service-based architecture communicate using
standardized protocols such as REST, SOAP, or messaging. Standardized protocols enable
services written in different languages, and running on different platforms, to
interoperate. A service-based system is not automatically distributed, however: if all of
its services end up running on the same physical machine or cluster, it may not be able
to provide the same level of scalability as a true distributed system, since it is
limited by the resources of the underlying hardware.
MECHANISMS OF DISTRIBUTION, COOPERATION AND COMPETITION
In the context of distributed systems, mechanisms of distribution, cooperation, and
competition refer to the ways in which components of the system work together to achieve
a common goal, while also competing for shared resources and adapting to changing
conditions.
Mechanisms of distribution: These are the mechanisms that enable components of a
distributed system to work together in a coordinated manner, despite being geographically
dispersed and running on different machines. Examples of such mechanisms include
message passing, remote procedure calls, and shared memory.
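As a minimal illustration of message passing, the sketch below uses a thread-safe queue
in place of a network transport; a real system would serialize the messages and carry
them over sockets or a message broker.

import queue
import threading

mailbox = queue.Queue()   # stands in for a network channel between two nodes

def producer():
    for i in range(3):
        mailbox.put({"task_id": i, "payload": i * i})   # send a message
    mailbox.put(None)                                   # end-of-stream marker

def consumer():
    while (msg := mailbox.get()) is not None:           # receive messages
        print("processed", msg)

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()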
Mechanisms of cooperation: These are the mechanisms that enable components of a
distributed system to work together towards a common goal, despite their individual
interests and priorities. Examples of such mechanisms include consensus algorithms,
distributed transaction management, and load balancing.
Mechanisms of competition: These are the mechanisms that enable components of a
distributed system to compete for shared resources such as CPU, memory, and network
bandwidth. Examples of such mechanisms include resource allocation algorithms,
contention resolution protocols, and congestion control.
The success of a distributed system depends on the effective use of these
mechanisms to achieve the desired level of distribution, cooperation, and competition. For
example, in a distributed database system, mechanisms of distribution such as replication
and partitioning enable data to be stored and accessed from multiple machines, while
mechanisms of cooperation such as distributed transaction management ensure that
transactions are executed consistently across the system. Meanwhile, mechanisms of
competition such as resource allocation algorithms ensure that each component of the
system has fair access to shared resources such as disk I/O and network bandwidth.
Effective use of these mechanisms requires careful design and implementation of the
distributed system architecture, as well as careful consideration of the trade-offs between
performance, scalability, fault tolerance, and other key system attributes.
STUDY EXAMPLES
Here are brief descriptions of several representative distributed systems research
projects:
UofT NUMAchine. UofT NUMAchine is a distributed shared-memory system
developed by researchers at the University of Toronto. It is designed to provide a
high-performance computing platform for scientific applications that require large
amounts of shared memory. The system is built using commodity hardware and
software, and uses a combination of hardware and software techniques to provide
scalable and efficient shared memory access.
AT&T GeoPlex. AT&T GeoPlex is a distributed computing system that uses
geographically distributed resources to provide high-performance computing
services. The system is designed to support a wide range of applications, including
scientific simulations, data analysis, and visualization. It uses a combination of
parallel processing, distributed storage, and network technologies to provide high
performance and scalability.
MidArc. MidArc is a middleware system developed by researchers at the University
of Illinois. It is designed to provide a flexible and scalable platform for building
distributed applications. The system supports a wide range of programming models
and provides a range of services, including message passing, distributed objects, and
group communication.
SoftLab. SoftLab is a software development environment designed for building
distributed applications. It is developed by researchers at the University of California,
Berkeley, and is designed to provide a range of tools and services for building and
testing distributed software systems. The system includes a range of libraries and
tools for building distributed systems, as well as a testing framework for evaluating
system performance and scalability.
These research projects represent some of the many ways in which researchers are
exploring the design, implementation, and evaluation of distributed systems. They highlight
the challenges and opportunities of building distributed systems, and provide insights into
the various approaches and techniques that can be used to build efficient, scalable, and
resilient distributed systems.