Conference Paper
Abstract—Nowadays, edge computing and cloud computing are increasingly combined to produce computing solutions that are more effective, scalable, and adaptable. The proliferation of cloud infrastructures has drastically increased energy consumption, leading to the need for more research on optimizing energy efficiency for sustainable and efficient systems with reduced operational costs. In addition, the edge computing paradigm has gained wide attention during the last few decades due to the rise of Internet of Things (IoT) devices, the emergence of applications that require low latency, and the widespread demand for environmentally friendly computing. Moreover, lowering cloud-edge systems' energy footprints is essential for fostering sustainability in light of growing concerns about environmental effects. This research presents a comprehensive review of strategies aimed at optimizing energy efficiency in cloud architectures designed for edge computing environments. Various strategies, including workload optimization, resource allocation, virtualization technologies, and adaptive scaling methods, have been identified as techniques that are widely utilized by contemporary research in reducing energy consumption while maintaining high performance. Furthermore, the paper investigates how advancements in machine learning and AI can be leveraged to dynamically manage resource distribution and energy-efficient enhancements in cloud-edge systems. In addition, challenges to the approaches for energy optimization have been discussed in detail to further provide insights for future research. The conducted comprehensive review provides valuable insights for future research in the edge computing paradigm, particularly emphasizing the critical importance of enhancing energy efficiency in these systems.

Keywords—Cloud computing; edge computing; energy efficiency; sustainability

I. INTRODUCTION

Cloud computing is a concept for providing computer resources such as servers, storage, databases, networking, software, and analytics over the internet. Users can obtain physical infrastructure and data centers on-demand from cloud service providers, negating the need to own and manage them and providing more flexibility, scalability, and cost-efficiency. Without the upfront expenses and hassles of maintaining traditional IT systems, cloud computing allows businesses and individuals to scale their IT needs as required, covering everything from data storage to sophisticated application development [1], [2], [3].

The adaptability of cloud technology is one of its most intriguing features. Businesses can simply reallocate their computing resources to meet changing demands, allowing for ongoing scaling in response to changing business requirements. Global accessibility is yet another fundamental benefit of cloud computing. Moreover, cloud services reduce geographical barriers, enabling real-time collaboration across several regions and supporting the delivery of services to a global clientele with less idle time [2].

Edge computing is a distributed computing paradigm that moves data storage and computation closer to the point of demand, which is generally at the network's edge, close to the data-generating source. Edge computing reduces latency, bandwidth utilization, and reaction times by processing data locally on IoT sensors, gateways, or edge servers rather than depending on a centralized cloud [4], [5]. This method is more efficient for time-sensitive applications where rapid decisions are essential, such as real-time analytics, industrial automation, and driverless cars. Edge computing improves performance, security, and dependability in a variety of applications with the use of decentralized computing [6], [7].

Cloud computing has revolutionized the way businesses and organizations operate, offering scalable, on-demand computing resources. Nevertheless, the growth of cloud infrastructures has increased their energy consumption, which has become a major concern. Therefore, the demand for energy-efficient cloud architectures has become a critical area of research due to environmental concerns, operational costs, and the need for sustainability. In addition, the proliferation of edge computing, where data is processed closer to the data source to reduce latency and bandwidth usage, adds another dimension to cloud energy optimization [8], [9].

The rapid growth of energy consumption by edge-cloud computing infrastructures has been a significant focus of contemporary research, considering the operational costs and environmental concerns associated with the sustainability of cloud computing architectures. Among the approaches for a sustainable edge computing paradigm, dynamic resource allocation, AI-driven energy optimization, energy-aware scheduling and load balancing, green data centres, and virtualization and containerization are attracting significant attention from researchers [10], [11], [12], [13].

Nevertheless, the exponential growth of cloud computing infrastructures has resulted in an increased demand for further research on optimizing energy-efficient cloud architectures in order to move towards a sustainable edge computing paradigm. In addition, maintaining performance and scalability while reducing energy consumption has also been a contested topic that is still being researched [14], [15]. Therefore, further research that enhances green cloud architectures for edge computing is required for the improved sustainability of the cloud computing paradigm.
The rest of the paper is laid out as follows. Section II discusses approaches that have been widely considered for achieving sustainability in the edge computing paradigm, focusing on energy efficiency, while Section III broadly explores recent related work that makes use of different strategies for energy efficiency in the edge computing paradigm. Section IV provides a discussion of the conducted review work along with challenges and further insights into energy-efficient approaches in the edge computing domain. Finally, Section V concludes the research findings along with future directions for the conducted research work.

II. STRATEGIES FOR ENERGY OPTIMIZATION

The following strategies were identified as the main approaches that are widely focused on in research aiming at energy optimization in edge computing. Nevertheless, the techniques that are utilized for edge computing, and even in the cloud computing paradigm, in contemporary research are a combination of the approaches discussed below.

A. Energy-Aware Scheduling and Load Balancing

Energy-aware scheduling and load balancing in edge computing aim to optimize resource allocation while reducing energy consumption. In the context of an edge environment, resources are distributed across a number of small, geographically separated devices, requiring efficient scheduling algorithms to manage tasks without overloading nodes or consuming excessive amounts of energy [16], [17]. Energy-aware scheduling approaches prioritize tasks based on their energy demands and resource requirements, ensuring that high-priority tasks are executed on energy-efficient nodes while low-priority tasks are deferred to less critical resources. By optimizing task scheduling, systems can minimize energy wastage, extend device lifespans, and ensure seamless service delivery [18], [19].

Load balancing complements energy-aware scheduling by distributing computational workloads evenly across available nodes to prevent resource overuse and reduce energy consumption. In edge computing, load balancing techniques must consider not only the computational capacity of each node but also its current energy state. Dynamic load balancing ensures that tasks are allocated to nodes with sufficient energy reserves, minimizing the risk of node failure due to energy depletion. These approaches improve the efficiency and sustainability of edge computing setups, enabling them to support more applications and services without overwhelming the energy resources of distributed nodes [20], [21].
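To make the above concrete, the following minimal Python sketch (not drawn from any of the surveyed papers) illustrates one possible greedy energy-aware scheduler: each task is assigned to the node that has sufficient capacity and energy reserve and is expected to spend the least energy on it. All node and task attributes, the per-node energy model, and the parameter names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_capacity: float      # available CPU units
    energy_reserve: float    # remaining energy budget (J)
    joules_per_unit: float   # assumed energy cost per unit of work

@dataclass
class Task:
    name: str
    cpu_demand: float        # CPU units required
    work_units: float        # abstract amount of work

def schedule(tasks, nodes):
    """Greedy energy-aware assignment: pick the feasible node with the
    lowest estimated energy cost, then update its capacity and reserve."""
    plan = {}
    # Higher-demand tasks first, so heavy work lands on efficient nodes early.
    for task in sorted(tasks, key=lambda t: t.cpu_demand, reverse=True):
        feasible = [n for n in nodes
                    if n.cpu_capacity >= task.cpu_demand
                    and n.energy_reserve >= task.work_units * n.joules_per_unit]
        if not feasible:
            plan[task.name] = None          # defer the task (no suitable node)
            continue
        best = min(feasible, key=lambda n: task.work_units * n.joules_per_unit)
        best.cpu_capacity -= task.cpu_demand
        best.energy_reserve -= task.work_units * best.joules_per_unit
        plan[task.name] = best.name
    return plan

if __name__ == "__main__":
    nodes = [Node("edge-1", 4, 500, 2.0), Node("edge-2", 8, 900, 1.2)]
    tasks = [Task("sense", 1, 50), Task("detect", 4, 120), Task("log", 1, 10)]
    print(schedule(tasks, nodes))
```

Real schedulers additionally account for deadlines, data transfer costs, and node reliability, but the same pattern of filtering feasible nodes and minimizing an energy estimate underlies many of the heuristics cited above.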
B. AI-Driven Energy Optimization

AI-driven energy optimization approaches that are utilized in edge computing incorporate artificial intelligence strategies to improve the energy efficiency of edge networks and devices, which are frequently decentralized and resource-constrained. AI algorithms, in particular deep learning and machine learning, are able to track and forecast edge device energy consumption and task trends [22]. By evaluating real-time data, AI systems are able to make decisions about how best to assign jobs to nodes, modify power settings, and dynamically manage resource utilization based on current needs. With this real-time optimization, energy conservation is achieved without compromising the necessary performance and service levels.

AI can also provide intelligent load balancing and scheduling, which optimizes the energy consumption of edge computing systems. AI models, for instance, can anticipate high-load periods and move workloads to nodes that use less energy or have unused resources. AI algorithms can also allow edge devices to transition into low-power modes while they are inactive or when processing demands drop, which cuts down unnecessary energy consumption. AI-driven energy optimization may dramatically increase the lifespan of edge devices, slash operating costs, and lessen the environmental effect of edge computing infrastructures by continually learning from and reacting to the system [23], [24].
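As a minimal, hedged illustration of this idea, the sketch below uses an exponentially weighted moving average as a stand-in for a learned load predictor and powers a node down only when the forecast utilization stays low. The threshold, smoothing factor, and the decision rule are assumptions for illustration; a real deployment would typically use a trained model (for example, an LSTM) and richer telemetry.

```python
def ewma_forecast(history, alpha=0.5):
    """Exponentially weighted moving average as a stand-in for a learned
    load predictor (a real system might use an LSTM or similar model)."""
    estimate = history[0]
    for load in history[1:]:
        estimate = alpha * load + (1 - alpha) * estimate
    return estimate

def choose_power_state(history, sleep_threshold=0.2):
    """Enter a low-power state only when the predicted utilization is low."""
    predicted = ewma_forecast(history)
    return "low-power" if predicted < sleep_threshold else "active"

if __name__ == "__main__":
    recent_utilization = [0.35, 0.22, 0.15, 0.10, 0.08]   # fraction of capacity
    print(choose_power_state(recent_utilization))          # -> "low-power"
```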
C. Virtualization and Containerization

Virtualization and containerization in edge computing play a significant role in improving the resource usage, flexibility, and scalability of edge networks. Several virtual machines (VMs) can operate on a single physical server owing to virtualization, which makes it possible to abstract hardware resources. This makes it possible to consolidate edge resources and execute various applications or services in separate environments, making the best use of the hardware that is available. Virtualization assists in the management of various workloads and offers the flexibility to dynamically scale resources in response to variations in demand in edge computing [25], [26]. In addition, it makes it possible to handle software upgrades and system maintenance effectively without interfering with ongoing services, which in turn increases system reliability.

Nevertheless, containerization, which bundles applications and their dependencies into containers, is a less complex substitute for virtualization. Since containers share the host operating system kernel, they are more efficient than VMs in terms of startup times and overhead. Moreover, containerization facilitates the quick deployment and scalability of applications among dispersed nodes in edge computing. Due to its high degree of portability, moving applications between different edge devices or environments is much simpler. This is especially critical in edge environments, which are dynamic and heterogeneous and have a wide range of device capabilities [27], [28], [29]. Therefore, containerization is well suited to the changing needs of edge computing, since it allows for quicker application development cycles, better system responsiveness, and more effective resource management.
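A common energy lever behind consolidation is simple bin packing: placing VMs or containers on as few hosts as possible so that the remaining hosts can be powered down. The sketch below shows a first-fit-decreasing heuristic under assumed single-dimension (CPU-only) demands; it is an illustration of the general technique, not a reproduction of any specific placement algorithm from the cited works.

```python
def consolidate(workloads, host_capacity):
    """First-fit-decreasing bin packing: place each workload (by CPU demand)
    on the first host with room, opening a new host only when necessary.
    Fewer active hosts generally means idle machines can be powered down."""
    hosts = []                                   # each host = list of (name, demand)
    for name, demand in sorted(workloads, key=lambda w: w[1], reverse=True):
        for host in hosts:
            if sum(d for _, d in host) + demand <= host_capacity:
                host.append((name, demand))
                break
        else:
            hosts.append([(name, demand)])       # no fit found: activate a new host
    return hosts

if __name__ == "__main__":
    vms = [("web", 2.0), ("db", 3.5), ("cache", 1.0), ("batch", 2.5), ("log", 0.5)]
    placement = consolidate(vms, host_capacity=4.0)
    print(f"{len(placement)} active hosts:", placement)
```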
D. Network Optimization

Network optimization in edge computing primarily aims to reduce latency, enhance bandwidth efficiency, and improve overall performance by bringing data processing and storage closer to end devices, such as IoT sensors or mobile users. By decentralizing computing power, edge computing reduces the amount of data that needs to be sent to central cloud data centers, minimizing delays in data transmission [30], [31]. Techniques like adaptive routing allow the network to choose the optimal path for data, considering real-time conditions such as congestion, which improves response times [32].

Moreover, Dynamic Voltage and Frequency Scaling (DVFS) is another approach to network optimization for energy efficiency; it reduces energy consumption when full performance is not needed by adjusting the power consumption of networking devices depending on real-time demands [33], [34]. Network Function Virtualization (NFV) is also widely researched as an efficient approach for network optimization for energy efficiency in edge computing, as it enables the operation of several network services on a single physical server, which eliminates the need for specialized hardware [35]. Adaptive transmission power control [36], [37] and the application of energy-efficient communication protocols [38], such as the enhanced energy efficiency features of 5G [39], are other approaches through which wireless networks can reduce overall power consumption. Therefore, the network optimization techniques discussed above can significantly lower the energy footprint of networks without sacrificing performance.
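The intuition behind DVFS can be illustrated with the commonly used approximation that dynamic power grows roughly with the cube of the clock frequency when voltage is scaled together with frequency. The sketch below picks the lowest available frequency level that still meets a task deadline; the constant k, the frequency levels, and the workload figures are purely illustrative assumptions.

```python
def pick_frequency(cycles, deadline_s, freq_levels_hz, k=1e-27):
    """DVFS illustration: choose the lowest frequency that still finishes
    `cycles` within `deadline_s`, then estimate energy with the common
    approximation P ~ k * f^3 (voltage assumed to scale with frequency)."""
    for f in sorted(freq_levels_hz):
        runtime = cycles / f
        if runtime <= deadline_s:
            energy = k * f**3 * runtime        # E = P * t
            return f, runtime, energy
    return None                                # deadline infeasible at any level

if __name__ == "__main__":
    levels = [0.8e9, 1.2e9, 1.6e9, 2.0e9]      # hypothetical frequency steps (Hz)
    result = pick_frequency(cycles=1.5e9, deadline_s=1.5, freq_levels_hz=levels)
    print(result)  # the lowest level meeting the deadline uses the least energy
```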
E. Green Data Centers

Edge data centers are typically smaller and distributed, which makes traditional data center energy efficiency techniques less applicable. Green data centers aim to reduce their environmental impact by utilizing sustainable methods such as renewable energy sources, advanced cooling systems, and energy-efficient hardware [40], [41]. The transition towards greener operations decreases carbon footprints and operational expenses, making these data centers a crucial element of sustainable IT infrastructure.

In contemporary research, beyond the shift towards renewable energy sources in data centers, which reduces dependency on the grid, automation techniques, low-power electrical equipment, and devices with extended thermal limits have also been incorporated in order to improve sustainability and move towards green data centers [42]. In addition, thermal energy harvesting, kinetic energy, and hybrid systems have also been adopted in data centers to improve resilience and contribute to more sustainable and energy-efficient infrastructures.
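Efficiency gains of this kind are commonly tracked with Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment [40]. The figures in the short sketch below are purely illustrative.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: 1.0 is ideal; typical values exceed it
    because cooling, lighting, and power distribution add overhead."""
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    # Illustrative numbers only: 1200 kWh drawn overall, 800 kWh reaching IT gear.
    print(round(pue(1200, 800), 2))   # -> 1.5
```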
III. RECENT WORK

The authors of study [43] have conducted a comprehensive analysis of energy consumption across various cloud-related architectures, including cloud, fog, and edge computing. It introduces a taxonomy that categorizes these architectures based on their characteristics, such as the number and role of data centers and their connectivity. The authors propose a generic energy model that accurately estimates and compares the energy consumption of these infrastructures, taking into account factors like cooling systems and network devices. The findings of this research work indicate that fully distributed architectures can consume between 14% and 25% less energy than centralized and partly distributed architectures, highlighting the importance of energy efficiency in the design and deployment of modern computing solutions. In summary, the study aims to provide a foundational framework for future research in energy consumption analysis within the evolving landscape of cloud computing technologies. However, while the proposed model effectively highlights architectural differences and provides insights into energy efficiency, it may benefit from more consideration of real-world variables, such as the uneven distribution of end users and the impact of varying application workloads on energy consumption. In addition, the reliance on existing simulators, which have their limitations, could affect the accuracy of the results.

Another group of researchers in study [13] have presented a novel framework that employs Deep Reinforcement Learning (DRL) to optimize workflow scheduling in edge-cloud computing environments, specifically targeting the challenges introduced by the proliferation of IoT devices. Traditional cloud architectures often struggle with the demands of IoT applications due to issues like high latency and limited bandwidth. This study aims to address these challenges by balancing the conflicting objectives of minimizing energy consumption and execution time while ensuring that workflow deadlines are met. The proposed DRL technique demonstrates significant improvements over baseline algorithms, achieving 56% better energy efficiency and 46% faster execution times. Key innovations include a hierarchical action space that distinguishes between edge and cloud nodes, as well as a multi-actor framework that enhances the learning process by allowing separate networks to manage task allocation. The results indicate that this approach is particularly effective for latency-sensitive applications, such as video surveillance, where efficient resource management is critical. Overall, the research highlights the potential of DRL in optimizing resource allocation and scheduling in edge-cloud environments, providing valuable insights for future advancements in this rapidly evolving field.

The researchers of study [11] address the crucial issue of resource allocation in cloud computing, particularly in the context of increasing energy consumption and performance demands on data centers. The authors propose a hybrid model that combines Genetic Algorithms (GA) and Random Forest (RF) techniques to optimize the allocation of VMs to physical machines (PMs). The GA is employed to generate an optimized training dataset that maps VMs to PMs, which is then utilized by the RF for classification and allocation tasks. This approach aims to minimize power consumption while maximizing resource utilization and maintaining load balance across the data center. The effectiveness of the proposed model is evaluated using real-time workload traces from PlanetLab, showing significant improvements in energy efficiency and execution time compared to traditional methods. The study contributes to the existing body of knowledge by demonstrating the potential of hybrid optimization techniques in enhancing cloud infrastructure management. However, the authors acknowledge the need for further research to assess the model's adaptability to diverse workloads and its scalability in heterogeneous cloud environments.
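Study [11] does not publish code; the following is only a rough sketch of the general pattern it describes, in which optimizer-labelled examples (VM features mapped to a chosen PM) train a Random Forest that can then place new VMs quickly. A trivial heuristic stands in for the GA stage, scikit-learn is assumed to be available, and none of the names or numbers reflect the authors' actual implementation.

```python
import random
from sklearn.ensemble import RandomForestClassifier

def toy_ga_label(vm, n_pms):
    """Stand-in for the GA stage: a crude heuristic (scaled demand modulo the
    PM count, plus noise) produces labelled (VM features -> PM) examples."""
    cpu, mem = vm
    return (int(cpu * 10 + mem * 5) + random.randint(0, 1)) % n_pms

if __name__ == "__main__":
    random.seed(0)
    n_pms = 4
    train_vms = [(random.random(), random.random()) for _ in range(200)]
    labels = [toy_ga_label(vm, n_pms) for vm in train_vms]

    # The RF learns the mapping produced by the (here, simulated) GA stage
    # and can then place new VMs without rerunning the optimizer.
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(train_vms, labels)
    new_vms = [(0.3, 0.7), (0.9, 0.1)]
    print(model.predict(new_vms))   # predicted PM index for each new VM
```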
The research in [10] introduces an Energy-Efficient Task Offloading Strategy (ETOS) aimed at enhancing energy efficiency in Mobile Edge Computing (MEC) environments for resource-intensive mobile applications. The study formulates the task offloading problem as a non-linear optimization challenge, proposing a hybrid approach that combines Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO) to effectively allocate resources while considering capacity and latency constraints. The proposed ETOS leverages the collaborative capabilities of MEC servers to minimize energy consumption during task execution. Extensive simulations demonstrate that ETOS outperforms existing baseline methods in terms of energy utilization, response delay, and offloading utility, particularly under limited resource conditions. Despite its promising results, the research highlights the need for real-world validation and the potential complexity of implementing the hybrid optimization approach in practical scenarios, suggesting directions for future work to enhance applicability and effectiveness in real-world MEC systems.

The research in study [12] focuses on developing a novel multi-classifier algorithm aimed at optimizing energy-efficient task offloading in fog computing environments for IoT applications. As the number of connected devices increases, efficient resource management becomes crucial to minimize energy consumption and enhance service quality. The proposed algorithm evaluates various attributes related to tasks, network conditions, and the processing capabilities of fog nodes to determine the most suitable node for task execution. By leveraging machine learning techniques, the algorithm aims to improve decision-making processes regarding task offloading, thereby reducing execution time and energy usage. The study emphasizes the importance of balancing energy efficiency with performance metrics, demonstrating that the multi-classifier approach can significantly enhance Quality of Service (QoS) parameters. In summary, this research contributes to the ongoing efforts to optimize fog computing frameworks, making them more effective in handling the computational demands of IoT applications while addressing energy constraints.

The authors of study [44] propose a novel framework to optimize energy consumption and computational efficiency in IoT environments. It introduces a three-layer architecture comprising sensor, edge, and cloud layers, facilitating effective task offloading and resource management. The study employs Long Short-Term Memory (LSTM) networks for accurate workload prediction, enabling the system to adapt to dynamic conditions. Additionally, it utilizes Lyapunov optimization methods to address the non-convex nature of resource allocation problems. Simulation results demonstrate significant improvements in energy efficiency and processing rates. However, the research acknowledges limitations, including concerns about scalability, the assumptions made regarding user behavior, and a lack of focus on security aspects. Therefore, the paper contributes valuable insights into mobile edge computing, highlighting the potential for enhanced performance in IoT networks while suggesting areas for further exploration and refinement.

The research of [22] presents the Edge Intelligent Energy Consumption Model (ECMS) aimed at optimizing energy usage in MEC environments. As energy consumption in edge data centers becomes increasingly critical, the ECMS model provides a framework for predicting and managing energy needs based on varying workloads, including CPU-intensive, Web-transactional, and I/O-intensive tasks. The authors validate the model through experimental results, demonstrating its superior performance in accuracy and training time compared to existing models like TW BP PM and FSDL. The study categorizes energy consumption modeling into two main approaches: system resource utilization and Performance Monitor Counter (PMC)-based modeling. The ECMS model leverages a simpler network topology and lower input dimensions, resulting in reduced training time and CPU workload during execution. The findings indicate a strong correlation between energy consumption and CPU utilization, emphasizing the need for precise energy models to inform optimization algorithms. The research concludes with future directions, suggesting the extension of the ECMS model to mixed workloads and its integration with advanced AI/ML techniques, such as reinforcement learning, to further enhance energy efficiency in edge computing environments, ultimately contributing to sustainable practices in the industry.

A technical analysis of service placement approaches in the context of edge computing has been conducted by the authors of [45], with the aim of addressing energy efficiency in the edge computing paradigm for IoT systems. The main objective of this research work has been to identify effective and efficient strategies for service placement in IoT environments, along with a taxonomy that categorizes the studies in this field in terms of the cloud-edge service placement approaches and algorithms that have been utilized. In addition to the technical analysis, a statistical analysis has also been provided by the authors, along with evaluation factors, through which the research findings provide further insights for future research in this paradigm. The results suggest that service placement approaches fall into three categories, namely decentralized, centralized, and hierarchical, while the Genetic Algorithm has been widely utilized by researchers compared to other algorithms such as Greedy, Markov, BSAP, TOPSIS, and polynomial algorithms. Furthermore, time, cost, and latency have been identified as the evaluation metrics of greatest concern in the context of service placement. Finally, the authors provide insights into open issues and challenges in the context of service placement that are vital for future research focusing on service placement.

The research in study [19] presents a heterogeneous cluster-based wireless sensor network (WSN) model aimed at optimizing task allocation to minimize energy consumption and balance load. The network consists of clusters, each with a cluster center and several sensor nodes, where the cluster center collects data from the nodes and communicates with a central processor. The study establishes a task model in which complex tasks are divided into independent subtasks, each requiring specific resources, data sizes, and computation times. To address the task allocation problem, the authors propose a Fusion Algorithm (FA) that integrates Genetic Algorithm (GA) and Ant Colony Optimization (ACO) techniques. This algorithm features a novel mutation operator and a new population initialization method, enhancing its effectiveness in reducing energy consumption and balancing the load across the network. Experimental results demonstrate that the FA outperforms the traditional GA, achieving 8.1% lower energy consumption and significantly reducing the load on both sensor nodes and cluster centers. The proposed approach ensures all sensors remain operational throughout task execution, thereby increasing the reliability and longevity of the WSN. Therefore, the research contributes valuable insights into efficient task allocation strategies in edge computing environments.
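Metaheuristics of the kind used in [19] typically minimize an objective that blends total energy with load imbalance. The sketch below is a hypothetical example of such a fitness function for a subtask-to-node assignment; the weights, cost table, and structure are illustrative assumptions rather than the authors' formulation.

```python
def allocation_cost(assignment, subtask_energy, w_energy=0.7, w_balance=0.3):
    """Toy fitness for a subtask-to-node assignment: weighted sum of total
    energy and load imbalance (max-min spread across used nodes). A GA or
    ACO search would look for an assignment minimizing this value."""
    loads = {}
    total_energy = 0.0
    for subtask, node in assignment.items():
        energy = subtask_energy[subtask][node]
        total_energy += energy
        loads[node] = loads.get(node, 0.0) + energy
    imbalance = max(loads.values()) - min(loads.values())
    return w_energy * total_energy + w_balance * imbalance

if __name__ == "__main__":
    # Hypothetical per-node energy cost (J) for each subtask.
    costs = {"s1": {"n1": 4.0, "n2": 6.0}, "s2": {"n1": 5.0, "n2": 3.0}}
    candidate = {"s1": "n1", "s2": "n2"}
    print(allocation_cost(candidate, costs))   # lower is better
```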
The researchers of study [46] investigate an RIS-assisted Non-Orthogonal Multiple Access (NOMA) Mobile Edge Computing (MEC) network, focusing on minimizing energy consumption for users. The authors propose a joint optimization approach that includes RIS phase shifts, data transmission rates, power control, and transmission time. Due to the non-convex nature of the problem, the authors decompose it into two sub-problems: the first utilizes a dual method to obtain a closed-form solution with a fixed RIS phase vector, and the second employs a penalty method for suboptimal power control solutions. The optimization process alternates between these sub-problems until convergence is achieved. To demonstrate the effectiveness of their proposed NOMA-MEC scheme, the authors compare it against three benchmark schemes: TDMA-MEC partial offloading, full local computing, and full offloading. These researchers have also introduced an alternating 1-D search method for optimizing RIS phase shifts in the TDMA-MEC scheme. Numerical results indicate that the proposed scheme significantly reduces overall energy consumption and highlight the impact of user distance on performance. The paper concludes by acknowledging the potential for future work on robust transmission design to address channel state information estimation errors.

Another group of researchers in [47] have worked on a deep reinforcement learning (DRL) approach for delay-aware and energy-efficient computation offloading in dynamic MEC networks with multiple users and servers. The primary objective is to maximize the number of tasks completed before their deadlines while minimizing energy consumption. The proposed DRL model operates in an end-to-end manner, eliminating the need for post-action optimization functions, and can handle a large action space without relying on traditional optimization methods. The study formulates the offloading problem as a Markov Decision Process (MDP), capturing the complexity of the MEC system by incorporating time-varying channel conditions and various task profiles. Extensive simulations demonstrate that the proposed DRL model significantly outperforms existing DRL models and greedy algorithms in terms of task completion and energy efficiency. The results indicate that the DRL model learns optimal policies over time, effectively managing the trade-off between exploration and exploitation during training. The conducted research highlights the potential of DRL in enhancing the performance of MEC systems, making it a valuable contribution to the field of IoT and edge computing.
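DRL formulations of this kind generally hinge on a reward that trades off deadline satisfaction against energy and delay. The following is a minimal, hypothetical per-task reward of that shape, intended only to illustrate the pattern; the weights and terms are assumptions and do not correspond to the reward functions used in [13] or [47].

```python
def offloading_reward(completed_on_time, energy_j, delay_s,
                      w_energy=0.05, w_delay=0.5, bonus=1.0):
    """Illustrative per-task reward for a DRL offloading agent: reward meeting
    the deadline, penalize energy spent and delay incurred. Real studies use
    their own weightings and state/action definitions."""
    return (bonus if completed_on_time else -bonus) \
        - w_energy * energy_j - w_delay * delay_s

if __name__ == "__main__":
    # One task finished within its deadline, using 4 J over 0.3 s.
    print(offloading_reward(True, energy_j=4.0, delay_s=0.3))   # -> 0.65
```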
The research in [48] presents a novel approach to enhance task offloading in dynamic vehicular environments. It addresses the limitations of existing centralized and decentralized Deep Reinforcement Learning (DRL) algorithms, which often struggle with computational constraints and coordination issues. The proposed framework introduces a multi-layer Vehicular Edge Computing (VEC) architecture that optimizes task management across vehicles, edge servers, and cloud resources. Key contributions include the development of an energy-efficient VEC framework that considers the diverse computing capabilities of network entities and introduces a utility function to enhance energy efficiency. Additionally, a decentralized Multi-Agent Deep Reinforcement Learning (MADRL) algorithm is proposed, which effectively adapts to changing conditions while minimizing latency and maximizing task completion rates. The research also provides a comprehensive performance evaluation using simulations in a realistic environment, demonstrating the effectiveness of the proposed solution in managing varying traffic densities. The conducted research highlights the potential of decentralized approaches in improving the efficiency and responsiveness of vehicular networks, paving the way for advancements in autonomous vehicle applications and real-time data processing.

The research in [24] presents a novel cloud-edge cooperative content-delivery strategy aimed at minimizing network latency in asymmetrical Internet of Vehicles (IoV) environments. By leveraging Deep Reinforcement Learning (DRL), the authors propose a Deep Q-Network (DQN) policy that optimizes content caching and request routing based on the observed request history and current network states. The study formulates the joint allocation of heterogeneous resources as a queuing-theory-based delay minimization objective, addressing the challenges of computational complexity and dynamic network conditions. Extensive simulations demonstrate that the proposed strategy significantly reduces network latency compared to existing solutions, showcasing its adaptability to varying user requirements and network states. The findings indicate that the DQN model achieves fast convergence and improved performance across different scenarios. The paper concludes with a discussion on future work, including the exploration of end-user mobility and deeper collaboration among mobile users, edge, and cloud networks to enhance overall Quality of Experience (QoE) while balancing network delay and energy consumption. This research contributes valuable insights into optimizing resource allocation in IoT-edge-cloud systems, particularly in the context of intelligent transportation systems.

Table I summarizes the recent work discussed above in terms of the objective, approach, key findings, and remarks of each research work.
Fig. 1. Mind map with key findings from the conducted review.
objective load balancing approach using reinforcement learning algorithm', Computing, vol. 105, no. 6, pp. 1337–1359, 2023.
[19] J. Wen, J. Yang, T. Wang, Y. Li, and Z. Lv, 'Energy-efficient task allocation for reliable parallel computation of cluster-based wireless sensor network in edge computing', Digital Communications and Networks, vol. 9, no. 2, pp. 473–482, 2023.
[20] M. Raeisi-Varzaneh, O. Dakkak, A. Habbal, and B.-S. Kim, 'Resource scheduling in edge computing: Architecture, taxonomy, open issues and future research directions', IEEE Access, vol. 11, pp. 25329–25350, 2023.
[21] H. Huang, W. Zhan, G. Min, Z. Duan, and K. Peng, 'Mobility-aware computation offloading with load balancing in smart city networks using MEC federation', IEEE Transactions on Mobile Computing, 2024.
[22] Z. Zhou, M. Shojafar, J. Abawajy, H. Yin, and H. Lu, 'ECMS: An Edge Intelligent Energy Efficient Model in Mobile Edge Computing', IEEE Transactions on Green Communications and Networking, vol. 6, no. 1, pp. 238–247, 2022.
[23] K. Sathupadi, 'AI-driven energy optimization in SDN-based cloud computing for balancing cost, energy efficiency, and network performance', International Journal of Applied Machine Learning and Computational Intelligence, vol. 13, no. 7, pp. 11–37, 2023.
[24] T. Cui, R. Yang, C. Fang, and S. Yu, 'Deep reinforcement learning-based resource allocation for content distribution in IoT-edge-cloud computing environments', Symmetry, vol. 15, no. 1, p. 217, 2023.
[25] Y. Mansouri and M. A. Babar, 'A review of edge computing: Features and resource virtualization', Journal of Parallel and Distributed Computing, vol. 150, pp. 155–183, 2021.
[26] C. Jian, L. Bao, and M. Zhang, 'A high-efficiency learning model for virtual machine placement in mobile edge computing', Cluster Computing, vol. 25, no. 5, pp. 3051–3066, 2022.
[27] L. Urblik, E. Kajati, P. Papcun, and I. Zolotová, 'Containerization in Edge Intelligence: A Review', Electronics, vol. 13, no. 7, p. 1335, 2024.
[28] J. Zhang, X. Zhou, T. Ge, X. Wang, and T. Hwang, 'Joint task scheduling and containerizing for efficient edge computing', IEEE Transactions on Parallel and Distributed Systems, vol. 32, no. 8, pp. 2086–2100, 2021.
[29] S. Hu, W. Shi, and G. Li, 'CEC: A containerized edge computing framework for dynamic resource provisioning', IEEE Transactions on Mobile Computing, vol. 22, no. 7, pp. 3840–3854, 2022.
[30] F. Zhou and R. Q. Hu, 'Computation Efficiency Maximization in Wireless-Powered Mobile Edge Computing Networks', IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3170–3184, 2020.
[31] X. Zhou, X. Yang, J. Ma, I. Kevin, and K. Wang, 'Energy-efficient smart routing based on link correlation mining for wireless edge computing in IoT', IEEE Internet of Things Journal, vol. 9, no. 16, pp. 14988–14997, 2021.
[32] T. Vaiyapuri, V. S. Parvathy, V. Manikandan, N. Krishnaraj, D. Gupta, and K. Shankar, 'A novel hybrid optimization for cluster-based routing protocol in information-centric wireless sensor networks for IoT based mobile edge computing', Wireless Personal Communications, vol. 127, no. 1, pp. 39–62, 2022.
[33] A. Javadpour et al., 'An energy-optimized embedded load balancing using DVFS computing in cloud data centers', Computer Communications, vol. 197, pp. 255–266, 2023.
[34] S. K. Panda, M. Lin, and T. Zhou, 'Energy-efficient computation offloading with DVFS using deep reinforcement learning for time-critical IoT applications in edge computing', IEEE Internet of Things Journal, vol. 10, no. 8, pp. 6611–6621, 2022.
[35] A. Cañete, M. Amor, and L. Fuentes, 'HADES: An NFV solution for energy-efficient placement and resource allocation in heterogeneous infrastructures', Journal of Network and Computer Applications, vol. 221, p. 103764, 2024.
[36] X. Cao, G. Zhu, J. Xu, Z. Wang, and S. Cui, 'Optimized power control design for over-the-air federated edge learning', IEEE Journal on Selected Areas in Communications, vol. 40, no. 1, pp. 342–358, 2021.
[37] X. Cao, G. Zhu, J. Xu, and S. Cui, 'Transmission power control for over-the-air federated averaging at network edge', IEEE Journal on Selected Areas in Communications, vol. 40, no. 5, pp. 1571–1586, 2022.
[38] X. Mo and J. Xu, 'Energy-efficient federated edge learning with joint communication and computation design', Journal of Communications and Information Networks, vol. 6, no. 2, pp. 110–124, 2021.
[39] H. Koumaras et al., '5G-enabled UAVs with command and control software component at the edge for supporting energy efficient opportunistic networks', Energies, vol. 14, no. 5, p. 1480, 2021.
[40] X. Shao, Z. Zhang, P. Song, Y. Feng, and X. Wang, 'A review of energy efficiency evaluation metrics for data centers', Energy and Buildings, vol. 271, p. 112308, 2022.
[41] Q. Zhang et al., 'A survey on data center cooling systems: Technology, power consumption modeling and control strategy optimization', Journal of Systems Architecture, vol. 119, p. 102253, 2021.
[42] M. Manganelli, A. Soldati, L. Martirano, and S. Ramakrishna, 'Strategies for improving the sustainability of data centers via energy mix, energy conservation, and circular energy', Sustainability, vol. 13, no. 11, p. 6114, 2021.
[43] E. Ahvar, A.-C. Orgerie, and A. Lebre, 'Estimating Energy Consumption of Cloud, Fog, and Edge Computing Infrastructures', IEEE Transactions on Sustainable Computing, vol. 7, no. 2, pp. 277–288, 2022.
[44] Q. Wang, L. T. Tan, R. Q. Hu, and Y. Qian, 'Hierarchical Energy-Efficient Mobile-Edge Computing in IoT Networks', IEEE Internet of Things Journal, vol. 7, no. 12, pp. 11626–11639, 2020.
[45] L. Heng, G. Yin, and X. Zhao, 'Energy aware cloud-edge service placement approaches in the Internet of Things communications', International Journal of Communication Systems, vol. 35, no. 1, p. e4899, 2022.
[46] Z. Li et al., 'Energy Efficient Reconfigurable Intelligent Surface Enabled Mobile Edge Computing Networks With NOMA', IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 2, pp. 427–440, 2021.
[47] L. Ale, N. Zhang, X. Fang, X. Chen, S. Wu, and L. Li, 'Delay-Aware and Energy-Efficient Computation Offloading in Mobile-Edge Computing Using Deep Reinforcement Learning', IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 3, pp. 881–892, 2021.
[48] M. Fardad, G.-M. Muntean, and I. Tal, 'Decentralized vehicular edge computing framework for energy-efficient task coordination', in 2024 IEEE 99th Vehicular Technology Conference (VTC2024-Spring), 2024, pp. 1–7.
[49] K. Kaur, S. Garg, G. S. Aujla, N. Kumar, J. J. P. C. Rodrigues, and M. Guizani, 'Edge Computing in the Industrial Internet of Things Environment: Software-Defined-Networks-Based Edge-Cloud Interplay', IEEE Communications Magazine, vol. 56, no. 2, pp. 44–51, 2018.
[50] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, 'A Survey on Internet of Things: Architecture, Enabling Technologies, Security and Privacy, and Applications', IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1125–1142, 2017.
[51] U. M. Malik, M. A. Javed, S. Zeadally, and S. ul Islam, 'Energy-Efficient Fog Computing for 6G-Enabled Massive IoT: Recent Trends and Future Opportunities', IEEE Internet of Things Journal, vol. 9, no. 16, pp. 14572–14594, 2022.