Content
Content
Abstract
1. Introduction
7. Conclusion
Integrating encryption, access control, threat detection, and risk
management forms a cohesive cybersecurity strategy capable of addressing
the dynamic nature of cyber threats. Continuous advancements in
technology and methodologies are essential to maintain the effectiveness of
cybersecurity measures.
References
Feel free to copy and paste this content directly into your IEEE-format
research paper. If you need further assistance with formatting or additional
sections, please let me know!
Skip to content
ChatGPT
Log in
You said:
ChatGPT said:
Sure, go ahead and share the previous chat. I'll review it and help you
continue from there or address any questions you have.
You said:
["i am giving you whole chat conversation can you give me details what they
are speaking and what to do" "Sure, please go ahead and share the full chat
conversation. Once I have it, I’ll summarize what they are discussing and let
you know what actions (if any) need to be taken."] [""him: Secure Cloud
Storage Architectures: A Survey of Techniques and Tools topic selected for
survey paper me(neel) and aryan (me)" am giving you 1st conversation to
know who is him and me and tell me what you undersatnd what is my name
and what is his name" "Based on the conversation you provided: **"him:
Secure Cloud Storage Architectures: A Survey of Techniques and Tools topic
selected for survey paper me(neel) and aryan (me)"** Here's what can be
understood: * The message is written from the perspective of **"him"** (the
other person in the conversation). * He says: "**me (neel)** and **aryan
(me)**". That means: * He is referring to **you** as **Aryan**. * He is
referring to **himself** as **Neel**. Summary: * **Your name:** Aryan *
**His name:** Neel * **Topic selected:** *"Secure Cloud Storage
Architectures: A Survey of Techniques and Tools"* * **Purpose:** For a survey
paper (academic project or assignment). Would you like help drafting the
paper or outlining its contents?"] ["ok now am giving you whole conversation
tell me after that what to do" "Perfect — please share the full conversation,
and once I have it, I’ll analyze it and let you know: 1. What they are
discussing. 2. Who is responsible for what. 3. What steps you should take
next. Go ahead and paste the full conversation."] ["me: bhai matlab yeah
survey paper duo meh banana he aisa na? him: Ha me: ok him: Me content
ready Kar dunga Fir sir se approval mil Jaye to tu proper document kar dena
Means formatting me: ok. and meh yeah bhi soch raha hu ki meh bhi content
find karu aur him: Ya tuje bhi 2 researh paper padh ke me: tuj se approve
karu him: okay *reply of (ok. and meh yeah bhi soch raha hu ki meh bhi
content find karu aur) me: ki bara bar he ki nai him: 2 paper me padtha hu
And 2 tu kar Fir combine kar lenge me: ok. samaj meh him: To me aabhi
paper dhundh raha hu Aur bhejta hu me: ok sure "shared the 2 files" him:
Aabhi ye 2 kar me: ok him: Aagar ho Jaye to 1 aur bhejta hu" "Thanks for
sharing the full conversation. Here's a clear breakdown of what's happening,
who's responsible for what, and what you (Aryan) need to do next. 🔍 **Who’s
Who:** * **You = Aryan** * **Him = Neel** 🧠 **Summary of the
Conversation:** * You both are working as a **duo** on a **survey paper**
titled: *“Secure Cloud Storage Architectures: A Survey of Techniques and
Tools”*. * **Neel (him)** will: * Prepare the initial **content** by reading and
extracting information from research papers. * Share those research papers
with you. * Review your contributions too. * **You (Aryan)** will: * Help by
reading at least **2 research papers** yourself and extracting content. *
**Get your content approved** by Neel. * Later, you will take the **combined
content** from both of you and do the **proper formatting** for the final
document (likely for submission to a professor or journal). * Neel already
shared **2 research papers** with you and said to start working on those. ✅
**What You Should Do Next:** 1. **Read the 2 research papers** that Neel
sent you. 2. **Extract important points** from them related to secure cloud
storage architectures, techniques, and tools. 3. **Share your content** with
Neel to get his approval. 4. After both of you finish content collection: *
**Combine** both parts (Neel’s and yours). * **Format** the full document
properly (e.g., headings, references, formatting as per guidelines). * Then
**submit** it or send it for professor’s approval. Would you like help
analyzing those two research papers or creating a formatted survey paper
template?"]
ChatGPT said:
Thanks for providing the full conversation and earlier context. Here's a clear
and concise overview of everything discussed so far, along with your next
steps:
🧑🤝🧑 Participants
You: Aryan
Him: Neel
📚 Project Overview
Conversation Summary
Neel will:
Neel said once you're done with the two papers, he may send another
one.
4. Wait for any additional papers from Neel (he mentioned possibly
sending one more).
Combine them.
You said:
shall i give you both documents can you give me related thing which is
needed
ChatGPT said:
Yes, absolutely — please upload both documents (the research papers Neel
shared), and I’ll:
Architectural models
Security techniques
Tools/technologies mentioned
4. Organize the findings in a way you can easily include in your survey
paper.
["See discussions, stats, and author profiles for this publication at:
https://www.researchgate.net/publication/377702367 A SURVEY OF SECURE
CLOUD STORAGE MECHANISM: CHALLENGES AND ITS SOLUTIONS Article in
INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING &
TECHNOLOGY · January 2021 DOI: 10.34218/IJARET.12.1.2021.032 CITATIONS
0 READS 122 2 authors, including: Iaeme Pub 297 PUBLICATIONS 574
CITATIONS All content following this page was uploaded by Iaeme Pub on 26
January 2024. The user has requested enhancement of the downloaded file.
International Journal of Advanced Research in Engineering and Technology
(IJARET) Volume 12, Issue 1, January 2021, pp. 354-361, Article ID:
IJARET_12_01_032 Available online at
http://www.iaeme.com/IJARET/issues.asp?JType=IJARET&VType=12&IType=1
Journal Impact Factor (2020): 10.9475 (Calculated by GISI) www.jifactor.com
ISSN Print: 0976-6480 and ISSN Online: 0976-6499 DOI:
10.34218/IJARET.12.1.2021.032 © IAEME Publication Scopus Indexed A
SURVEY OF SECURE CLOUD STORAGE MECHANISM: CHALLENGES AND ITS
SOLUTIONS Banoth Anantharam Research Scholar, Department of Computer
Science and Engineering, Sri Satya Sai University of Technology & Medical
Sciences, Sehore, Madhya Pradesh, India. Assistant Professor, Department of
Computer Science and Engineering, Keshav Memorial Institute of Technology,
Narayanaguda, Telangana, India. ABSTRACT Cloud storage has been a boon
for many organizations and the individuals as it reduces the burden of safety
and security of their data, in addition to minimizing the investment for
infrastructure. Every cloud service provider offers not only the storage
service but it also extends to preserve the data privacy and security. The
inherent mechanism that many cloud service providers implement is n-
backup by which the client is guaranteed for data recovery in the case when
the original copy of data is damaged or lost. However, for each of the backup
copy, it is evident that more storage space is required. In this digital big data
era, industry is experiencing the data explosion hence more space and
infrastructure would become a necessary need for the service providers. In
this paper, a study is conducted on various cloud storage mechanisms,
challenges and identified the gaps. Finally, possible solutions are mentioned
which is a need for the present cloud storage scenario. Key words: cloud
computing, cloud storage, data centre, storage mechanisms, storage backup.
Cite this Article: Banoth Anantharam, A Survey of Secure Cloud Storage
Mechanism: Challenges and its Solutions, International Journal of Advanced
Research in Engineering and Technology, 12(1), 2021, pp. 354-361.
http://www.iaeme.com/IJARET/issues.asp?JType=IJARET&VType=12&IType=1
1. INTRODUCTION The fundamental concept of Cloud Computing to perform
the user tasks in distributed, parallel mode by using appropriate technologies
through which the tasks can be completed at a faster pace. In the cloud
computing paradigms, Infrastructure-as-a-Service (IaaS) plays a major role
with the storage component. To offer more robust storage mechanisms,
cloud storage providers offer n-backup mechanism. The mechanism involves
number of backup copies for an original copy of data range from two to
three. The main objective of offering such n-backup mechanism is to create
the fault-tolerance and resilience for the applications as well as the owners of
the data. Apparently, the cloud storage can be used for various purposes
such as backup and recovery, software test and development, cloud data
migration, building big data and data lakes, etc. Moreover, cloud storage
requirements are availability, data integrity and security. Considering the
artefacts of the cloud storage, Figure. 1, Figure 1. Architecture of Cloud
Storage the user would get benefitted to the maximum extent in the context
of the reducing the resources needed to store the data, deploy the system
and maintain the data. 2. PROBLEM STATEMENT The work proposed in this
paper if for an optimized and secured storage solution, for the cloud storage
service providers, which shall reduce the burden on the storage systems.
From the Figure 2 it can be observed that while storing the data in the cloud,
the system experience following issues: 2.1. Load on Storage System In case
of the small-sized objects maintaining extra copies doesn’t create much
storage load on the cloud storage servers, but if the objects are larger in size
then the data redundancy creates an exponential growth in the requirement
for cloud storage resources which not only increases the cost of maintenance
but also consumes lot of physical space required to implement the cloud
storage servers. Though the cloud service providers have projected the
reason of cooling the system but dearth of space on earth is also a potential
reason to setting up the cloud system in the ocean such Microsoft did it in
the Pacific Ocean. 2.2. Load on Networking Resources Two types of network
resources are required while the data has to be stored in the cloud storage:
data generated by the external system, and the data generated in the cloud
application. If the data is generated in the external system to be stored in
the cloud storage then the operation requires two kinds of networking
resources: user network-to-cloud network, cloud network-to- cloud network.
If the data is generated in the cloud application then user-level networking
resource is not needed whilst internal cloud network itself is sufficient.
However, in both the scenarios, much of the load that is generated in the
internal cloud network would be tripled for every chunk of the data to be
stored. Figure 2. Storage Engine Services adopted by Dell-EMC for Elastic
Compute Service (ECS) 2.3. Load on Storage-Connected Computational
Resources For every object to be stored in the cloud it needs to compute the
amount of storage required for the object vs the availability of storage
devices. At the same time, the computation time is also tripled for every
object to find the three storage units, one for the original file and two others
for backup in the context of future recovery which is the part of the definition
of the cloud. 3. LITERATURE REVIEW Generally, large-scale data centres
implement remote backups for the improvement of system reliability for
achieving the efficient fault tolerance. But considering the distance of the
remote locations which result into transmission cost, Yu Hua et.al. [1] have
et.al. have proposed a cost- efficient method for remote backup services in
the context of Enterprise Clouds, called Neptune. As part of the work, the
authors have developed a filtration scheme to transmit the data to the long-
distance data centers by eliminating the redundancy. The reduction of
redundancy is achieved by incorporating the method of compressing the
similarity of files. It further incorporates deduplication at chunk-level and
uses delta-compression for compressing the files. In order to reduce the
overheads and complexity, Neptune makes use of locality-aware hashing for
grouping the similar files and suggests shortcut delta chains for fast remote
recovery. Before developing any solution for the cloud storage technology, it
is inevitable to understand the architectures that are implemented
successfully for in the existing scenario. Yuhuan Qiu et.al. [2] have presented
concepts and architecture models that are implemented for the cloud
storage in most of the cloud solutions. In the process the authors have
presented the features and advantages of which space leasing, reducing the
management difficulty, data replication and backup, scalability are elevated
to be the most prominent and important. The types of cloud systems vary
with the need and the customer orientation such as individual-level storage
requirement to industry-level storage requirement. Hence, depending upon
on the scale of requirements, and the context for which the storage solution
is designed. iCloud, Baidu cloud disk have been discussed along with the
issues like broadband bottlenecks, infrastructure, security and availability.
When the services are offered by the cloud service providers include
Infrastructure-as-a- service then they have to concentrate more on number
of components for IaaS’s functionality. Additionally, the functionality has to
be in accordance with huge system duplication of components but it
increases the cost of maintainability and operability. S.R.Ali [3] have
analysed the cloud computing reliability in the context of unified
infrastructure and proposed the methods to implement it a reduced cost. The
solution should also provide the operator that consists of computing
elasticity which helps the functionalities to grow or de-grow the cloud based
on services on-demand. Because the clouds are located at various
geographical locations, Single Point-of-Failure (SPOF) can be considerably
reduced. As part of the analysis server virtualization has been studied in the
context of hypervisors, various virtual states, transition states for which
Virtual Machine Markov model has been studied. Apart from them, VM
recovery mechanisms, cloud failover, container virtualization, and reliability
analysis of VoIP in cloud environment have been discussed. Zhang et.al. [4]
have designed and implemented data backup and recovery system with
security enhancements which mainly focuses on mainly focus on the
availability including the confidentiality in the backup operation as well as
recovery operation. The work has considered various modules in the
architecture of the system such as backup module, recovery module,
network communication module, logging and transmission module, task
management module which exist at the upper layers of the architecture. The
key module i.e., security enhancement module, consists of various modules
such as identity authentication, operation approval, account system, log
audit, data protection, backup data management, and policy management.
The system communication has been defined for the communication
between client, web console and the backup and recovery server. Through
their work the authors suggest that when user authentication is
strengthened, verification of the legitimacy between the data and target
computer, the process control during business operations and the audit of
the key operation, the system can prevent administrators from backing up
data to illegal disk and recovering backed up data to illegal target machine
and other similar risk of data leakage. To achieve the budget association and
savings, efficiency of government agencies’ business-processes, one of the
mechanisms is to implement a new information model which is migration-
based for the benefit of cloud computing, ICT-outsourcing and consolidating
the orders. The key requirements were proposed by Aubakirov et.al. [5] to
support the open standards for storage and distribution of virtual machines;
Logical partitioning of resources pools provided by virtualization platform on
virtual computing data centers with fixed service quality; solutions on
provision of network safety, etc. In the process the authors have defined
various subsystems such as services, resources virtualization, computing
platform, data transfer, data storage, communication, data centers,
management and monitoring, information safety, backup and recovery, and
technical assistance. The authors have observed that the computing
capacities’ efficiency per kilowatt-hour will increase, which in-turn will lead to
the increase in environmental friendliness of government agencies
performance. In the work submitted by Odun-Ayo et.al. [6] have presented
various aspects involved with data storage in cloud computing. The authors
have perceived the key issues that are part of cloud storage services are
deployment, virtualization and availability, data organization, data migration
and load balancing, data deduplication, and security. Furthermore, they have
listed the security concerns such as data privacy and integrity, data recovery
and vulnerability, improper media sanitization, data backup, and data
outage. In the process of exploring the key features the work has included
the comparison of IBM cloud object storage vs Amazon S3 for which
single/multi-tenancy options, deployment options, customization and control,
and API support were compared. The results of the comparison has resulted
to diversity in which IBM cloud object storage has produced better
throughput than Amazon S3, while read and write latencies were much lower
than S3. Outsourcing of matrix multiplications tasks at a larger scale to many
distributed servers or cloud is necessary to increase the speed of
computation. But security is a serious concern when these servers are not
trustworthy. Chang et.al. [7] have studied the problem of secured and
distributed version of matrix multiplication from the view point of servers
which are not trustworthy over distributed servers. This issue is categorized
to be part of secure function computation and has established substantial
attention in the cryptography community. But, portraying the essential
bounds of information-theoretically secure matrix multiplication still remains
as an open issue. Their focus was on information-theoretically secure
distributed matrix multiplication with an objective of portraying the minimum
communication overhead. The ability of secure matrix multiplication was
defined as “the maximum possible ratio of the desired information and the
total communication received from N distributed servers”. After the
investigation of single-sided and fully secure matrix multiplication issues the
authors have proposed a scheme based on secret sharing for the single-
sided secure matrix multiplication model. Furthermore, two interesting open
problems have been proposed: find a converse (upper bound), for a fully
secure matrix multiplication issue; and generalizing these ideas for other
secure distributed computation tasks. Security and privacy are the most
critical issues that need to be addressed in designing a computing
environment that is reliable and trustworthy. Tariqul et.al. [8] have classified
and characterized the various security and privacy challenges associated
with cloud computing. In the process they have conducted a survey on three
well-known cloud security frameworks namely ENISA, CSA, and NIST that aim
to provide a compilation of risks, vulnerabilities and also the best practices to
resolve them. These three entities provide a comprehensive overview of the
current security, privacy, and trust issues, and thus, help in understanding
the current status of Cloud security. They have also presented a variety of
security and privacy concerns associated with Cloud Computing, identified
major threats and vulnerabilities, and classified them into six categories:
Network Security, Virtualization and Hypervisor Security, Identity and Access
Management, Data and Storage Security, Governance, and Legal and
Compliance issues. Each of these categories identified several threats and
vulnerabilities, resulting in further classification. The authors have observed
that it is evident that for the wide spread adoption of the cloud, these issues
must be addressed thoroughly. Therefore, enrichment of the existing solution
techniques as well as more innovative approaches need to mitigate these
problems are needed. The latest technological developments have ignited
the popularity and success of the cloud. Since it provides cost-efficient
architectures that provide the transmission, storage, and exhaustive
computing of data, this new paradigm is gaining an increasing interest.
Kaaniche et.al. [9] have conducted an exhaustive study to deliver a
consistent assessment about both data security concerns and privacy issues
that are faced by clients in cloud storage environments. This work brings a
critical relative analysis of cryptographic defense methods. Apart from this,
their work has explored research directions in addition to the technology
trends to report the protection of outsourced data in cloud infrastructures.
The foundation for the their work is that the storage services carry a good
number of challenging design issues, significantly because of both loss of
abstract nature of clouds and data control. The authors are in a perception
that there exists no cryptographic mechanism that can be installed to
guarantee privacy of users, for the settings in data sharing. In the process it
was also found that Fully Homomorphic encryption was a proven method as
unable to implement privacy demands of the common cloud services. It is
also observed that the schemes of FHE cannot be completely private due to
their inability to support circuit complication. Therefore, it has been deduced
that computations over muti- client inputs needs reliable hardware. Following
this approach, it is assured that Secure Multiparty Computation (SMC) can be
capable resolution to privacy-preserving processing. In general, SMC
mechanisms permit quite a lot of clients to collaboratively exicute any
function, over private inputs. This makes the distribution of trust to be
possible across many entities. One of the potential issues in privacy and
security is stimulated by the fact of likelihood that cloud operators can
access the user’s sensitive data. This concern intensely rises users’
apprehension and decreases the adoptability of cloud computing in many of
the industries, such as the governmental agencies and financial industry. The
work proposed by Yibin Li et.al. [10] has focused on this issue and proposed
an approach namely, intelligent cryptography by which the cloud service
operators would not be able to directly reach partial data. Their approach
splits the file and stores the data separately in the cloud servers which are
distributed across various locations. They have designed an alternative
approach for determining whether the data packets require a split to
condense the operation time. They have named the proposed scheme as
Security-Aware Efficient Distributed Storage (SA-EDS) model, which are
supported by their algorithms. These algorithms include: Efficient Data
Conflation (EDCon) Algorithm, Secure Efficient Data Distributions (SED2)
Algorithm and Alternative Data Distribution (AD2)Algorithm. The
experimental calculations have evaluated both security and efficiency
performances. Moreover, the investigation results represent that their
approach can successfully protect main threats from clouds and needs with
an acceptable computation time. 4. RESEARCH GAP Although there’s an
extensive research that was conducted to reduce the load on the storage
systems, there’s a lot of scope to develop more effective solutions that can
address to optimize the storage usage. However, most of them are
hypothetical which are not practically possible to be implemented in the
cloud scenario. A systematic study in the context of storage optimization
would reveal good number of gaps that which are potential to enhance the
solutions. A solution as a Proof-of-work, in terms of optimization which does
not degrade the overall performance, if not enhanced, would always be
adoptable by the cloud service providers. In the context of cloud storage
most of the researches have proposed the techniques to compress the huge
volume of data, and data organization methods for the data that was
generated by the large-scale users. Whether the lossy or lossless
compression, every compression technique would stop after certain stage
when it reaches the trade-off, i.e., the computational resources required to
compress the huge-volume of data vs the time required to compress. These
techniques not only degrade the overall performance but also consumes lot
of energy and increase the computational costs which in turn would increase
the cost for the user. Cloud backup systems have sophisticated requirements
on reliability than common backup software. However, users unconsciously
feel it safer to backup critical data on visible devices. Doubts may arise
about the security of backing up private data in faraway data centers. Cloud
backup poses various abnormalities, as most of the providers concentrate on
providing security for the cloud backup than the backup software by which
the security issues would be as a result. Hence, the security for the data
backup is also an essential activity. 5. RESEARCH CHALLENGES While
implementing the solutions which were suggested by the researchers the
industry and the research teams have experienced many issues. These
issues range from the data size to the technologies adopted while
implementing the solutions at the data centre. Though many could be
solved, but the challenges as mentioned below needs a special focus. •
Before the data is backed up, more time has to be spent on identifying the
requirements of the file system at the storage. • While the data is being
backed up, Though the data security is vital, backup service security is at the
stake which would cause the fatal damage to the data. • An elastic security
service is needed for the cloud service provider because the types of cloud
data storage: object storage, file storage, and block storage. • Though high-
performance infrastructure is available with the cloud storage service
providers most of the resources are consumed for the scalability and
elasticity which needs to invoke various functions to be executed at the time
of redundant backups. • An asynchronous procedure is adopted for Cross-
region replication to backup each object in the storage, this would cause a
requirement of computation of location to be identified and then the creation
of meta data information. • Cloud service provider provides the data security
in two modes, client-side encryption and server-side encryption. Client-side
encryption ensures that the data is encrypted before it reaches the cloud
server, whereas the Server-side encryption ensures that the data is
encrypted without the client having the encryption facility. However, except
the large-scale organizations most of the small-scale organizations and
individuals would not prefer the client-side encryption. At the same time,
there are apprehensions across the users about the integrity of the security
services that are provided at the server-side. • Because of the distributed
architecture of the cloud servers elastic file system facility, higher
throughput can be experience if the I/O workloads are larger. For the lower
I/O workloads the throughput is not constant and is guaranteed to be higher.
• If the application at the client-side is not parallelizable across multiple
instances, then the throughput cannot be at a better number as the data
transfer is serialized. • Write performance and consistency of data transfer
speed is also dependent on whether the client system is able to support the
asynchronous writes to the file system. • Identity and Access Management
permission for API calls to access the backup software, security groups for
the cloud instances and file system permission for user-level, and group-
level must be defined appropriately. 6. CONCLUSION A study on various cloud
storage mechanisms was conducted and presented the observations, in this
paper. Most of the implementations have adopted the n-backup mechanism
to ensure the recovery in the case of any damage or data lost in the original
copy. It is observed that a potentially extra storage space, infrastructure and
related business logic have become a necessary need to implement n-
backup mechanism. As part of the study few issues have been identified
which ought to be addressed in order to reduce the storage space burden on
the service providers. Apparently, research challenges have also been
identified while implementing the solutions by the service providers and
presented the same in the paper. 7. SUGGESTIONS Developing the solutions
for a problem that potentially degrades the performance of any system
needs an immediate attention. From the discussion in this paper it is evident
that consumption of more space and investment for infrastructure would
always be a challenging issue for the cloud storage service provider. Hence,
there must be the development of solutions which reduce the burden of
extra storage space without compromising for the privacy and security.
These solutions shall not only reduce the storage load but also improve the
communication efficiency between the client and service provider.
REFERENCES [1] Hua, Yu, Xue Liu, and Dan Feng. "Cost-Efficient Remote
Backup Services for Enterprise Clouds." IEEE Transactions on Industrial
Informatics12, no. 5 (2016): 1650-657. doi:10.1109/tii.2016.2543258. [2]
Yuhuan, Qiu. "Cloud Storage Technology." Big Data and Cloud Innovation1,
no. 1 (2017). doi:10.18063/bdci.v1i1.508. [3] Ali S.R. (2019) Cloud
Computing Reliability Analysis. In: Next Generation and Advanced Network
Reliability Analysis. Signals and Communication Technology. Springer, Cham
[4] Zhang, Jianping, and Hongmin Li. "Research and Implementation of a
Data Backup and Recovery System for Important Business Areas." 2017 9th
International Conference on Intelligent Human-Machine Systems and
Cybernetics (IHMSC), 2017. doi:10.1109/ihmsc.2017.209. [5] Aubakirov,
Мargulan, and Evgeny Nikulchev. "Development of System Architecture for E-
Government Cloud Platforms." International Journal of Advanced Computer
Science and Applications7, no. 2 (2016). doi:10.14569/ijacsa.2016.070235.
[6] Odun-Ayo, Isaac, Olasupo Ajayi, Boladele Akanle, and Ravin Ahuja. "An
Overview of Data Storage in Cloud Computing." 2017 International
Conference on Next Generation Computing and Information Systems
(ICNGCIS), 2017. doi:10.1109/icngcis.2017.9. [7] Chang, Wei-Ting, and Ravi
Tandon. “On the Capacity of Secure Distributed Matrix Multiplication.” 2018
IEEE Global Communications Conference (GLOBECOM), 2018.
https://doi.org/10.1109/glocom.2018.8647313. [8] Islam, Tariqul &
Manivannan, D. & Zeadally, Sherali. (2016). A Classification and
Characterization of Security Threats in Cloud Computing. International
Journal of Next- Generation Computing. 7. [9] Kaaniche, Nesrine, and
Maryline Laurent. “Data Security and Privacy Preservation in Cloud Storage
Environments Based on Cryptographic Mechanisms.” Computer
Communications 111 (2017): 120–41.
https://doi.org/10.1016/j.comcom.2017.07.006. [10] Li, Yibin, Keke Gai,
Longfei Qiu, Meikang Qiu, and Hui Zhao. “Intelligent Cryptography Approach
for Secure Distributed Big Data Storage in Cloud Computing.” Information
Sciences 387 (2017): 103–15. https://doi.org/10.1016/j.ins.2016.09.005. [11]
Jangampet, Vinay Dutt, et al. "The Privacy Paradox: Balancing Cybersecurity
Measures with Individual Liberties in the Digital Era." International Journal of
Advanced Research in Science and Technology, vol. 10, no. 7, July 2020, pp.
255-258. https://ijarst.in/public/uploads/paper/400951701751987.pdf."] this
is first paper and will give you second one then perform whatever things are
given in chat to perform do it well then
ChatGPT said:
You said:
A Survey on Storage Techniques in Cloud Computing Samson Akintoye ISAT
Laboratory, Department of Computer Science, University of the Western
Cape, Cape Town, 7535, South Africa. Antoine Bagula ISAT Laboratory,
Department of Computer Science, University of the Western Cape, Cape
Town, 7535, South Africa. Yacine Djemaiel CNAS Lab, Carthage University,
Tunis, Tunisia Noureddine Bouriga CNAS Lab, Carthage University, Tunis,
Tunisia ABSTRACT In recent years, cloud computing is highly embraced and
considered a must have in the IT world. Cloud computing is a computing
paradigm, where a large pool of systems are connected in private, public or
hybrid networks, to provide dynamically scalable infrastructure for
computing resources. One of the services provided by cloud computing is
cloud storage that enables the upload and the retrieval of different kinds of
data. Accessing cloud storage service through Internet and pay as you used
subscription have been the reasons for the emergence of methods and
techniques to effectively store data and reduce storage security
vulnerabilities in the cloud storage. This paper provides a survey of some
proposed cloud storage methods and techniques, their advantages and
drawbacks and makes stress on the current requirements for storage
techniques in cloud computing. Keywords Cloud computing, Storage
Technique, Cloud Storage, Cloud Security 1. INTRODUCTION Recently, the
processing and the storage of huge volumes of data have been enhanced
enormously in these last decades due to the emergence of Cloud computing.
This concept is defined as a computing paradigm, where a large pool of
systems are connected in private, public or hybrid networks, to provide
dynamically scalable infrastructure for computing resources [18]. The
characteristics of cloud computing include on- demand self-service, broad
network access, resource pooling, rapid elasticity and measured service. On-
demand self-service means that organizations can access and manage their
own computing resources. Broad network access allows services to be
offered over the Internet or private networks. The cloud computing service
models are Software as a Service (SaaS), Platform as a Service (PaaS) and
Infrastructure as a Service (IaaS) [11]. In SaaS model, consumer uses the
provider’s applications running on a cloud infrastructure. In PaaS, an
operating system, hardware, and network are provided, and the customer
installs or develops its own software and applications. The IaaS model
provides just the hardware and network; the customer installs or develops its
own operating systems, software and applications. Cloud services are
typically provided through a private cloud, community cloud, public cloud or
hybrid cloud [11]. In public cloud, services are offered over the Internet and
are owned and operated by a cloud provider. In a private cloud, the cloud
infrastructure is operated solely for a single organization, and is managed by
the organization or a third party. In a community cloud, the service is shared
by several organizations and made available only to those groups. The
infrastructure may be owned and operated by the organizations or by a cloud
service provider (CSP). A hybrid cloud is a combination of two or more cloud
infrastructures (private, community, or public) that remain unique entities,
but are bounded together by standardized or proprietary technology that
enables data and application portability. One of the important services
provided by cloud computing is cloud storage, which is a service where data
is remotely maintained, managed, and backed up. The service is available to
users over a network, that is usually the Internet. It allows the user to store
files online on a “pay-as-you go” or subscription basis so that the user can
access them from any location via the internet. There is no need to purchase
storage before storing data. Only the amount of storage the data actually
consuming is paid [12]. Furthermore, accessing cloud storage service
through Internet and pay as you used subscription have been the reasons for
the emergence of methods and techniques to effectively store data and
reduce storage security vulnerabilities in the cloud storage. This paper
reviews and surveys some proposed cloud storage techniques, their
advantages and drawbacks, data management and retrieval from cloud
storages in addition to security mechanisms that may be implemented to
protect storage clouds. 2. STORAGE TECHNIQUES IN CLOUD COMPUTING In
this section, we review some methods and techniques to store data in cloud
computing. The techniques are grouped into Storage Techniques for cloud,
data management in cloud storage, data retrieval from cloud, authentication
schemes for cloud storage, and data integrity and availability for cloud
storage. 2.1 SETiNS: Storage Efficiency Techniques in No-SQL database for
Cloud Based Design Bhatia et al. [1] proposed a novel design for efficient
storage technique in No-SQL Database for cloud storage. According to this
technique, data is fetched from Hadoop Distributed file system (HDFS). Data
are stored in the form of flat text files and to applying SETiNS efficiently for
large data, the file is deduplicated using a MapReduce to provide the key-
value pairs on the basis of which pointer table is created using MongoDB.
MongoDB is a cross-platform, document oriented database that provides,
high performance, high availability, and easy scalability. MongoDB works on
concept of collection and document. Collection is a group of MongoDB
documents which is equivalent of a table structure in an RDBMS model. A
document is a set of key-value pairs represented as a dynamic schema. In
this work, key-value pairs with the pointer table are now stored as a single
document. To get more efficient storage, the file is compressed and saved in
HDFS to save bandwidth consumption due to the reduced size. Deduplication
and compression together are used to reduce storage space for unravelling
large volumes of data handling problems. The execution phase include:
deduplicated, MapReduce Paradigm, Integration of the key-value pairs and
compression in Hadoop. The result shows improved storage efficiency by
saving subsequent storage space and network bandwidth in the case of
internodes data transfer. 2.2 Dynamic Data Deduplication in Cloud Storage
Leesakul et al. [4] proposed a dynamic deduplication scheme for cloud
storage, to improve storage efficiency and maintaining redundancy for fault
tolerance. Data deduplication is a technique used to reduce storage space
and network bandwidth. In existing deduplication systems duplicated data
chunks identify, store only one replica of the data in storage and logical
pointers are created for other copies instead of storing redundant data. The
existing deduplication schemes may prevent the system fault tolerance since
it may be that several files refer to the same data chunk which may be
unavailable due to failure. The dynamic deduplication scheme was proposed
to balance between storage efficiency and fault tolerance requirements and
address limitation of static deduplication scheme that cannot cope with
changing user behavior. For instance, data usage in cloud changes overtime;
some data chunks may be read frequently in a period of time, but may not
be used in another period. Dynamic deduplication scheme has the capability
to adapt to various access patterns and changing user behavior in cloud
storages. The proposed system is based on client-side deduplication using
whole file hashing. Hashing process is performed at the client, and connects
to any one of deduplicators according to their loads at that time then
identifies the duplication by comparing with the existing hash values in
Metadata Server. The system is composed of the following components: load
balancer that requests from clients sending to any one of deduplicators
according to their loads at that time; Deduplicators which identify the
performed duplication; Cloud Storage, a Metadata Server to store metadata
and File Servers to store actual files and their copies; and Redundancy
Manager to identify the initial number of copies, and monitor the changing
level of Quality of Service (QoS). The system model was stimulated using
HDFS, one Namenode as Metadata server, and five Datanodes as File
servers. Three events were simulated: upload, update, and delete. The
upload event is when the file is first uploaded to the system. If files already
exist in the system, and have been uploaded again, the number of copies of
the files will be recalculated according to the highest level of QoS. For a
delete file event, users can delete their files, but the files will not
permanently deleted from the system if there are any other users refer to
the same files. The number of copies of files was changed dynamically
according to the changing level of QoS. The experimental results show that,
the proposed system performs well and can deal with scalability problem. 2.3
A file-deduplicated private cloud storage service with CDMI standard Liu et
al. [5] proposed a data deduplication private cloud storage system with
Cloud Data Management Interface (CDMI) standard based on the
fundamental DFS. A data deduplication scheme is implemented in the
proposed system to reduce cost and increase the storage efficiency, the
international standard interface CDMI is implemented in the proposed
system to increase the interoperability. Gluster is chosen as the basic for DFS
to implement proposed private cloud storage system. The proposed private
cloud storage system consists of five components as shown in Figure 1:
Client which communicates with Controller in Front-end node and exchange
information; Front-end node contains Apache server that redirects the
requests which are sent from CDMI request sender to enhance load balance;
Adaptor node receives CDMI request and stores files via Gluster client.
Storage nodes contains GlusterFS server, which can create different types of
volume for different purposes. The proposed system provides the following
three main functionalities: upload file, download file, and delete file. During
the file upload process, hash value of the file is calculated by Hash generator
in Client and sent to the Controller where the hash value is compared with all
file metadata stored in Database. Controller will notify Client that file doesn't
exist, if the file is not duplicated. Then, the file's metadata will be sent to
Controller to insert it into database and CDMI sender in Client will send
upload file request. After receiving the request, Load balancer will route it to
CDMI Server by the apache load balancer scheduler algorithms. Finally,
Adaptor node will store the file to Storage node and will notify Client with
uploading finished response message. But, if the file is duplicated, Controller
will notify the Client that the file exits, then Client will send this file's
metadata to Controller to insert it into database which will create an empty
metadata file with the same file name Storage node. For download
functionality, Client initiate request and send a file_path to the Controller.
The Controller will search database for the real_path of this file and respond
to Client. Then, client's CDMI sender will send CDMI download file request
and the Load balancer redirects the request to CDMI server by the apache
load balancer scheduler algorithms. Finally, Adaptor nodes get the requested
file from GlusterFS and transmit it back to Client. The delete functionality is
classified into two types according to the relationship among file_path,
real_path and whether the file is shareable. If file_path is equal to real_path
and the file is shareable, the file is moved to the other place rather than
delete directly. Controller query database by file_path to find a suitable path
for moving files. After removing the files, the Database will be updated and
Client will receive the delete response message. But if file_path is not equal
to real_path, or file path is equal to real path and the file is not shareable, the
file can be deleted directly. After receiving a delete message which contains
file_path, Controller will delete file according to the file_path directly and
Client will receive the delete response message. The experiments are carried
out to estimate the efficiency and characteristics of the proposed system.
The results show that the proposed system is efficient for data transmission,
even if there are overhead times caused by data deduplication mechanisms.
The disadvantage of the proposed system is unable to perform very well
while dealing with bigger files since the file level deduplication is adopted.
Figure 1: A file-deduplicated private cloud storage service with CDMI
standard [5] 2.4 An optimized approach for storing and accessing small files
on cloud storage HDFS is an open-source software framework developed for
reliable, scalable, distributed computing and storage [21]. It performs better
for small size file than large file. However, there are some reasons for small
file problem of native HDFS: large numbers of small files impose huge load
on NameNode of HDFS, the relationships between small files are
contemplated for data placement, and optimization mechanism is not
provided to enhance I/O performance. In order to solve the problems of small
files of HDFS, Dong et al. [13] proposed an optimized model to improve the
performance of storing and accessing small files on HDFS. The cut-off point
between large and small files is measured in HDFS by an experiment. This
solve a ‘how small is small’ problem of small file and improves I/O
performance. First, files are classified as independent files, structurally and
logically-related files. Lastly, perfecting technique is used to improve access
efficiency and considering correlations when files are stored. Performance
analysis indicated that the proposed model improves the storage and access
efficiencies of small files, compared with native HDFS and a Hadoop file
archiving facility. 2.5 RACS (Redundant Array of Cloud Storage) Abu-Libdeh et
al. [15] proposed RACS (Redundant Array of Cloud Storage) a proxy based
system that transparently stripes data across multiple cloud storage
providers. The protocol allows customers to avoid vendor lock-in, reduces the
cost of switching providers, and better tolerates provider failures. It retrieves
data from the cloud that is about to fail and moves the data to the new
cloud. It is designed as a proxy between the client application and a set of n
repositories, which are cloud storage locations ideally hosted by different
providers. RACS splits the incoming object into m data shares of equal size
where m< n are configurable parameters. It uses erasure coding to create
additional (n −1) redundant shares, for a total of n shares. Any subset of m
shares is sufficient to reconstruct the original object. This protocol is unable
to recover data lost when permanent cloud failure occurred and it does not
address data integrity and confidentiality challenges for cloud storage. 3.
DATA MANAGEMENT IN CLOUD STORAGE Several techniques for data
management in Cloud Storage have been proposed for different kind of
clouds and also for several kind of data. 3.1 Multi-Index Technique for
Metadata Management in Private Cloud Storage Subha et al. [6] proposed a
multi-index technique for metadata management of chunks stored private
cloud storage to improve the performance of duplicate detection and
retrieval of the file. Private cloud storage is built for an organization to offer a
higher level of security and control over the data. However, private cloud
storage offers limited storage space; deduplication technique is applied to
utilize the storage in an efficient manner. This technique aims to remove the
redundant data and utilize the storage space in an optimized manner. It
divided the file into several chunks and computes the hash value for those
chunks. The hash value of a chunk is known as its chunkID. A chunk index
contains chunkID and the location of the corresponding actual chunk. When a
chunk of a file enters the storage, its chunkID is compared against the chunk
index to determine whether it is a duplicate chunk. Chunk index entries of
the entire storage cluster are divided into several indices based on the types
of files. If files are mutable, content based chunking is used to divide the file
to identify the duplicates. If files are less mutable, duplicates are identified
by performing fixed size chunking method. Duplicates of immutable files are
identified by comparing the entire files as the content of the files is not
modified. The indices are distributed across different storage nodes and
experiments were carried out to select a suitable index structure to organize
the chunk entries for their quick retrieval. Workload consisting of different
types of files with various sizes are deduplicated and stored in the storage
cluster. The retrieval times for different types of files are computed with the
distributed multi-index and are compared against the time taken with the
sequential index. Results show that the distributed multi-index out performs
the sequential index. 3.2 Indexing spatial data in cloud data managements
Wei et al. [7] proposed a novel efficient multi-dimensional index structure,
the KR+-index, on cloud data managements (CDMs) for retrieving skewed
spatial data efficiently. KR+- index is used for multi-dimensional range
queries and nearest neighbor queries and used R+ to build the index
structure and designed the key for efficient accessing of data. New efficient
spatial query algorithms, range queries and k-NN queries for the proposed
KR+-index are redefined. To insert a new data point, the algorithm first loops
up the key of the node corresponding to the node to which the point belongs,
then inserts the data point into the node. Since there is an upper bound to
the number of points in the node, the insertion algorithm checks the current
size of the node to determine if a split is needed. For a deletion event, the
algorithm first loops up the key of the node corresponding to the node to
which the point belongs, and then deletes the data point from the node. The
experiments to implement the KR+-index on Cassandra using spatial data is
carried out. The results show that KR+- index better than the additional
methods under the skew data distributions. 3.3 Exploiting user metadata for
energy- aware node allocation in a cloud storage system Cloud computing
enables users to store data, host applications and perform computations
over the network as cost- effective platforms. Cloud storage systems have
lower energy costs compared to traditional HPC clusters due to the better
utilization techniques. However, the increasing energy consumption of cloud
storage systems still need to be addressed as the amount of data stored and
the number of computations and applications in cloud increase rapidly. In [8],
several energy efficient storage node allocation methods are introduced.
These methods aims to allocate a subset of cloud storage nodes for each
user in order to reduce the energy by exploiting the metadata heterogeneity
of cloud users and preserve uniform distribution of the system resources on
demand. Three different methods to map cloud storage nodes to the cloud
users were proposed in this work. The first scheme includes balancing,
sequential, random and grouping which can be changed manually once one
of these techniques is chosen for a cloud storage system. Secondly, the
dynamic Greedy Scheme is an extension of fixed scheme method by
periodically doing dynamic reallocation among one of the four
aforementioned allocation techniques depending on their costs. The last
method is a Correlation- Based Scheme which monitors user activities and
tries to allocate the same subset of storage nodes to the users who tend to
use the cloud storage system simultaneously. The proposed energy-aware
node allocation methods have the capability to work for both traditional
distributed storage system architecture and disk array. The proposed
methods reduce not only the energy consumption, but also balance load on
storage nodes depending on metrics determined by the cloud administrators.
4. DATA RETRIEVAL 4.1 Efficient data retrieval from cloud storage using data
mining techniques Pratiba et al. [2] proposed top-n multi keyword retrieval
over encrypted cloud data using vector space model and Two Round
Searchable Encryption (TRSE) scheme. The protocol was developed to
address the limitations of existing searching schemes. In the traditional
searchable encryption schemes, users are allowed to search in the encrypted
cloud data through keywords, which support only Boolean search. In Single
keyword ranked search, searching of data in the cloud results too coarse
output and the data privacy is opposed using server side ranking based on
order-preserving encryption (OPE). The aim of the proposed system are
threefold: 1) protection of sensitive information by encrypting cloud data at
the administrator side; 2) authentication to the multi users; and 3) retrieving
top-n files matching the multi keywords and preserving privacy of encrypted
data using Two round searchable encryption method. In the proposed
scheme, the data owner encrypts the file using RSA encryption and also,
encrypts the searchable index using Homomorphic encryption. The cloud
storage server computes the scores from the encrypted index stored on
cloud and returns the encrypted file with its scores to the user as the server
receives a query consisting of multi-keywords from the user. Once the scores
are received, the user decrypts the scores and selects the files and sends the
respective file IDs to the server. Server sends the encrypted file to the user
and then the user decrypts the file using the private key sent by the
administrator. There are two-round communication between the cloud server
and the data user to retrieve the top-n files; calculation is performed at the
server side and the ranking is ensured at the user end. The proposed system
avoids overloads by ranking the files at the user side, reducing bandwidth
and protects document frequency. 4.2 Efficient Indexing and Query
Processing of Model-View Sensor Data in the Cloud Guo et al. [3] proposed an
innovative interval index, hybrid model-view query processing and enhanced
intersection search algorithm for model-view sensor data in the cloud. Model-
view sensor data management stores the sensor data in the form of
modelled segments which brings the additional advantages of data
compression and value interpolation.. The interval index known as KVI-index
is a two-tier structure consisting of one lightweight and memory-resident
binary search tree and one index-model table materialized in the key- value
store. Each segment of model-view sensor data consists of specific time and
value range and combination of the time and value intervals are represented
as keys to index each segment which allows the index to directly serve the
queries proposed. The key-value denotes interval index (KVI-index) to index
time and value intervals of model-view sensor data. In KVI-indexing, the
virtual searching tree (vs-tree) resides in memory, while an index-model
table in the key-value store is devised to materialize the secondary structure
(SS) of each node in vs-tree. The combination of KVI-index, range scan and
MapReduce known as KVI–Scan–MapReduce was applied to perform efficient
query processing for model-view sensor data in key-value stores. The results
of the experiments aiming to compare model-view sensor data query
processing with conventional one over raw sensor data, showed that the
proposed solution outperforms in terms of query response time and index
updating efficiency. 4.3 Secure and efficient data retrieval over encrypted
data using attribute-based encryption in cloud storage In securing cloud
data, the data owner can encrypt its data content before outsourcing rather
than leaving it in the plaintext form. However, typical encryption would not
be suitable for cloud information retrieval systems because the Cloud Service
Provider cannot retrieve encrypted contents from a plaintext query without
the decryption keys. To provide an information retrieval function while
addressing the security and privacy issues, Koo et al. [21] propose a
searchable encryption scheme using Attribute-Based Encryption(ABE) with
scrambled attributes to handle the presence of redundant encrypted data for
the same message, poor expressiveness regularly access policy and the
concentration of computational overhead on the searching entity. In this
scheme, the data owner can specify both of fine- grained access policy and
searching keyword set which is required to retrieve its data under the access
policy. To retrieve the encrypted content in cloud storage, the retriever
makes index terms from its private key satisfying the access policy made up
of keywords associated with the contents where these index terms are only
used for data accessing in the cloud storage system. This scheme has
advantage of one- to-many content distribution without a sacrifice of the
nature of ABE. "this is only half of it as you have limit to share regarding
amount of words"
ChatGPT said:
Thanks for sharing the first half of the paper titled "A Survey on Storage
Techniques in Cloud Computing." It offers a solid overview of both classic
and evolving methods used to enhance efficiency, scalability, and security in
cloud storage.
1. Introduction
Clearly defines cloud computing and its service models (SaaS, PaaS,
IaaS).
2. Storage Techniques
Would you like me to help you summarize the second half of the paper once
you share it, or are you looking for something more specific like a
presentation, literature review, or critique?
You said:
ChatGPT said:
Thank you for sharing the full content of the second survey paper. Based
on your request and the previously discussed task, I will now combine the
entire second paper into one coherent and polished document,
improving the formatting, grammar, clarity, and logical flow. I will also retain
all technical details, citations, and tables while organizing them properly.
In cloud computing, services are accessed over the Internet from various
locations, which exposes them to several security threats. Therefore,
authenticating users before accessing cloud services is crucial. Li et al. [10]
proposed an identity-based hierarchical model for cloud computing,
accompanied by encryption and signature schemes.
Cloud service providers offer low-cost storage, but outsourcing data leads to
a loss of control, raising concerns about data integrity and confidentiality.
Zhou et al. [19] proposed a Role-Based Encryption (RBE) scheme
integrating cryptographic techniques with Role-Based Access Control (RBAC).
This framework:
Low overhead.
6.2 DEPSKY
Privacy violations.
Vendor lock-in.
6.3 NCCloud
Reviewed
Paper Description / Key Issue Benefits
Zhou et al.
[19] RBE integrated with RBAC Secure role-based data control
Yang et al.
[20] DAC-MACS scheme Efficient decryption and revocati
Bowers et al.
[14] HAIL for multi-cloud integrity Low overhead, strong integrity
Chen et al.
[17] NCCloud: cost-efficient repair Reduces repair traffic, fault-toler
Public/
Li et al. [10] Private Yes No No Yes
Public/
Zhou et al. [19] Private Yes No No Yes
Public/
Yang et al. [20] Private Yes No No Yes
Bowers et al.
[14] Public Yes Yes Yes Yes
Bessani et al.
[16] Public Yes Yes Yes Yes
Public/
Huang et al. [9] Private No Yes No Yes
8. Conclusion
You said:
have you analyzed whole paper one i gave and paper 2 first half i gave and
other second half i gave after word whole all the things and what were the
things which i needed to do and what was chat was about tell me all the
things
ChatGPT said:
Yes, I have analyzed all parts of both papers you shared across this chat.
Here's a complete summary of what you've done, what I’ve done for you,
and what the chat was about:
1. Paper 1 (Full) — You had earlier given a survey paper about cloud
storage focusing on various techniques, such as:
Fault tolerance
Deduplication
Energy-aware storage
HAIL
DEPSKY
NCCloud
Minimizing redundancy
Clean formatting
Logical flow
Polished language
Would you like me to prepare the final version as a Word or PDF file for
submission?
2/2
You said:
To create and download both a Word document and a PDF from your final
merged and formatted survey content, you'll need to use some Python code.
Since Python isn't currently available in this chat for direct file generation,
you can run the following script on your own machine using tools like Jupyter
Notebook, VS Code, or Google Colab.
python
CopyEdit
from docx import Document from docx.shared import Pt from fpdf import
FPDF # Replace this with your full merged paper content merged_text =
"""Your full, formatted content from both parts of the second survey paper
goes here... Start with: 5. AUTHENTICATION SCHEMES FOR CLOUD STORAGE
5.1 Identity-Based Authentication for Cloud Computing ... And end with: [22]
S. B. Akintoye, and K. A. Akintoye “Data Security scheme for Cloud
Computing using Signcryption Based on HyperElliptic curves"... """ # Save as
Word Document doc = Document() doc.add_heading('Survey on Cloud
Storage Techniques (Paper 2)', 0) doc.add_paragraph(merged_text)
doc.save("Cloud_Storage_Survey_Paper2.docx") # Save as PDF Document
pdf = FPDF() pdf.add_page() pdf.set_font("Arial", size=12) pdf.multi_cell(0,
10, merged_text) pdf.output("Cloud_Storage_Survey_Paper2.pdf")
print("Documents created successfully!")
2. I’ve now processed the full survey (both halves) and provided:
A merged summary.
A downloadable format solution (in Python).
Would you like me to email or help you compile this into a printable
academic report format as well?
You said:
now give me only pdf what you have done as i have to submit it to teacher
ChatGPT said:
bash
CopyEdit
python
CopyEdit
from fpdf import FPDF # Replace this with your full merged paper content
merged_text = """Your full, formatted content from both parts of the second
survey paper goes here... Start with: 5. AUTHENTICATION SCHEMES FOR
CLOUD STORAGE 5.1 Identity-Based Authentication for Cloud Computing ...
And end with: [22] S. B. Akintoye, and K. A. Akintoye “Data Security scheme
for Cloud Computing using Signcryption Based on HyperElliptic curves"... """
# Create PDF pdf = FPDF() pdf.add_page() pdf.set_font("Arial", size=12)
pdf.multi_cell(0, 10, merged_text) # Save PDF to a file
pdf.output("Cloud_Storage_Survey_Paper2.pdf") print("PDF created
successfully!")
4. Execute the script (in the directory where create_pdf.py is saved),
and it will generate Cloud_Storage_Survey_Paper2.pdf.
2. Copy and paste your full survey paper text (merged from both halves
you gave me).