What Is Cloud Computing?

The document proposes a system for multi-keyword ranked search over encrypted cloud data (MRSE) that addresses limitations of existing single-keyword and Boolean search systems. It defines MRSE and establishes privacy requirements. The proposed system uses an "inner product similarity" approach based on "coordinate matching" semantics to measure similarity between search queries and documents through binary vectors, while preserving privacy. This represents an improvement over prior work on single-keyword ranked search by enabling multi-keyword search over encrypted data without privacy breaches. The document outlines hardware and software requirements and reviews related literature on cloud computing and security concerns.

1. INTRODUCTION

1.1 WHAT IS CLOUD COMPUTING?

Cloud computing is the long-dreamed vision of computing as a utility, in which cloud
customers can remotely store their data in the cloud and enjoy on-demand, high-quality
applications and services from a shared pool of configurable computing resources. Its great
flexibility and economic savings are motivating both individuals and enterprises to outsource their
local, complex data-management systems into the cloud, especially as the volume of data they
produce, store, and use grows rapidly. To protect data privacy and resist unsolicited access in the
cloud and beyond, sensitive data, e.g., emails, personal health records, photo albums, tax
documents, and financial transactions, may have to be encrypted by data owners before being
outsourced to a commercial public cloud; this, however, renders obsolete the traditional data
utilization service based on plaintext keyword search. The trivial solution of downloading all the
data and decrypting it locally is clearly impractical because of the huge bandwidth cost in
cloud-scale systems. Moreover, aside from eliminating local storage management, storing data in
the cloud serves no purpose unless the data can be easily searched and utilized. Thus, exploring a
privacy-preserving and effective search service over encrypted cloud data is of paramount
importance. Considering the potentially large number of on-demand data users and the huge
volume of outsourced documents in the cloud, this problem is particularly challenging, as it is
extremely difficult to also meet the requirements of performance, system usability, and scalability.
On the one hand, to meet the need for effective data retrieval, the large number of documents
demands that the cloud server perform result relevance ranking instead of returning
undifferentiated results. Such a ranked search system enables data users to find the most relevant
information quickly, rather than burdensomely sorting through every match in the content
collection. Ranked search can also elegantly eliminate unnecessary network traffic by sending
back only the most relevant data, which is highly desirable in the "pay-as-you-use" cloud
paradigm. For privacy protection, however, such a ranking operation should not leak any
keyword-related information.

On the other hand, to improve search result accuracy and enhance the user searching
experience, it is also crucial for such a ranking system to support multi-keyword search, since
single-keyword search often yields far too coarse results. As common practice in today's web
search engines (e.g., Google) indicates, data users tend to provide a set of keywords instead of
only one as the indicator of their search interest to retrieve the most relevant data, and each
keyword in the search request helps narrow down the result further. "Coordinate matching", i.e.,
as many matches as possible, is an efficient principle among such multi-keyword semantics for
refining result relevance, and it has been widely used in the plaintext information retrieval (IR)
community. However, applying it in an encrypted cloud data search system remains a very
challenging task because of inherent security and privacy obstacles, including strict requirements
such as data privacy, index privacy, keyword privacy, and many others.
In the literature, searchable encryption is a helpful technique that allows a user to securely
search over encrypted data through a single keyword and retrieve the documents of interest.
However, directly applying these approaches to deploy a secure, large-scale cloud data
utilization system is not necessarily suitable, because they were developed as cryptographic
primitives and do not accommodate high service-level requirements such as system usability,
user searching experience, and easy information discovery. Although some recent designs
support Boolean keyword search in an attempt to enrich search flexibility, they are still
inadequate to provide users with acceptable result ranking functionality. Our early work was
aware of this problem and solved secure ranked search over encrypted data, but with support
for only single-keyword queries. How to design an efficient encrypted data search mechanism
that supports multi-keyword semantics without privacy breaches remains a challenging open
problem.
In this paper, for the first time, we define and solve the problem of multi-keyword ranked
search over encrypted cloud data (MRSE) while preserving strict system-wide privacy in the
cloud computing paradigm. Among various multi-keyword semantics, we choose the efficient
principle of "coordinate matching", i.e., as many matches as possible, to capture the similarity
between the search query and the data documents. Specifically, we use "inner product
similarity", i.e., the number of query keywords appearing in a document, to quantitatively
evaluate the similarity of that document to the search query under the "coordinate matching"
principle. During index construction, each document is associated with a binary vector as a
subindex, where each bit represents whether the corresponding keyword is contained in the
document. The search query is likewise described as a binary vector, where each bit indicates
whether the corresponding keyword appears in the search request, so the similarity can be
exactly measured by the inner product of the query vector with the data vector. However,
directly outsourcing the data vector or the query vector would violate index privacy or search
privacy. To meet the challenge of supporting such multi-keyword semantics without privacy
breaches, we propose a basic MRSE scheme using secure inner product computation, adapted
from a secure k-nearest neighbor (kNN) technique, and then improve it step by step to achieve
various privacy requirements under two levels of threat models.
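To make the "coordinate matching" semantics concrete, the following C# sketch (ours, with a hypothetical five-word dictionary; it shows only the plaintext similarity measure, not the secure kNN-based protection of the vectors) builds the binary vectors described above and scores a document by the inner product with the query vector:

using System;
using System.Linq;

class CoordinateMatching
{
    // Hypothetical keyword dictionary shared by index and query construction.
    static readonly string[] Dictionary = { "cloud", "privacy", "search", "rank", "encrypt" };

    // Build a binary vector: bit i is 1 iff the i-th dictionary keyword occurs in the keyword set.
    static int[] ToBinaryVector(string[] keywords)
    {
        return Dictionary.Select(w => keywords.Contains(w) ? 1 : 0).ToArray();
    }

    // Inner product of query and data vectors = number of matched keywords.
    static int InnerProduct(int[] q, int[] d)
    {
        return q.Zip(d, (a, b) => a * b).Sum();
    }

    static void Main()
    {
        int[] docVector = ToBinaryVector(new[] { "cloud", "search", "encrypt" });
        int[] queryVector = ToBinaryVector(new[] { "search", "privacy", "encrypt" });
        Console.WriteLine(InnerProduct(queryVector, docVector)); // prints 2: "search" and "encrypt" match
    }
}

In the actual MRSE scheme, both vectors are further split and multiplied by secret matrices (the secure kNN technique) so that the server can compute this inner product without learning either vector.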

Our contributions are summarized as follows:

1) For the first time, we explore the problem of multi-keyword ranked search over encrypted
cloud data, and establish a set of strict privacy requirements for such a secure cloud data
utilization system to become a reality.
2) We propose two MRSE schemes following the principle of “coordinate matching” while
meeting different privacy requirements in two levels of threat models.
3) Thorough analysis of the privacy and efficiency guarantees of the proposed schemes is
given, and experiments on a real-world dataset further show that the proposed schemes indeed
introduce low overhead in computation and communication.

1.2 Existing System:


Due to the large number of data users and documents in the cloud, it is crucial for the search
service to allow multi-keyword queries and to provide result similarity ranking to meet the need
for effective data retrieval. Existing searchable encryption focuses on single-keyword or Boolean
keyword search, and rarely differentiates the search results.
Disadvantages:
 Single-keyword search without ranking.
 Boolean keyword search without ranking.
 Single-keyword search with ranking.

1.3 Proposed System:
We define and solve the challenging problem of privacy-preserving multi-keyword ranked
search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for
such a secure cloud data utilization system to become a reality. Among various multi-keyword
semantics, we choose the efficient principle of “coordinate matching”.

Advantages:
 Multi-keyword ranked search over encrypted cloud data (MRSE).
 “Coordinate matching” by inner product similarity.

1.4 REQUIREMENT SPECIFICATIONS:

Hardware Requirements:

 System : Pentium IV, 2.4 GHz.
 Hard Disk : 40 GB.
 Floppy Drive : 1.44 MB.
 Monitor : 15" VGA colour monitor.
 Mouse : Optical mouse.
 RAM : 512 MB.
 Keyboard : 101-key keyboard.

Software Requirements:

 Operating System : Windows 7 Ultimate 32-bit.
 Coding Language : ASP.NET with C#.
 Database : SQL Server 2008.
 Tool : Visual Studio 2010.

2. LITERATURE SURVEY

A literature survey is an important step in the software development process. Before developing
the tool, it is necessary to determine the time factor, the economy, and the company's strength.
Once these things are satisfied, the next step is to determine which operating system and language
can be used for developing the tool. Once the programmers start building the tool, they need a lot
of external support. This support can be obtained from senior programmers, from books, or from
websites. The above considerations were taken into account while developing the proposed
system.

We begin with an outline survey of cloud computing:

2.1 INTRODUCTION TO CLOUD COMPUTING:

Cloud computing provides virtually unlimited infrastructure to store and execute customer data
and programs. Customers do not need to own the infrastructure they use; they merely access or
rent it, so they can forego capital expenditure and consume resources as a service, paying only
for what they use.

Benefits of Cloud Computing:


 Minimized Capital expenditure.
 Location and Device independence.
 Utilization and efficiency improvement.
 Very high Scalability.
 High Computing power.

Security, a Major Concern:
 Security concerns arise because both customer data and programs reside on the
provider's premises.
 Security is always a major concern in open system architectures.

Figure: 2.1.1 Security a major concern

Data Centre Security:

 Professional security staff utilize video surveillance, state-of-the-art intrusion detection
systems, and other electronic means.
 When an employee no longer has a business need to access the data centre, his or her
privileges to access the data centre should be immediately revoked.
 All physical and electronic access to data centres by employees should be logged and
audited routinely.
 Audit tools should let users easily determine how their data is stored, protected, and
used, and verify policy enforcement.

Data Location:
 When users use the cloud, they probably will not know exactly where their data is hosted
or in which country it will be stored.
 Data should be stored and processed only in specific jurisdictions, as defined by the user.
 The provider should also make a contractual commitment to obey local privacy
requirements on behalf of its customers.
 Data-centred policies, generated when a user provides personal or sensitive information,
should travel with that information throughout its lifetime to ensure that the information
is used only in accordance with the policy.

Figure: 2.1.2 Data Location

Backups of Data:
 Data stored in the provider's database should be redundantly stored in multiple physical
locations.
 Data generated while running a program on instances is all customer data, and therefore
the provider should not perform backups of it.
 The administrator's control over databases must also be considered.

Data Sanitization:
 Sanitization is the process of removing sensitive information from a storage device.
 What happens to data stored in a cloud computing environment once it has passed its
user's "use-by" date?
 What data sanitization practices does the cloud computing service provider propose to
implement for redundant and retiring data storage devices when these devices are
retired or taken out of service?

Network Security:
 Denial of Service: servers and networks are brought down by a huge amount of network
traffic, and users are denied access to a certain Internet-based service.
 QoS Violation: through congestion, delaying or dropping packets, or through resource
hacking.
 Man-in-the-Middle Attack: to counter it, always use SSL.
 IP Spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP
address.
 Solution: the infrastructure will not permit an instance to send traffic with a source IP or
MAC address other than its own.

How Secure is the Encryption Scheme?

 Is it possible for all of my data to be fully encrypted?
 What algorithms are used?
 Encryption accidents can make data totally unusable.
 Encryption can complicate availability solutions.
 The cloud provider should provide evidence that its encryption schemes were designed
and tested by experienced specialists.

Information Security:

 Security related to the information exchanged between different hosts or between hosts
and users.
 These issues pertain to secure communication, authentication, single sign-on, and
delegation.
 Secure communication issues include the security concerns that arise during
communication between two entities, notably confidentiality and integrity: confidentiality
means that data sent by users should be accessible only to "legitimate" receivers, and
integrity means that data received should be sent or modified only by "legitimate"
senders.
 Solution: public-key encryption, X.509 certificates, and the Secure Sockets Layer (SSL)
enable secure authentication and communication over computer networks; see the
sketch below.
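For example, in the .NET stack this report builds on, SSL-secured communication with X.509 server authentication can be reached through SslStream; a minimal client-side sketch (the host name is hypothetical):

using System;
using System.Net.Security;
using System.Net.Sockets;

class SslClientDemo
{
    static void Main()
    {
        // Connect and perform the SSL handshake; the server presents an X.509
        // certificate that the default validation logic checks against trusted roots.
        using (TcpClient client = new TcpClient("example.com", 443)) // hypothetical host
        using (SslStream ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient("example.com");
            Console.WriteLine("Negotiated cipher: " + ssl.CipherAlgorithm);
        }
    }
}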

2.2 INTRODUCTION TO .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building
and integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There is no language barrier with .NET: numerous languages are available
to the developer, including Managed C++, C#, Visual Basic, and JScript. The .NET Framework
provides the foundation for components to interact seamlessly, whether locally or remotely on
different platforms. It standardizes common data types and communication protocols so that
components created in different languages can easily interoperate.

".NET" is also the collective name given to various software components built
upon the .NET platform. These are both products (Visual Studio .NET and Windows .NET
Server, for instance) and services (such as Passport, .NET My Services, and so on).

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the "execution engine" of .NET. It provides the environment within
which programs run. Its most important features are:

 Conversion from a low-level, assembler-style language, called Intermediate Language
(IL), into code native to the platform on which it executes.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.

The following features of the .NET Framework are also worth describing:

Managed Code:

Managed code is code that targets .NET and contains certain extra information - "metadata" - to
describe itself. While both managed and unmanaged code can run in the runtime, only managed
code contains the information that allows the CLR to guarantee, for instance, safe execution and
interoperability.

Managed Data:

With managed code comes managed data. The CLR provides memory allocation and deallocation
facilities, and garbage collection. Some .NET languages use managed data by default, such as
C#, Visual Basic .NET, and JScript .NET, whereas others, namely C++, do not. Targeting the CLR
can, depending on the language you are using, impose certain constraints on the features
available. As with managed and unmanaged code, one can have both managed and unmanaged
data in .NET applications - data that does not get garbage collected but is instead looked after by
unmanaged code.

Common Type System:

The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that
all classes are compatible with each other by describing types in a common way. The CTS
defines how types work within the runtime, which enables types in one language to interoperate
with types in another language, including cross-language exception handling. As well as ensuring
that types are used only in appropriate ways, the runtime also ensures that code does not attempt
to access memory that has not been allocated to it.

Common Language Specification:

The CLR provides built-in support for language interoperability. To ensure that you can develop
managed code that can be fully used by developers using any programming language, a set of
language features and rules for using them called the Common Language Specification (CLS)
has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.

The Class Library:

.NET provides a single-rooted hierarchy of classes containing over 7,000 types. The root of the
namespace is called System; it contains basic types like Byte, Double, Boolean, and String, as
well as Object. All objects derive from System.Object. As well as objects, there are value types.
Value types can be allocated on the stack, which can provide useful flexibility. There are also
efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O,
threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct
areas of functionality, with dependencies between the namespaces kept to a minimum.

Languages Supported by .NET:

The multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of applications and XML
Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual
Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to
the family.

Visual Basic .NET has been updated to include many new and improved language features that
make it a powerful object-oriented programming language. These features include inheritance,
interfaces, and overloading, among others. Visual Basic .NET also now supports structured
exception handling, custom attributes, and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can
use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating existing C++
applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid
Application Development”. Unlike other languages, its specification is just the grammar of the
language. It has no standard library of its own, and instead has been designed with the intention
of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the
world of XML Web Services and dramatically improves the interoperability of Java-language
programs with existing software written in a variety of other programming languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications
to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET
environment. Visual Perl includes support for ActiveState's Perl Dev Kit.

Other languages for which .NET compilers are available include

 FORTRAN
 COBOL
 Eiffel

Figure: 2.2.1 .NET Framework (layered architecture: ASP.NET, Windows Forms, and XML Web services at the top, over the Base Class Libraries, the Common Language Runtime, and the Operating System)

C#.NET is also compliant with the CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by the
CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET
Framework; it manages the execution of the code and also makes the development process
easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The use of the
CLS ensures complete interoperability among applications, regardless of the languages used
to create them.

Constructors and Destructors:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In
other words, destructors are used to release the resources allocated to the object. In C#, a
destructor (finalizer) is declared with the ~ClassName() syntax, which the compiler maps onto
the runtime's Finalize mechanism.
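A minimal illustration of the syntax (the class is hypothetical; real resource-holding code would also implement IDisposable for deterministic cleanup):

class Connection
{
    // Constructor: runs when the object is created.
    public Connection()
    {
        System.Console.WriteLine("Connection opened");
    }

    // Finalizer (destructor): runs when the garbage collector reclaims the object.
    ~Connection()
    {
        System.Console.WriteLine("Connection closed");
    }
}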

Garbage Collection:

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated
resources, such as objects and variables. In addition, the .NET Framework automatically releases
memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.

Overloading:

Overloading is another feature of C#. Overloading enables us to define multiple methods with
the same name, where each has a different set of parameters. Besides overloading ordinary
methods, we can overload constructors and properties in a class, as the example below shows.
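A short illustration (class and method names are hypothetical):

class Calculator
{
    private int total;

    // Three overloads of Add: same name, different parameter lists.
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }

    // Overloaded constructors.
    public Calculator() { total = 0; }
    public Calculator(int initialValue) { total = initialValue; }
}

The compiler selects the overload whose parameter types match the call site: Add(1, 2) binds to the int version, while Add(1.5, 2.5) binds to the double version.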

Multithreading:
C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time an application
takes to respond to user interaction.
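A minimal sketch using System.Threading.Thread:

using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        // Start a background task while the main thread continues its own work.
        Thread worker = new Thread(() => Console.WriteLine("background task running"));
        worker.Start();

        Console.WriteLine("main thread stays responsive");
        worker.Join(); // wait for the background thread before exiting
    }
}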

Structured Exception Handling:

C#.NET supports structured exception handling, which enables us to detect and handle errors at
runtime. In C#, we use try…catch…finally statements to create exception handlers. Using
try…catch…finally, we can create robust and effective exception handlers to improve the
robustness of our application.
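A short sketch of the pattern (the file name is hypothetical):

using System;
using System.IO;

class ExceptionDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("input.txt"); // hypothetical file name
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            // Handle the specific error detected at runtime.
            Console.WriteLine("File missing: " + ex.FileName);
        }
        finally
        {
            // Runs whether or not an exception was thrown.
            if (reader != null) reader.Close();
        }
    }
}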

The .NET Framework:

The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet.

Objectives of the .NET Framework:

1. To provide a consistent object-oriented programming environment, whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment and versioning
conflicts and guarantees safe execution of code.

3. To eliminate the performance problems of scripted or interpreted environments.

There are different types of applications, such as Windows-based applications and Web-based
applications.

Features of SQL-SERVER:

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000
Analysis Services. The term OLAP Services has been replaced with the term Analysis Services.
Analysis Services also includes a new data mining component. The Repository component
available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data
Services. References to the component now use the term Meta Data Services. The term
repository is used only in reference to the repository engine within Meta Data Services.

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

Table: A table is a collection of data about a specific topic.

Views of a Table:

We can work with a table in two views:

1. Design View

2. Datasheet View

1. Design View: To build or modify the structure of a table, we work in the table's design view.
We can specify what kind of data each field will hold.

2. Datasheet View: To add, edit, or analyse the data itself, we work in the table's datasheet view.

Query:
A query is a question asked of the data. Access gathers the data that answers the question from
one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a
snapshot (if it cannot be edited). Each time we run a query, we get the latest information in the
dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on
it, such as deleting or updating.
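Since this project pairs C# with SQL Server 2008, the same question-to-the-data idea would normally be expressed as an ADO.NET parameterized query rather than an Access dynaset; a minimal sketch (connection string and table are hypothetical):

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and schema; adjust to the actual database.
        string connStr = "Server=.;Database=CloudSearch;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT FileName FROM Files WHERE Category = @cat", conn))
        {
            cmd.Parameters.AddWithValue("@cat", "reports");
            conn.Open();
            // The reader is a forward-only, read-only result set (snapshot-like).
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine(rdr.GetString(0));
            }
        }
    }
}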

3. SYSTEM ANALYSIS

3.1 System Design:

Feasibility Study:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis, the feasibility
study of the proposed system is carried out. This is to ensure that the proposed system is not a
burden to the company. For feasibility analysis, some understanding of the major requirements
of the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

Economical Feasibility:

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited, and the expenditures must be justified. The developed system is well
within the budget, which was achieved because most of the technologies used are freely
available; only the customized products had to be purchased.

Technical Feasibility:
This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, as only minimal or no changes are required for implementing
this system.

Social Feasibility:

This aspect of the study checks the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened by
the system, but must instead accept it as a necessity. The level of acceptance by the users
depends solely on the methods employed to educate users about the system and to make them
familiar with it. Their confidence must be raised so that they can also offer constructive
criticism, which is welcomed, as they are the final users of the system.

3.2 INPUT DESIGN AND OUTPUT DESIGN:

INPUT DESIGN:

The input design is the link between the information system and the user. It comprises
developing the specification and procedures for data preparation, that is, the steps necessary to
put transaction data into a usable form for processing. This can be achieved by instructing the
computer to read data from a written or printed document, or by having people key the data
directly into the system. The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The
input is designed to provide security and ease of use while retaining privacy. Input design
considered the following points:

 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations, and steps to follow when errors occur.

OBJECTIVES:

1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and to
show the management the correct direction for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of
data. The goal of designing input is to make data entry easier and error-free. The data entry
screen is designed so that all data manipulations can be performed. It also provides
record-viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help of screens.
Appropriate messages are provided as needed, so that the user is never left confused. Thus the
objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN:

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users and
to other systems through outputs. In output design, it is determined how the information is to be
displayed for immediate need, as well as the hard-copy output. It is the most important and
direct source of information for the user. Efficient and intelligent output design improves the
system's relationship with the user and helps in decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the
right output must be developed while ensuring that each output element is designed so that
people will find the system easy and effective to use. When analysts design computer output,
they should identify the specific output needed to meet the requirements.

2. Select methods for presenting information.

3. Document, report, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following
objectives:

 Convey information about past activities, current status, or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

3.3 Functional and Non-Functional Requirements Specification:


Functional Requirements:
The system is required to perform the following functions:
 Display all the information about the Web application being developed, along with the
instructions the user may want to review before setting up the system for configuring
global time.
 Sign in to the application with the log key before running it.
 Ability to upload files to the database.
 Ability to display data.
 Ability to show the priority of an uploaded file.
 Ability to delete an uploaded file.

Non-Functional Requirements:
 A Web application framework enabling reuse and replacement of components.
 The Web application works on any browser.
 SQL Server 2008 for structured data storage.
 The system is expected to run on supported operating systems.
 The system should provide secured access to the web server.
 The application helps us find data easily, according to the priorities given to the files
being searched.

3.4 Software Design:
3.4.1 Data Flow Diagram:

 The DFD is also called a bubble chart. It is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, the various processing
carried out on that data, and the output data generated by the system.
 The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components: the system processes, the data used by those processes,
the external entities that interact with the system, and the information flows in the
system.
 A DFD shows how information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations applied as data moves from input to output.
 A DFD may be used to represent a system at any level of abstraction, and may be
partitioned into levels that represent increasing information flow and functional detail.

Figure: 3.4.1 Dataflow Diagram (user flow: search, select a file from the list, register user details, enter the activation code, download the file; admin flow: log in, change password, view the file list, downloading list, and file requests, upload files; all operations are backed by the database)

3.5 UML DIAGRAMS:

What is UML?

UML stands for Unified Modeling Language. UML is a standardized, general-purpose
modeling language in the field of object-oriented software engineering. The standard is managed,
and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a
meta-model and a notation. In the future, some form of method or process may also be added to,
or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of a software system, as well as for business modeling
and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the
software development process. The UML uses mostly graphical notations to express the design
of software projects.
Goals of UML:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns,
and components.
7. Integrate best practices.

Diagrams of UML:
3.5.1 Usecase Diagram:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show which system functions are performed for which actor. The roles of the
actors in the system can also be depicted.

Figure: 3.5.1 Usecase Diagram (user: search file, view result list, register user details, get activation code by e-mail, download file; admin: log in, change password, upload file, view download details, view file requests, change log key)

3.5.2 Sequence Diagram:

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. Sequence diagrams are
sometimes called event diagrams, event scenarios, or timing diagrams.

Figure: 3.5.2 Sequence Diagram (user: search file, select file from list, register, enter activation code, download files; admin: log in, change password, view downloading list, view file requests; files are stored in and served from the database)

3.5.3 Class Diagram:

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It shows which
class holds which information.

Figure: 3.5.3 Class Diagram (classes: UserRegistration with UserID, UserName, MailID, PhoneNumber and Registration(), Download(); FileUpload with SerialNo, FileName, Path, File, Category, EncryptedFile and Upload(), Download(), ZipConversion(); Encryption with FileName, FileType, FilePath, EncryptKey, File and Encrypt(), Decrypt(); Admin with UserName, Password, LogKey and ChangePassword(), ChangeLogKey(), MailSending())

3.5.4 Activity Diagram:

Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step process.

Figure: 3.5.4 Activity Diagram (user flow: search files, view result, register user details, get activation code from mail, download file; admin flow: log in, change password, upload file, view download details, view file requests, change log key)

4. IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a
working system. It can thus be considered the most critical stage in achieving a successful
new system and in giving the user confidence that the new system will work and be effective.

The implementation stage involves careful planning, investigation of the existing system
and its constraints on implementation, design of methods to achieve changeover, and
evaluation of changeover methods.

Interaction Model:

1. Client-driven interventions:

Client-driven interventions are the means to protect customers from unreliable services.
For example, services that miss deadlines or do not respond at all for a long time are
replaced by other, more reliable services in future discovery operations.
2. Provider-driven interventions:

Provider-driven interventions are desired and initiated by the service owners to shield
themselves from malicious clients. For instance, requests from clients performing a denial-of-service
attack by sending multiple requests in relatively short intervals are blocked
(instead of processed) by the service.

5. MODULE DESCRIPTION

MODULES:
1. Encrypt Module
2. Client Module
3. Multi-keyword Module
4. Admin Module

Encrypt Module:

This module helps the server encrypt the document using the RSA algorithm, convert the
encrypted document into a Zip file tied to an activation code, and then send the activation code
to the user for the download.
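A minimal sketch of the RSA step using the .NET RSACryptoServiceProvider (the payload is hypothetical, and the report does not give its exact code; note that raw RSA can encrypt only small payloads, so a full document would normally be encrypted with a symmetric key that RSA then protects):

using System;
using System.Security.Cryptography;
using System.Text;

class RsaDemo
{
    static void Main()
    {
        using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(2048))
        {
            byte[] plain = Encoding.UTF8.GetBytes("activation-code-1234"); // hypothetical payload
            byte[] cipher = rsa.Encrypt(plain, true);     // encrypt with the public key (OAEP)
            byte[] decrypted = rsa.Decrypt(cipher, true); // decrypt with the private key
            Console.WriteLine(Encoding.UTF8.GetString(decrypted));
        }
    }
}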

Client Module:

This module helps the client search for files using the multiple-keyword concept and obtain an
accurate result list based on the query. The user selects the required file, registers his or her
details, and receives an activation code by mail from the "customerservice404" e-mail account;
after entering the activation code, the user can download the Zip file and extract it.

Multi-keyword Module:

This module helps the user get accurate results based on the multiple-keyword concept. The
user enters a multi-word query; the server splits that query into single words and searches for
each word in the database. Finally, it displays the list of matching files, and the user picks the
desired file from that list.
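The plaintext ranking behaviour described above can be sketched as follows (the file list and keywords are hypothetical; the deployed system performs the matching over the encrypted index rather than over raw strings):

using System;
using System.Collections.Generic;
using System.Linq;

class MultiKeywordSearch
{
    static void Main()
    {
        // Hypothetical stand-in for the keyword index of uploaded files.
        Dictionary<string, string[]> files = new Dictionary<string, string[]>
        {
            { "report.docx", new[] { "cloud", "privacy", "rank" } },
            { "notes.txt",   new[] { "cloud", "search" } }
        };

        // Split the multi-word query into single keywords.
        string[] query = "cloud privacy search".Split(' ');

        // Rank files by how many query keywords they match (coordinate matching).
        var ranked = files
            .Select(f => new { Name = f.Key, Score = f.Value.Count(k => query.Contains(k)) })
            .Where(r => r.Score > 0)
            .OrderByDescending(r => r.Score);

        foreach (var r in ranked)
            Console.WriteLine(r.Name + " : " + r.Score + " keyword(s) matched");
    }
}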

Admin Module:

This module helps the server view details and upload files securely. The admin uses the log key
at login time and changes the log key before logging out. After logging in, the admin can change
the password, view users' downloading details, and see the count of file requests on a chart. The
admin uploads files after converting them to the Zip file format.

6. SYSTEM TESTING

6.1 Introduction To Testing:

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each test type addresses a specific testing requirement.

Software Testing Strategies:

Unit Testing:

Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit, before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results; see the MSTest sketch below.
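As an illustration in this project's stack, a Visual Studio MSTest case for a hypothetical keyword-splitting helper might look like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QuerySplitterTests
{
    // Hypothetical helper under test: splits a multi-word query into keywords.
    private static string[] Split(string query)
    {
        return query.Split(' ');
    }

    [TestMethod]
    public void Split_MultiWordQuery_ReturnsEachKeyword()
    {
        string[] words = Split("cloud privacy search");
        Assert.AreEqual(3, words.Length);
        Assert.AreEqual("privacy", words[1]);
    }
}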

Integration Testing:

Integration tests are designed to test integrated software components to determine whether they
actually run as one program. Testing is event-driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

Functional Testing:

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : Identified classes of valid input must be accepted.

Invalid Input : Identified classes of invalid input must be rejected.

Functions : Identified functions must be exercised.

Output : Identified classes of application outputs must be exercised.

Systems / Procedures : Interfacing systems or procedures must be invoked.

The organization and preparation of functional tests is focused on requirements, key functions,
or special test cases. In addition, systematic coverage pertaining to identifying business process
flows, data fields, predefined processes, and successive processes must be considered for testing.
Before functional testing is complete, additional tests are identified and the effective value of
current tests is determined.

System Testing:

System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

White Box Testing:

White-box testing is testing in which the software tester has knowledge of the inner workings,
structure, and language of the software, or at least its purpose. It is used to test areas that cannot
be reached from a black-box level.

Black Box Testing:

Black-box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black-box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
"see" into it. The test provides inputs and responds to outputs without considering how the
software works.

Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach:
Field testing will be performed manually and functional tests will be written in detail.

Test objectives:

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be Tested:

 Verify that the entries are of the correct format


 No duplicate entries should be allowed.
 All links should take the user to the correct page.

Integration Testing:
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or, one step up, software applications at the company level,
interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Acceptance Testing:
User acceptance testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

6.2 TESTING STRATEGIES:
Test Cases:
Screen: Registration Screen
S.No | Test Case | Condition being checked | Expected Output | Actual Output | Pass/Fail
1 | Registration screen should be active/open | Cursor will be at user id | Cursor should be at user id | Cursor points at user id | Pass
2 | Registration screen should be active/open | Cursor will be at password | Cursor should be at password | Cursor points at password | Pass
3 | Registration screen should be active/open | Cursor will be at confirm password | Cursor should be at confirm password | Cursor points at confirm password | Pass
4 | Registration screen should be active/open | Cursor will be at email id | Cursor should be at email id | Cursor points at email id | Pass
5 | Valid/Invalid username | Validating username | User name should be valid | User name is valid | Pass
6 | Valid/Invalid e-mail id | Validating e-mail id | e-mail id should be valid | e-mail id is valid | Pass
7 | Valid/Invalid password | Validating password | Password should be valid | Password is valid | Pass
8 | Valid/Invalid confirm password | Validating confirm password | Confirm password should be valid | Confirm password is valid | Pass
9 | Valid/Invalid entire data | Registration will be enabled | Registration should be enabled | Registration status enabled | Pass

Screen: Login Screen verifying Valid username

S.No | Test Case | Condition being checked | Expected Output | Actual Output | Pass/Fail
1 | Login screen should be active/open | Cursor will be at user id | Cursor should be at user id | Cursor points at user id | Pass
2 | Login screen should be active/open | Cursor will be at password | Cursor should be at password | Cursor points at password | Pass
3 | Valid/Invalid e-mail id | Validating e-mail id | e-mail id should be valid | e-mail id is valid | Pass
4 | Valid/Invalid password | Validating password | Password should be valid | Password is valid | Pass
5 | Valid/Invalid e-mail id, password | Login will be enabled | Login should be enabled | Login status enabled | Pass

Screen: Login Screen verifying Valid Password

S.No | Test Case | Condition being checked | Expected Output | Actual Output | Pass/Fail
1 | Name | Only characters (a-z, A-Z) | Characters (a-z, A-Z) | Characters (a-z, A-Z) | Pass
2 | e-mail id | Validating e-mail id | e-mail id should be valid | e-mail id is valid | Pass
3 | Password | Alphanumeric, special characters | Should be alphanumeric, special characters | Alphanumeric, special characters | Pass
4 | Re-password | Validating password | Password should be valid | Password is valid | Pass
5 | Submit button | On clicking, the details given in the register form | Validating the details | Navigates to the other page | Pass
6 | Reset button | On clicking the button | Should clear the data | Clears the data | Pass

7. SCREENSHOTS

Figure: 7.1 Starting Page
Figure: 7.2 Home Page
Figure: 7.3 Changing Password
Figure: 7.4 Password Successfully Changed
Figure: 7.5 Uploaded the Files
Figure: 7.6 Displaying the Uploaded Files
Figure: 7.7 Display of File Usage
Figure: 7.8 Displaying the Downloaded Files
Figure: 7.9 Changing the Username
Figure: 7.10 Searching the Keywords
Figure: 7.11 Displaying the Related Search Files
Figure: 7.12 Registration Form
Figure: 7.13 Sending Activation Code to Gmail Account
Figure: 7.14 Saving the Downloaded File
Figure: 7.15 File Usage Information
8. CONCLUSION

In this paper, for the first time, we define and solve the problem of multi-keyword
ranked search over encrypted cloud data, and establish a variety of privacy requirements. Among
various multi-keyword semantics, we choose the efficient principle of "coordinate matching",
i.e., as many matches as possible, to effectively capture the similarity between query keywords
and outsourced documents, and use "inner product similarity" to quantitatively formalize such a
principle for similarity measurement. To meet the challenge of supporting multi-keyword
semantics without privacy breaches, we first propose a basic MRSE scheme using secure inner
product computation, and then improve it significantly to achieve the privacy requirements under
two levels of threat models. Thorough analysis of the privacy and efficiency guarantees of the
proposed schemes is given, and experiments on a real-world dataset show that our proposed
schemes introduce low overhead in both computation and communication. As future work, we
will explore supporting other multi-keyword semantics (e.g., weighted queries) over encrypted
data, integrity checking of the rank order in search results, and privacy guarantees in stronger
threat models.

BIBLIOGRAPHY

References:

1. The Enron e-mail dataset, http://www.cs.cmu.edu/~enron/.
2. Imar Spaanjaars, Beginning ASP.NET 4: in C# and VB.
3. Jesse Liberty, Dan Maharry, and Dan Hurwitz, Programming ASP.NET 3.5.
4. S. Kamara and K. Lauter, "Cryptographic cloud storage," in RLCPS, January 2010,
LNCS, Springer, Heidelberg.
5. D. Song, D. Wagner, and A. Perrig, "Practical techniques for searches on encrypted
data," in Proc. of S&P, 2000.
6. Matthew MacDonald, Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional,
Second Edition.
7. Y.-C. Chang and M. Mitzenmacher, "Privacy preserving keyword searches on remote
encrypted data," in Proc. of ACNS, 2005.
8. D. Boneh, G. D. Crescenzo, R. Ostrovsky, and G. Persiano, "Public key encryption with
keyword search," in Proc. of EUROCRYPT, 2004.
9. J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, "Fuzzy keyword search over
encrypted data in cloud computing," in Proc. of IEEE INFOCOM'10 Mini-Conference,
San Diego, CA, USA, March 2010.

DOTNET:

http://www.asp.net/
http://www.dotnetspider.com/
http://www.dotnetspark.com/
