What Is Cloud Computing?
INTRODUCTION
Cloud computing is the long-dreamed vision of computing as a utility, in which cloud customers can remotely store their data in the cloud and enjoy on-demand, high-quality applications and services drawn from a shared pool of configurable computing resources. Its great flexibility and economic savings are motivating both individuals and enterprises to outsource their local, complex data management systems to the cloud, especially as the volume of data they produce, store, and use grows rapidly. To protect data privacy and combat unsolicited access in the cloud and beyond, sensitive data (e.g., emails, personal health records, photo albums, tax documents, and financial transactions) may have to be encrypted by data owners before being outsourced to a commercial public cloud; this, however, renders obsolete the traditional data utilization service based on plaintext keyword search. The trivial solution of downloading all the data and decrypting it locally is clearly impractical, due to the huge bandwidth cost in cloud-scale systems. Moreover, aside from eliminating local storage management, storing data in the cloud serves little purpose unless it can be easily searched and utilized. Exploring privacy-preserving, effective search services over encrypted cloud data is therefore of paramount importance. Considering the potentially large number of on-demand data users and the huge volume of outsourced data documents in the cloud, this problem is particularly challenging, as it is extremely difficult to simultaneously meet the requirements of performance, system usability, and scalability.
On the one hand, to meet the need for effective data retrieval, the large number of documents demands that the cloud server perform result relevance ranking instead of returning undifferentiated results. Such a ranked search system enables data users to find the most relevant information quickly, rather than burdensomely sorting through every match in the content collection. Ranked search can also elegantly reduce unnecessary network traffic by sending back only the most relevant data, which is highly desirable in the "pay-as-you-use" cloud paradigm. For privacy protection, however, such a ranking operation should not leak any keyword-related information.
On the other hand, to improve search result accuracy and enhance the user searching experience, it is also crucial for such a ranking system to support multi-keyword search, as single-keyword search often yields far too coarse results. As is common practice with today's web search engines (e.g., Google search), data users tend to provide a set of keywords instead of only one as the indicator of their search interest to retrieve the most relevant data, and each keyword in the search request helps narrow down the search result further. "Coordinate matching", i.e., as many matches as possible, is an efficient principle among such multi-keyword semantics for refining result relevance, and it has been widely used in the plaintext information retrieval (IR) community. However, applying it in an encrypted cloud data search system remains a very challenging task because of inherent security and privacy obstacles, including various strict requirements such as data privacy, index privacy, and keyword privacy, among others.
In the literature, searchable encryption is a helpful technique that treats encrypted data as documents and allows a user to securely search over it with a single keyword and retrieve documents of interest. However, directly applying these approaches to deploy a secure, large-scale cloud data utilization system is not necessarily suitable, as they were developed as cryptographic primitives and cannot accommodate high service-level requirements such as system usability, user searching experience, and easy information discovery. Although some recent designs have been proposed to support Boolean keyword search in an attempt to enrich search flexibility, they are still not adequate to provide users with acceptable result ranking functionality. Our early work recognized this problem and solved secure ranked search over encrypted data, but with support for only single-keyword queries. How to design an efficient encrypted data search mechanism that supports multi-keyword semantics without privacy breaches remains a challenging open problem.
In this paper, for the first time, we define and solve the problem of multi-keyword ranked search over encrypted cloud data (MRSE) while preserving strict system-wise privacy in the cloud computing paradigm. Among various multi-keyword semantics, we choose the efficient principle of "coordinate matching", i.e., as many matches as possible, to capture the similarity between the search query and the data documents. Specifically, we use "inner product similarity", i.e., the number of query keywords appearing in a document, to quantitatively evaluate the similarity of that document to the search query under the "coordinate matching" principle. During index construction, each document is associated with a binary vector as a subindex, where each bit represents whether the corresponding keyword is contained in the document. The search query is likewise described as a binary vector, where each bit indicates whether the corresponding keyword appears in the search request, so the similarity can be exactly measured by the inner product of the query vector and the data vector. However, directly outsourcing the data vector or the query vector would violate index privacy or search privacy. To meet the challenge of supporting such multi-keyword semantics without privacy breaches, we propose a basic MRSE scheme using secure inner product computation, adapted from a secure k-nearest neighbor (kNN) technique, and then improve it step by step to achieve various privacy requirements under two levels of threat models.
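To make the principle concrete, the following minimal C# sketch shows how coordinate matching reduces to an inner product of binary vectors. It is purely illustrative: it omits the kNN-based vector encryption step entirely, and the keyword dictionary and document contents are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CoordinateMatching
{
    // Fixed keyword dictionary shared by index construction and query generation (hypothetical).
    static readonly string[] Keywords = { "cloud", "privacy", "ranking", "keyword", "encryption" };

    // Build a binary subindex: bit i = 1 iff Keywords[i] occurs in the document.
    static int[] BuildVector(IEnumerable<string> words)
    {
        var set = new HashSet<string>(words, StringComparer.OrdinalIgnoreCase);
        return Keywords.Select(k => set.Contains(k) ? 1 : 0).ToArray();
    }

    // Inner product = number of query keywords the document contains.
    static int InnerProduct(int[] a, int[] b) => a.Zip(b, (x, y) => x * y).Sum();

    static void Main()
    {
        var doc1 = BuildVector(new[] { "cloud", "privacy", "encryption" });
        var doc2 = BuildVector(new[] { "ranking" });
        var query = BuildVector(new[] { "privacy", "encryption" });

        Console.WriteLine(InnerProduct(query, doc1)); // 2 -> more relevant, ranked first
        Console.WriteLine(InnerProduct(query, doc2)); // 0 -> irrelevant
    }
}
```

In the actual scheme, both vectors are split and multiplied by secret matrices before outsourcing, so the server can compute this score without learning the underlying bits.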
Our contributions are summarized as follows:
1) For the first time, we explore the problem of multi-keyword ranked search over encrypted cloud data, and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality.
2) We propose two MRSE schemes following the principle of "coordinate matching" while meeting different privacy requirements under two levels of threat models.
3) We give a thorough analysis of the privacy and efficiency guarantees of the proposed schemes, and experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead in computation and communication.
1.3 Proposed System:
We define and solve the challenging problem of privacy-preserving multi-keyword ranked
search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for
such a secure cloud data utilization system to become a reality. Among various multi-keyword
semantics, we choose the efficient principle of “coordinate matching”.
Advantages:
Multi-keyword ranked search over encrypted cloud data (MRSE).
“Coordinate matching” by inner product similarity.
Hardware Requirements:
2. LITERATURE SURVEY
Literature survey is the most important step in software development process. Before developing
the tool it is necessary to determine the time factor, economy company strength. Once these things
are satisfied, ten next steps is to determine which operating system and language can be used for
developing the tool. Once the programmers start building the tool the programmers need lot of
external support. This support can be obtained from senior programmers, from book or from
websites. Before building the system the above consideration r taken into account for developing
the proposed system.
Cloud computing provides unlimited infrastructure to store and execute customer data and programs. Customers do not need to own the infrastructure; they are merely accessing or renting it. They can forego capital expenditure and consume resources as a service, paying only for what they use.
Security, a Major Concern:
Security concerns arise because both customer data and programs reside on provider premises.
Security is always a major concern in open system architectures.
Data Location:
When using the cloud, users probably will not know exactly where their data is hosted or in which country it is stored.
Data should be stored and processed only in specific jurisdictions, as defined by the user.
The provider should also make a contractual commitment to obey local privacy requirements on behalf of their customers.
Data-centered policies, generated when a user provides personal or sensitive information, should travel with that information throughout its lifetime to ensure that the information is used only in accordance with the policy.
Backups of Data:
Data stored in the provider's database should be redundantly stored in multiple physical locations.
Data generated while running programs on instances is entirely customer data, and the provider should therefore not perform backups of it.
Administrator control over databases must also be considered.
Data Sanitization:
Sanitization is the process of removing sensitive information from a storage device.
What happens to data stored in a cloud computing environment once it has passed its user's "use by" date?
What data sanitization practices does the cloud computing service provider propose to implement for redundant and retiring data storage devices when these devices are retired or taken out of service?
Network Security:
Denial of Service: servers and networks are brought down by a huge amount of network traffic, and users are denied access to a certain Internet-based service.
QoS Violation: through congestion, delaying or dropping packets, or through resource hacking.
Man-in-the-Middle Attack: to mitigate it, always use SSL.
IP Spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP address.
Solution: the infrastructure should not permit an instance to send traffic with a source IP or MAC address other than its own.
Information Security:
Security related to the information exchanged between different hosts or between hosts and users.
These issues pertain to secure communication, authentication, single sign-on, and delegation.
Secure communication issues include those security concerns that arise during communication between two entities.
These include confidentiality and integrity issues. Confidentiality means that all data sent by users should be accessible only to "legitimate" receivers, and integrity means that all data received should be sent or modified only by "legitimate" senders.
Solution: public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) enable secure authentication and communication over computer networks.
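As a minimal sketch of the SSL/TLS side of this solution, the C# fragment below opens an authenticated, encrypted channel with SslStream. The host name is hypothetical; the server must present a valid X.509 certificate for the handshake to succeed.

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Text;

class SecureClient
{
    static void Main()
    {
        // Hypothetical host; certificate validation happens during the handshake.
        using (var client = new TcpClient("example.com", 443))
        using (var ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient("example.com"); // authenticates the server via X.509

            // Everything written after the handshake is confidentiality- and integrity-protected.
            byte[] request = Encoding.ASCII.GetBytes("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n");
            ssl.Write(request, 0, request.Length);
        }
    }
}
```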
2.2 INTRODUCTION TO .NET:
".NET" is also the collective name given to various software components built upon the .NET platform. These are both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).
The CLR is described as the "execution engine" of .NET. It provides the environment within which programs run. The following features of the .NET Framework are worth describing:
Managed Code:
Managed code is code that targets .NET and contains certain extra information ("metadata") to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data:
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic .NET, and JScript .NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that does not get garbage collected but is instead looked after by unmanaged code.
The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are used only in appropriate ways, the runtime also ensures that code does not attempt to access memory that has not been allocated to it.
The CLR provides built-in support for language interoperability. To ensure that you can develop
managed code that can be fully used by developers using any programming language, a set of
language features and rules for using them called the Common Language Specification (CLS)
has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.
The Class Library:
.NET provides a single-rooted hierarchy of classes containing over 7,000 types. The root namespace is called System; it contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O,
threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct
areas of functionality, with dependencies between the namespaces kept to a minimum.
The multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of applications and XML
Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual
Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to
the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can
use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating existing C++
applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid
Application Development”. Unlike other languages, its specification is just the grammar of the
language. It has no standard library of its own, and instead has been designed with the intention
of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the
world of XML Web Services and dramatically improves the interoperability of Java-language
programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Compilers for further languages, including FORTRAN, COBOL, and Eiffel, are also available for .NET.
[Figure: the .NET Framework stack, layered above the operating system.]
C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of code and also makes the development process easier by providing services.
Garbage Collection:
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated
resources, such as objects and variables. In addition, the .NET Framework automatically releases
memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.
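A minimal illustrative sketch of this behavior is shown below. The Buffer class and explicit GC.Collect() call are for demonstration only; forcing a collection is not recommended in production code.

```csharp
using System;

class GcDemo
{
    class Buffer
    {
        private readonly byte[] data = new byte[1024 * 1024]; // 1 MB allocation
        ~Buffer() { Console.WriteLine("Buffer finalized, memory reclaimed."); }
    }

    static void Allocate()
    {
        new Buffer(); // no reference survives, so the object is eligible for collection
    }

    static void Main()
    {
        Allocate();
        GC.Collect();                  // demonstration only; normally the runtime decides
        GC.WaitForPendingFinalizers(); // let the finalizer run before the program exits
    }
}
```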
Overloading:
Overloading is another feature in C#. Overloading enables us to define multiple procedures with
the same name, where each procedure has a different set of arguments. Besides using
overloading for procedures, we can use it for constructors and properties in a class.
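A minimal sketch of method overloading (class and method names are illustrative): the compiler picks the overload whose parameter list matches the arguments.

```csharp
using System;

class Calculator
{
    // Same name, different parameter lists.
    public int Add(int a, int b) => a + b;
    public double Add(double a, double b) => a + b;
    public int Add(int a, int b, int c) => a + b + c;
}

class Program
{
    static void Main()
    {
        var calc = new Calculator();
        Console.WriteLine(calc.Add(2, 3));     // int overload -> 5
        Console.WriteLine(calc.Add(2.5, 3.5)); // double overload -> 6
        Console.WriteLine(calc.Add(1, 2, 3));  // three-argument overload -> 6
    }
}
```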
Multithreading:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
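A minimal sketch of two tasks running concurrently (the task names are illustrative):

```csharp
using System;
using System.Threading;

class ThreadDemo
{
    static void Work(object name)
    {
        Console.WriteLine($"{name} running on thread {Thread.CurrentThread.ManagedThreadId}");
    }

    static void Main()
    {
        // Two pieces of work proceed concurrently instead of one after the other.
        var t1 = new Thread(Work);
        var t2 = new Thread(Work);
        t1.Start("indexing");
        t2.Start("searching");
        t1.Join(); // wait for both threads before exiting
        t2.Join();
    }
}
```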
C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use Try...Catch...Finally statements to create exception handlers. Using Try...Catch...Finally statements, we can create robust and effective exception handlers to improve the robustness of our applications.
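A minimal sketch of the Try...Catch...Finally pattern (the file name is hypothetical):

```csharp
using System;
using System.IO;

class ExceptionDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt"); // hypothetical file
            Console.WriteLine(reader.ReadToEnd());
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine($"Handled at runtime: {ex.Message}");
        }
        finally
        {
            reader?.Dispose(); // always runs, whether or not an exception occurred
        }
    }
}
```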
The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet.
Objectives of the .NET Framework:
The framework supports different types of applications, such as Windows-based applications and Web-based applications.
Features of SQL-SERVER:
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000
Analysis Services. The term OLAP Services has been replaced with the term Analysis Services.
Analysis Services also includes a new data mining component. The Repository component
available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data
Services. References to the component now use the term Meta Data Services. The term
repository is used only in reference to the repository engine within Meta Data Services.
The main objects are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
Views of a Table:
1. Design View
2. Datasheet View
1. Design View: To build or modify the structure of a table, we work in the table design view. We can specify what kind of data will be held.
2. Datasheet View: To add, edit, or analyze the data itself, we work in the table's datasheet view.
Query:
A query is a question asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
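For context, a minimal C# sketch of running such a query against SQL Server with ADO.NET; the connection string, table, and column names are hypothetical. The data reader behaves like a read-only snapshot of the matching rows.

```csharp
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and schema.
        const string conn = "Server=localhost;Database=FileStore;Integrated Security=true;";
        using (var connection = new SqlConnection(conn))
        using (var command = new SqlCommand(
            "SELECT FileName FROM Files WHERE Category = @cat", connection))
        {
            command.Parameters.AddWithValue("@cat", "reports"); // parameterized to avoid SQL injection
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```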
3. SYSTEM ANALYSIS
Feasibility Study:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
ECONOMIC FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
Economic Feasibility:
This study is carried out to check the economic impact the system will have on the organization. The amount of funds the company can pour into research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
Technical Feasibility:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
Social Feasibility:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
INPUT DESIGN:
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, that is, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed so that it provides security and ease of use while retaining privacy. Input design considers the following:
OBJECTIVES:
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is never left in a maze of confusing instructions. Thus, the objective of input design is to create an input layout that is easy to follow.
OUTPUT DESIGN:
A quality output is one that meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people find the system easy and effective to use. When analysts design computer output, they should identify the specific output needed to meet the requirements.
2. They should select the documents, reports, or other formats that contain the information produced by the system.
The output form of an information system should accomplish one or more of the following
objectives.
Non-Functional Requirements:
A web application framework enabling reuse and replacement of components.
The web application works on any browser.
MySQL Server 2008 for structured data storage.
The system is expected to run on supported operating systems.
The system should provide secure access to the web server.
The application helps users find data easily according to the priorities given to the searched files.
3.4 Software Design:
3.4.1 Data Flow Diagram:
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.
[Figure: Data flow diagram showing user search, admin login and check, viewing the file list, user registration, viewing downloading details, file upload, the database, and the end state.]
3.5 UML DIAGRAMS:
What is UML?
Diagrams of UML:
3.5.1 Usecase Diagram:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.
[Figure: Use case diagram. Actors: Admin and User. Admin use cases: log in, upload file, view user details, view download details, change log key. User use cases: register, search file, view file, request, get activation code via email, download.]
3.5.2 Sequence Diagram:
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.
[Figure: Sequence diagram between User, System, Admin, and Database: the admin logs in and changes the password; files are stored in the database; the user searches for a file, selects a file from the list, and downloads it.]
3.5.3 Class Diagram:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It explains
which class contains information.
[Figure: Class diagram with four classes. UserRegistration (User ID, User Name, Mail ID, Phone Number; Registration(), Download()), FileUpload (Serial No., File Name, Path, File Category, Encrypted File; Upload(), Download(), Zip_Conversion()), Encryption (File Name, File Type, File Path, Encrypt Key, File), and Admin (User Name, Password, Log Key).]
3.5.4 Activity Diagram:
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step process.
[Figure: Activity diagram. Admin: log in, upload file, view registered user details, view download details, change password. User: get activation code from mail, search files, view results, view file request, download.]
4. IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned into a working system. It can thus be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work effectively.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.
Interaction Model:
1. Client-driven interventions:
Client-driven interventions are the means to protect customers from unreliable services. For example, services that miss deadlines or do not respond at all for a long time are replaced by other, more reliable services in future discovery operations.
2. Provider-driven interventions:
Provider-driven interventions are desired and initiated by the service owners to shield themselves from malicious clients. For instance, requests from clients performing a denial-of-service attack by sending many requests in relatively short intervals are blocked (instead of processed) by the service.
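As an illustration of such a provider-driven safeguard, the following is a minimal sketch of a fixed-window rate limiter that blocks clients exceeding a request quota. The threshold, window length, and client identification scheme are all assumptions for the sketch.

```csharp
using System;
using System.Collections.Generic;

class RateLimiter
{
    private readonly int maxRequests;
    private readonly TimeSpan window;
    private readonly Dictionary<string, (DateTime Start, int Count)> clients =
        new Dictionary<string, (DateTime, int)>();

    public RateLimiter(int maxRequests, TimeSpan window)
    {
        this.maxRequests = maxRequests;
        this.window = window;
    }

    // Returns false when the client has exhausted its quota for the current window.
    public bool Allow(string clientId)
    {
        var now = DateTime.UtcNow;
        if (!clients.TryGetValue(clientId, out var entry) || now - entry.Start > window)
        {
            clients[clientId] = (now, 1); // a new window starts for this client
            return true;
        }
        if (entry.Count >= maxRequests) return false; // block: possible DoS behavior
        clients[clientId] = (entry.Start, entry.Count + 1);
        return true;
    }
}

class Demo
{
    static void Main()
    {
        var limiter = new RateLimiter(3, TimeSpan.FromSeconds(10));
        for (int i = 0; i < 5; i++)
            Console.WriteLine($"request {i}: {(limiter.Allow("client-42") ? "served" : "blocked")}");
    }
}
```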
5. MODULE DESCRIPTION
MODULES:
1. Encrypt Module
2. Client Module
3. Multi-keyword Module
4. Admin Module
Encrypt Module:
This module helps the server encrypt the document using the RSA algorithm, convert the encrypted document into a Zip file with an activation code, and then send the activation code to the user for download.
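A minimal sketch of the RSA step (the payload is hypothetical, and note that raw RSA can only encrypt data shorter than its key size; real systems typically encrypt a symmetric key rather than the document itself):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class RsaDemo
{
    static void Main()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            byte[] plain = Encoding.UTF8.GetBytes("activation-code-1234"); // hypothetical payload
            byte[] cipher = rsa.Encrypt(plain, true); // true = OAEP padding
            byte[] back = rsa.Decrypt(cipher, true);  // requires the private key
            Console.WriteLine(Encoding.UTF8.GetString(back));
        }
    }
}
```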
Client Module:
This module helps the client search for files using the multiple-keyword concept and get an accurate result list based on the user query. The user selects the required file, registers their details, and receives an activation code by mail from the "customerservice404" email address. After entering the activation code, the user can download the Zip file and extract it.
Multi-keyword Module:
This module helps the user get accurate results based on the multiple-keyword concept. The user can enter a multi-word query; the server splits that query into single words and then searches for files matching each word in the database. Finally, the matched file list from the database is displayed, and the user obtains the file from that list.
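A minimal sketch of this split-and-match flow, with an in-memory dictionary standing in for the hypothetical file database; file names and keywords are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MultiKeywordSearch
{
    // Hypothetical stand-in for the file/keyword database.
    static readonly Dictionary<string, string[]> Files = new Dictionary<string, string[]>
    {
        { "report.pdf",  new[] { "cloud", "privacy", "ranking" } },
        { "notes.txt",   new[] { "privacy", "encryption" } },
        { "summary.doc", new[] { "ranking" } }
    };

    // Split the query into single words, then rank files by how many words match.
    static IEnumerable<string> Search(string query)
    {
        var words = query.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        return Files
            .Select(f => new { f.Key, Score = words.Count(w => f.Value.Contains(w)) })
            .Where(x => x.Score > 0)
            .OrderByDescending(x => x.Score)
            .Select(x => $"{x.Key} (matches: {x.Score})");
    }

    static void Main()
    {
        foreach (var hit in Search("privacy ranking"))
            Console.WriteLine(hit); // report.pdf first: it matches both words
    }
}
```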
Admin Module:
This module helps the server view details and upload files securely. The admin uses the log key at login time and changes the log key before logging out. After login, the admin can change the password, view user downloading details, and view the count of file requests on a flowchart. The admin can upload files after converting them to the Zip file format.
6. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
Unit Testing:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
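A minimal sketch of such a unit test, assuming the NUnit framework; the KeywordSplitter class under test is hypothetical and is included beside the tests so the sketch is self-contained.

```csharp
using NUnit.Framework;

[TestFixture]
public class KeywordSplitterTests
{
    [Test]
    public void Split_MultiWordQuery_ReturnsEachKeyword()
    {
        var words = KeywordSplitter.Split("privacy ranking");
        Assert.AreEqual(2, words.Length);       // defined input, expected result
        Assert.Contains("privacy", words);
    }

    [Test]
    public void Split_EmptyQuery_ReturnsNoKeywords()
    {
        Assert.IsEmpty(KeywordSplitter.Split("")); // boundary case
    }
}

// Hypothetical unit under test.
public static class KeywordSplitter
{
    public static string[] Split(string query) =>
        query.Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries);
}
```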
Integration Testing:
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional Testing:
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
System Testing:
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
White-box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black-box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black-box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
Test strategy and approach:
Field testing will be performed manually and functional tests will be written in detail.
Test objectives:
Features to be Tested:
Integration Testing:
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing:
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
6.2 TESTING STRATEGIES:
Test Cases:
Screen: Registration Screen

S.No | Test Case                                  | Condition Being Checked   | Expected Output             | Actual Output            | Pass/Fail
1    | Registration screen should be active/open  | Cursor will be at user id | Cursor should be at user id | Cursor points at user id | Pass
Screen: Login Screen (verifying valid username and password)

S.No | Test Case                         | Condition Being Checked | Expected Output          | Actual Output        | Pass/Fail
4    | Valid/invalid password            | Validating password     | Password should be valid | Password is valid    | Pass
5    | Valid/invalid e-mail id, password | Login will be enabled   | Login should be enabled  | Login status enabled | Pass
Screen: Login Screen (verifying valid password)

S.No | Test Case    | Condition Being Checked | Expected Output           | Actual Output      | Pass/Fail
2    | e-mail id    | Validating e-mail id    | e-mail id should be valid | e-mail id is valid | Pass
4    | Re-password  | Validating password     | Password should be valid  | Password is valid  | Pass
6    | Reset button | On clicking the button  | Should clear the data     | Clears the data    | Pass
7. SCREENSHOTS
Figure 7.2: Home Page
Figure 7.3: Changing Password
Figure 7.4: Password Successfully Changed
Figure 7.5: Uploading the Files
Figure 7.6: Displaying the Uploaded Files
Figure 7.7: Display of File Usage
Figure 7.8: Displaying the Downloaded Files
Figure 7.9: Changing the Username
Figure 7.10: Searching the Keywords
Figure 7.11: Displaying the Related Search Files
Figure 7.12: Registration Form
Figure 7.13: Sending Activation Code to Gmail Account
Figure 7.14: Saving the Downloaded File
Figure 7.15: File Usage Information
8. CONCLUSION
In this paper, for the first time, we define and solve the problem of multi-keyword ranked search over encrypted cloud data, and we establish a variety of privacy requirements. Among various multi-keyword semantics, we choose the efficient principle of "coordinate matching", i.e., as many matches as possible, to effectively capture the similarity between query keywords and outsourced documents, and we use "inner product similarity" to quantitatively formalize such a principle for similarity measurement. To meet the challenge of supporting multi-keyword semantics without privacy breaches, we first propose a basic MRSE scheme using secure inner product computation and then significantly improve it to achieve privacy requirements under two levels of threat models. A thorough analysis of the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-world dataset show that our proposed schemes introduce low overhead in both computation and communication. As future work, we will explore supporting other multi-keyword semantics (e.g., weighted queries) over encrypted data, integrity checking of the rank order in search results, and privacy guarantees in stronger threat models.