History of HE

The term “homomorphic encryption” became popular in cryptography as third-party data storage became widespread. Rivest was the first researcher to propose incorporating homomorphism into encryption. In 1978, shortly after developing the RSA algorithm, Rivest, together with Adleman and Dertouzos, introduced the idea of building the homomorphic property into an encryption technique in the paper “On Data Banks and Privacy Homomorphisms”, which was motivated by the problem of computing on a loan company’s sensitive customer data.

Encryption is a well known technique for preserving the privacy of sensitive information. One of the basic, apparently inherent, limitations of this technique is that an information system working with encrypted data can at most store or retrieve the data for the user; any more complicated operations seem to require that the data be decrypted before being operated on. This limitation follows from the choice of encryption functions used, however, and although there are some truly inherent limitations on what can be accomplished, we shall see that it appears likely that there exist encryption functions which permit encrypted data to be operated on without preliminary decryption of the operands, for many sets of interesting operations. These special encryption functions we call “privacy homomorphisms”; they form an interesting subset of arbitrary encryption schemes (called “privacy transformations”).

Homomorphic Encryption Schemes

In this section, we explain the basics of HE theory and then present notable PHE, SWHE, and FHE schemes, giving a brief description of each. An encryption scheme is called homomorphic over an operation * if it satisfies the following equation:

        E(m1) * E(m2) = E(m1 * m2),   ∀ m1, m2 ∈ M,

where E is the encryption algorithm and M is the set of all possible messages.

An HE scheme is primarily characterized by four operations: KeyGen, Enc, Dec, and Eval. KeyGen generates a secret/public key pair for the asymmetric version of HE, or a single key for the symmetric version. Enc converts the plaintext into ciphertext, and Dec recovers the plaintext from the ciphertext. Eval, however, is an HE-specific operation: it takes ciphertexts as input and outputs a ciphertext corresponding to the function applied to the underlying plaintexts. In other words, Eval performs the function f() over the ciphertexts (c1, c2) without seeing the messages (m1, m2). The most crucial point in homomorphic encryption is that the format of the ciphertexts must be preserved after an evaluation process in order to be decrypted correctly. In addition, the ciphertext size should remain constant in order to support an unlimited number of operations; otherwise, the growth in ciphertext size requires more resources and limits the number of operations. Of all HE schemes in the literature, PHE schemes support the Eval function for only a single operation (either addition or multiplication), SWHE schemes support only a limited number of operations or some limited circuits (e.g., branching programs), while FHE schemes support the evaluation of arbitrary functions (e.g., searching, sorting, max, min) an unlimited number of times over ciphertexts. The well-known PHE, SWHE, and FHE schemes are explained in the following sections in greater detail. Interest in the area of HE increased significantly after the work of Gentry [Gentry2009] in 2009. Here, we start with the PHE schemes, which were the first stepping stones toward FHE schemes.
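As a concrete illustration of the Eval idea, the following Python sketch uses textbook (unpadded) RSA, which is homomorphic with respect to multiplication and is therefore a PHE scheme. The primes, exponent, and messages are toy values assumed purely for readability; unpadded RSA with such parameters is not secure.

    def keygen():
        # Toy RSA key generation (assumed parameters, insecure sizes)
        p, q = 61, 53
        n = p * q                     # public modulus
        e = 17                        # public exponent
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)
        return (n, e), d

    def enc(pk, m):                   # Enc: c = m^e mod n
        n, e = pk
        return pow(m, e, n)

    def dec(pk, sk, c):               # Dec: m = c^d mod n
        n, _ = pk
        return pow(c, sk, n)

    def eval_mul(pk, c1, c2):         # Eval: multiply ciphertexts
        n, _ = pk
        return (c1 * c2) % n

    pk, sk = keygen()
    m1, m2 = 7, 9
    c = eval_mul(pk, enc(pk, m1), enc(pk, m2))
    assert dec(pk, sk, c) == (m1 * m2) % pk[0]    # E(m1) * E(m2) decrypts to m1 * m2

Note that the evaluated ciphertext has the same format and size as a fresh one, which is exactly the preserved-format, constant-size property discussed above.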

The major application of FHE is cloud computing.

In this way, a user can store his/her data in encrypted form in a public cloud without revealing the real data. The cloud has far more storage and computing capability than the user's own system, so the computation can be done in the cloud with the help of FHE, without the cloud administrator ever knowing the secret key. More precisely, FHE has the following property whenever f is a function composed of addition and multiplication operations in the ring:

        Decrypt(f(c1, ..., ct)) = f(m1, ..., mt)

If the cloud (or an adversary) can efficiently compute f(c1, ..., ct) from the ciphertexts c1, ..., ct without learning any information about the corresponding plaintexts m1, ..., mt, then the scheme is both efficient and secure. Another requirement for FHE is that the ciphertext sizes remain bounded, independent of the function f; this is known as the “compact ciphertexts” requirement.
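To make this property concrete, the following Python sketch is written in the spirit of the symmetric somewhat-homomorphic scheme over the integers by van Dijk, Gentry, Halevi, and Vaikuntanathan (DGHV), which encrypts single bits and supports both addition and multiplication until the noise grows too large. The parameter ranges and the function f below are illustrative assumptions, not secure choices.

    import random

    def keygen():
        # Secret key: a large odd integer p (toy size, assumed for illustration)
        return random.randrange(10**6, 10**7) * 2 + 1

    def enc(p, m):
        # Encrypt a single bit m as c = p*q + 2*r + m (large multiple of p plus small noise)
        q = random.randrange(10**12, 10**13)
        r = random.randrange(1, 50)
        return p * q + 2 * r + m

    def dec(p, c):
        # Decrypt: remove the multiple of p, then the even noise
        return (c % p) % 2

    p = keygen()
    m1, m2, m3 = 1, 0, 1
    c1, c2, c3 = enc(p, m1), enc(p, m2), enc(p, m3)

    # Evaluate f(x1, x2, x3) = x1*x2 + x3 directly on the ciphertexts.
    c_f = c1 * c2 + c3
    assert dec(p, c_f) == (m1 * m2 + m3) % 2   # Decrypt(f(c1, c2, c3)) = f(m1, m2, m3) over bits

Note that c_f is noticeably larger than a fresh ciphertext and that the noise grows with every operation; this is why such a scheme is only somewhat homomorphic and why the compact ciphertexts requirement matters for a true FHE scheme.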

Homomorphic Encryption in Cloud Computing

There are a number of areas, such as the medical, financial, and advertising sectors, where cloud computing services can be applied. Large amounts of data are stored in cloud databases simply because users do not have the storage capacity or the computational platform themselves. The data are so large that users do not want to store them or perform any computation locally, so they prefer to use cloud storage and computation. Here homomorphic encryption plays a very important role: the user wants to use cloud services but does not want the cloud provider to access the user's data. The homomorphic encryption technique provides a way to perform arithmetic operations such as addition and multiplication on encrypted data.

Evolution of Cloud Computing

Some people say that the Internet is the cloud, or that the cloud is in the Internet, and this is correct to a certain extent. To understand this concept better, we need to study the evolution of the Internet in detail.
The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science professor Leonard Kleinrock's laboratory at the University of California, Los Angeles (UCLA) to the second network node at the Stanford Research Institute (SRI). Packet switching networks such as the NPL network, ARPANET, Merit Network, CYCLADES, and Telenet were developed in the late 1960s and early 1970s using a variety of communications protocols. Donald Davies first demonstrated packet switching in 1967 at the National Physical Laboratory (NPL) in the UK, which became a testbed for UK research for almost two decades. The ARPANET project led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.
The Internet protocol suite (TCP/IP) was developed by Robert E. Kahn and Vint Cerf in the 1970s and became the standard networking protocol on the ARPANET, incorporating concepts from the French CYCLADES project directed by Louis Pouzin. In the early 1980s the NSF funded the establishment of national supercomputing centers at several universities, and provided interconnectivity in 1986 with the NSFNET project, which also created network access to the supercomputer sites in the United States from research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s. The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.
In the 1980s, research at CERN in Switzerland by British computer scientist Tim Berners-Lee resulted in the World Wide Web, linking hypertext documents into an information system accessible from any node on the network. Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as JANET in the United Kingdom and Internet2 in the United States. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more.
The Internet's takeover of the global communication landscape was almost instant in historical terms: it carried only 1% of the information flowing through two-way telecommunications networks in 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007. Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking. However, the future of the global Internet may be shaped by regional differences in the world.

On 6 August 1991, the World Wide Web became publicly available. Once the Internet was made public, companies began to use it for commercial purposes. Everyone wanted to have his/her own website, so many companies connected their servers to this network or hosted their own websites on these servers, and used these websites to promote their business or services.
Big companies could connect their own servers to the Internet and run their websites, but for small companies connecting their own servers to the Internet was very costly. To overcome this problem, a new business started on the Internet, called web hosting. Web hosting is a service that allows organizations and individuals to post a website or web page onto the Internet. A web host, or web hosting service provider, is a business that provides the technologies and services needed for the website or web page to be viewed on the Internet. Websites are hosted, or stored, on special computers called servers. When Internet users want to view your website, all they need to do is type your website address or domain into their browser. Their computer then connects to your server, and your web pages are delivered to them through the browser.

Growth of content:
Content plays a vital role in engaging users with a website. So, Internet providers came up with the idea of having content written by the users themselves to make the system more interactive: the more content, the more users. Keeping this idea in mind, companies made some services free for their users, such as the email service provided by Google, i.e., Gmail. These free services resulted in increased traffic, which led to the need for a larger number of servers to handle it. Therefore, companies like Google developed their own data centres, each containing thousands of servers joined in clusters to handle the huge amount of traffic. These data centres are situated in different locations around the world and are closely coordinated with each other; together they form a grid. These companies then started renting out their servers to grow their business. All of these developments led to the new term “cloud computing”; previously this was known as cluster or grid computing.

Cloud Computing

1. Introduction
Computing Models
i) Desktop Computing
A desktop computer is a personal computer designed for regular use at a single location on or near a desk or table due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit (CPU), memory, bus, and other electronic components), disk storage (usually one or more hard disk drives, optical disc drives, and in early models a floppy disk drive); a keyboard and mouse for input; and a computer monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.

ii) Client-Server Model
Client-server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client-server model are email, network printing, and the World Wide Web.
Example: When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank’s web server. The customer’s login credentials may be stored in a database, and the web server accesses the database server as a client. An application server interprets the returned data by applying the bank’s business logic, and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.
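As a minimal sketch of this request/response pattern, the Python snippet below runs a tiny TCP server and a client in the same process; the address, port, and message text are arbitrary values assumed only for illustration, not any real banking protocol.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 8765         # assumed local address and port
    ready = threading.Event()

    def server():
        # The server shares a "service": it answers whatever request it receives.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()                   # wait for incoming requests
            ready.set()
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(("response to: " + request).encode())

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    # The client initiates the communication session and consumes the service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET balance")
        print(cli.recv(1024).decode())     # prints: response to: GET balance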

Problem: Traffic congestion has always been a problem in the client-server paradigm. When a large number of clients send requests to the same server simultaneously, this can cause many problems for the server (the more clients, the more problems for the server).

iii) Cluster Computing
Cluster computing is used to overcome the problems that occur with client-server computing. Suppose a business has a huge number of clients; in such a case a single server is not able to handle the entire load, and cluster computing helps to manage the operations. A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g., using Open Source Cluster Application Resources (OSCAR)), different operating systems or different hardware can be used on each computer. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Examples of clustered file systems used in computer clusters include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes, and the Oracle Cluster File System.
iv) Grid Computing
Grid Computing is the use of widely distributed computer resources to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries.