ISSN Online: 2320-9801, Print: 2320-9798



Enhancing Data Securing In Cloud Using Scalable Transactions

Mrudul S Rajhans
Department of Computer Science, PICT, University of Pune, MH, India

Visit for more related articles at International Journal of Innovative Research in Computer and Communication Engineering


Abstract

Cloud computing has become one of the most widely used paradigms for deploying highly available and scalable systems. These systems usually manage huge amounts of data, which traditional and replicated database systems as we know them cannot handle. Recent solutions store data in special key-value structures, an approach that commonly trades the consistency provided by transactional guarantees for high scalability and availability. To ensure consistent access to the information, transactions are required; however, it is well known that traditional replication protocols do not scale well in a cloud environment. Here we review current proposals for deploying transactional systems in the cloud and propose a new system that aims to be a step forward in achieving this goal. We then focus on data partitioning and describe the key role it plays in achieving high scalability. NoSQL cloud data stores provide scalability and high availability for web applications, but they sacrifice data consistency, and many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager which guarantees full ACID properties for multi-item transactions issued by web applications, even in the presence of server failures and network partitions. We implement this approach on top of the two main families of scalable data layers; evaluation in our local cluster and on Amazon SimpleDB in the Amazon cloud shows that our system scales linearly at least up to some number of nodes in the Amazon cloud. The cloud implementation is built on the OpenStack framework in an Ubuntu operating environment.



Keywords: Data security, cloud computing, cryptography techniques, distributed systems, AES (Advanced Encryption Standard).


Introduction

We use multiple database instances, instead of a single database instance, to maintain security. This work mainly focuses on sharing documents in the cloud using the AES (Advanced Encryption Standard) encryption algorithm [1]. A single document is divided into two instances, both of which reside on a single cloud. The system also supports scalable transactions on documents.
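The two-instance split described above can be sketched as follows. This is a minimal illustration with hypothetical helper names; it assumes AES encryption (e.g. via a library such as PyCryptodome) has already produced the ciphertext, which is here treated as an opaque byte string split byte-wise across the two database instances.

```python
def split_document(ciphertext: bytes) -> tuple[bytes, bytes]:
    """Divide one encrypted document into two shards, one per DB instance.

    Even-indexed bytes go to instance 1, odd-indexed bytes to instance 2,
    so neither instance holds the whole ciphertext on its own.
    """
    return ciphertext[0::2], ciphertext[1::2]

def join_document(shard_a: bytes, shard_b: bytes) -> bytes:
    """Reassemble the original ciphertext by interleaving the two shards."""
    out = bytearray()
    for i in range(len(shard_a)):
        out.append(shard_a[i])
        if i < len(shard_b):
            out.append(shard_b[i])
    return bytes(out)

cipher = b"\x13\x37\xc0\xde\xca\xfe\xba\xbe\x01"
a, b = split_document(cipher)
assert join_document(a, b) == cipher  # the split round-trips losslessly
```

The interleaving scheme is only one possible choice; any deterministic, invertible partition of the ciphertext bytes across the two instances would serve the same purpose.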

Motivation of Dissertation

Cloud computing is becoming more widespread and popular, providing a vibrant technical environment where innovative solutions and services can be created. Cloud promises its end users cheap and flexible services. It offers the vision of a virtually infinite pool of computing, storage and networking resources where applications can be scalably deployed [4]. Cloud computing provides its users with different types of services; one of the most important is STaaS [16], i.e. Storage as a Service. Customers often store sensitive information with cloud storage providers, so providing a secure framework in the cloud computing environment is a challenge faced by these providers. Customers want their data to be secure as well as available at any time. This is often difficult with a single cloud provider and may result in service availability failures as well as the possibility of data intrusion or data being stolen from the cloud provider. To overcome these failures we use the AES (Advanced Encryption Standard) algorithm for encryption [1], and data is divided across two or more database servers so that the whole data set is never available on a single server.
The remainder of this paper is organized as follows. Section 2 presents the literature survey, summarizing the referenced papers and what they suggest. Section 3 describes the implementation details of our system, including the mathematical model, system architecture and algorithms used. Section 4 concludes the paper.


Literature Survey

Abha and Mohit propose a simple data protection model where data is encrypted using the Advanced Encryption Standard (AES) before it is launched into the cloud, thus ensuring data confidentiality and security.
Mohammed et al. [1] found that the use of multi-cloud providers to maintain security has received less attention from the research community than the use of single clouds. Their work aims to promote the use of multi-clouds due to their ability to reduce the security risks that affect cloud computing users [3].
Cong Wang et al. [5] investigate the secure outsourcing of widely applicable linear programming (LP) computations. To achieve practical efficiency, their mechanism design explicitly decomposes LP computation outsourcing into public LP solvers running in the cloud and private LP parameters owned by the customer.
Zhou Wei, Guillaume Pierre and Chi-Hung Chi [4] present CloudTPS, a scalable transaction manager which guarantees full ACID properties for multi-item transactions issued by web applications, even in the presence of server failures and network partitions. They implement this approach on top of the two main families of scalable data layers: Bigtable and SimpleDB. Performance evaluation on top of HBase (an open-source version of Bigtable) in their local cluster and Amazon SimpleDB in the Amazon cloud shows that their system scales linearly at least up to 40 nodes in the local cluster and 80 nodes in the Amazon cloud.
K. D. Bowers, A. Juels and A. Oprea introduce HAIL (High-Availability and Integrity Layer), a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable.
Debajyoti, Gitesh, Parthi, Sagar and Vibha suggest encrypting the files to be uploaded to the cloud. The integrity and confidentiality of the data uploaded by the user are doubly ensured by not only encrypting it but also providing access to the data only on successful authentication.


Mathematical Model of the System

Let S be the system that uses the cloud for storing documents created and used by different users. The cloud uses two instances for dividing each document in encrypted form, where the user's input is plaintext but the output is ciphertext. Data is stored in a Linux image's instance. Documents must be divided in a scalable manner: even a large document should require only a proportionate amount of time.
S = {I, O, F, Su, Fa}
• I is the input to the system.
• O is the output of the system.
• F is the set of functions.
• Su is the success state of the system.
• Fa is the failure state of the system.
o I is the input set such that:
– I = {U, D, V}
– U = {u1, u2, u3, …, un}, the set of 'n' users.
– V = {v1, v2, v3, …, vm}, the set of 'm' virtual machines.
– D = {d1, d2, d3, …, dm}, the set of documents used.
o Output (O) is the ciphertext data stored in the database:
– Output (O) = D
o F is the set of functions, where:
– F = {Fp, Fc}
o Fp is the function through which the user enters plaintext.
o Fc is the function that produces the ciphertext.
o Success (Su): large and small documents are both divided in adequate time.
o Failure (Fa): a document is not successfully divided into two instances.

Scalable Transactions in our approach

Our approach operates with long-term averages, which might not suit every traffic-burst pattern; that is why metrics are very important when doing resource provisioning. The queue is valuable because it buys us more time. It does not affect the throughput: throughput is sensitive only to performance improvements or to adding more servers. But if the throughput is constant, queuing levels out traffic bursts at the cost of delaying the processing of the overflowing requests.
Ws = service time (the connection acquire-and-hold time) = 100 ms = 0.1 s
Ls = in-service requests (pool size) = 5
Assuming there is no queueing (Wq = 0), Little's law gives the maximum sustainable arrival rate:
λ = Ls / Ws = 5 / 0.1 = 50 requests/s
Our connection pool can therefore deliver up to 50 requests per second without ever queueing any incoming connection request. Whenever there are traffic spikes we need to rely on a queue, and since we impose a fixed connection-acquire timeout, the queue length will be limited.
Since the system is considered stable, the arrival rate applies both to the queue entry and to the actual service:
λ = λq = λs = 50 requests/s
With a connection-acquire timeout of Wq = 2 s, the queue can hold Lq = λ × Wq = 50 × 2 = 100 requests. This queuing configuration still delivers 50 requests per second, but it may also queue 100 requests for up to 2 seconds. A one-second traffic burst of 150 requests would be handled, since:
• 50 requests can be served in the first second
• the other 100 are going to be queued and served in the next two seconds
The timeout equation is:
Wq = Lq / λ
So for a 3-second spike of 250 requests per second:
λspike = 250 requests/s
Tspike = 3 s
The number of requests to be served is:
Nspike = λspike × Tspike = 250 × 3 = 750 requests
This spike would require 750 / 50 = 15 seconds to be fully processed: 50 requests are served in the first second, leaving a 700-request queue buffer that takes another 14 seconds to be processed.
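The provisioning figures above can be checked numerically. This short sketch (variable names are ours) applies Little's law, Ls = λ × Ws, to derive the pool's maximum throughput and then works through the 3-second spike:

```python
# Little's law: Ls = lambda * Ws, so max throughput = Ls / Ws.
Ws = 0.1                  # service time per request, seconds
Ls = 5                    # pool size (requests in service)
throughput = Ls / Ws      # 50 requests/s with no queueing

lambda_spike = 250        # spike arrival rate, requests/s
T_spike = 3               # spike duration, seconds
total = lambda_spike * T_spike        # 750 requests arrive in total

drain_time = total / throughput       # 15 s to fully process all of them
remaining = total - throughput * 1    # 700 left after the first second
extra_time = remaining / throughput   # 14 more seconds of queue draining
```

Raising either the pool size Ls or lowering the service time Ws scales the throughput linearly, which is exactly why the queue only delays, and never increases, the work the pool can absorb.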


The proposed system aims to build a private cloud using the open-source software OpenStack. The system architecture of OpenStack is depicted in Fig. 1. The proposed system consists of various modules such as Horizon, Nova, Swift, Glance and Keystone.
• Nova: Nova is the Computing Fabric controller for the OpenStack Cloud. The necessary activities for the life cycle of instances within the OpenStack cloud are handled by Nova. This characteristic makes Nova a Management Platform to manage various compute resources, networking, authorization, and scalability needs of the OpenStack cloud.
• Glance: Glance is a standalone service which provides a catalog service for storing and querying virtual disk images. Nova and Glance together provide an end-to-end solution for cloud disk image management.
• Swift: Swift can store billions of virtual objects distributed across the nodes. Swift offers built-in redundancy, failover management, archiving and media streaming, and plays an important role in scalability.
• Keystone: Keystone provides identity and access policy services for all components in the OpenStack family. All components of OpenStack, including Swift, Glance and Nova, are authenticated and authorized by Keystone.
• Horizon: Horizon can be used to manage instances and images, create key pairs, attach volumes to instances, manipulate Swift containers, etc.


The proposed system is implemented using the open-source software OpenStack and the Ubuntu operating system. The three nodes, Compute, Controller and Storage, are installed with the Ubuntu server operating system because all these nodes have to behave like servers, as shown in the figure. The Compute node is installed with the Nova packages and services. The Controller node is installed with the Glance, Keystone and Horizon packages and services. The Storage node is installed with the Swift or Cinder packages and services. All three nodes are connected internally to the OpenStack Dashboard through an internal network. The application which is ready to use the cloud service is connected through an external network to the Controller node of the private cloud.

Module 1: Compute Node

The installation of the Nova packages is carried out by downloading them with the following command:
• sudo apt-get install nova-api nova-cert nova-compute nova-compute-kvm nova-doc nova-network nova-objectstore nova-scheduler nova-volume rabbitmq-server novnc nova-consoleauth
This command installs most of the packages (nova-api, nova-compute, nova-network, etc.) expected to make Nova work on OpenStack.

Module 2: Control Node

The installation of the Glance packages is carried out by downloading them with the following command:
• sudo apt-get install glance glance-api glance-client glance-common glance-registry python-glance
This command installs most of the packages (glance-api, glance-registry, etc.) expected to make Glance work on OpenStack. The installation of the Keystone packages is carried out with the following command.
• sudo apt-get install keystone python-keystone python-keystoneclient
This command installs most of the packages (keystone, python-keystone, etc.) expected to make Keystone work on OpenStack. The installation of the Horizon packages is carried out with the following command.
• sudo apt-get install openstack-dashboard
This command installs the packages expected to make the Dashboard work on OpenStack.

Module 3: Storage Node

The installation of the Swift packages is carried out by downloading them with the following command:
• sudo apt-get install swift swift-proxy swift-account swift-container swift-object
This command installs most of the packages (swift-proxy, swift-account, swift-container, etc.) expected to make Swift work on OpenStack.



The plaintext input and ciphertext output of the AES (Advanced Encryption Standard) algorithm are blocks of 128 bits. The cipher key input is a sequence of 128, 192 or 256 bits; in other words, the length of the cipher key, Nk, is 4, 6 or 8 words, which represents the number of columns in the cipher key. The AES algorithm is categorized into three versions based on the cipher key length, and the number of encryption rounds for each AES version depends on the cipher key size.
In the AES algorithm, the number of rounds is represented by Nr, where Nr = 10 when Nk = 4, Nr = 12 when Nk = 6, and Nr = 14 when Nk = 8. The following table illustrates the variations of the AES algorithm. For the AES algorithm, the block size Nb, which represents the number of columns comprising the State, is Nb = 4.
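The Nk-to-Nr mapping above can be captured in a small lookup table. This is an illustrative sketch with our own names, not part of any AES library API:

```python
# Number of rounds Nr as a function of key length Nk (in 32-bit words).
ROUNDS_BY_KEY_WORDS = {4: 10, 6: 12, 8: 14}
NB = 4  # block size Nb: columns in the State, fixed for all AES versions

def rounds_for_key_bits(key_bits: int) -> int:
    """Return Nr for a 128-, 192- or 256-bit cipher key."""
    nk = key_bits // 32            # one word = 32 bits
    return ROUNDS_BY_KEY_WORDS[nk]

assert rounds_for_key_bits(128) == 10   # Nk = 4
assert rounds_for_key_bits(192) == 12   # Nk = 6
assert rounds_for_key_bits(256) == 14   # Nk = 8
```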
The basic processing unit for the AES algorithm is a byte. As a result, the plaintext, the ciphertext and the cipher key are arranged and processed as arrays of bytes. For an input, an output or a cipher key denoted by a, the bytes in the resulting array are referenced as an, where n lies in one of the following ranges:
Block length = 128 bits, 0 <= n < 16
Key length = 128 bits, 0 <= n < 16
Key length = 192 bits, 0 <= n < 24
Key length = 256 bits, 0 <= n < 32
All byte values in the AES algorithm are presented as the concatenation of their individual bit values between braces, in the order {b7, b6, b5, b4, b3, b2, b1, b0}. These bytes are interpreted as finite-field elements using a polynomial representation:
b7x^7 + b6x^6 + b5x^5 + b4x^4 + b3x^3 + b2x^2 + b1x + b0
As an example, {10001001} (or {89} in hexadecimal) identifies the polynomial x^7 + x^3 + 1. The arrays of bytes in the AES algorithm are represented as a0 a1 a2 … an-1.
All AES algorithm operations are performed on a two-dimensional 4x4 array of bytes called the State; an individual byte within the State is referred to as sr,c, where 'r' denotes the row and 'c' the column. At the beginning of the encryption process, the State is populated with the plaintext. The cipher then performs a set of substitutions and permutations on the State. After the cipher operations are conducted, the final value of the State is copied to the ciphertext output, as shown in the following figure.
At the beginning of the cipher, the input array is copied into the State according to the following scheme:
s[r,c] = in [r + 4c] for 0 ≤ r < 4 and 0 ≤ c < 4 ,
and at the end of the cipher the State is copied into the output array as shown below:
out[r+4c] = s[r,c] for 0 ≤ r < 4 and 0≤ c< 4
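The two mappings above are mechanical enough to sketch directly. This illustrative snippet (function names are our own) arranges a 16-byte block column-first into the 4x4 State and copies it back out, exactly per s[r,c] = in[r + 4c] and out[r + 4c] = s[r,c]:

```python
def to_state(block: bytes) -> list[list[int]]:
    """Arrange a 16-byte input block into the 4x4 State: s[r][c] = in[r+4c]."""
    assert len(block) == 16
    return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

def from_state(state: list[list[int]]) -> bytes:
    """Copy the State back into a 16-byte output block: out[r+4c] = s[r][c]."""
    return bytes(state[r][c] for c in range(4) for r in range(4))

block = bytes(range(16))
state = to_state(block)
assert state[0] == [0, 4, 8, 12]        # row 0 holds in[0], in[4], in[8], in[12]
assert from_state(state) == block       # the two copies are inverses
```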

Cipher Transformations

The AES(Advanced Encryption Standard) cipher either operates on individual bytes of the State or an entire row/column. At the start of the cipher, the input is copied into the State as described in Section 2.2. Then, an initial Round Key addition is performed on the State. Round keys are derived from the cipher key using the Key Expansion routine. The key expansion routine generates a series of round keys for each round of transformations that are performed on the State.
The transformations performed on the State are similar among all AES versions, but the number of transformation rounds depends on the cipher key length. The final round in all AES versions differs slightly from the first Nr - 1 rounds, as it has one less transformation performed on the State. Each round of the AES cipher (except the last one) consists of all of the following transformations:
• SubBytes( )
• ShiftRows( )
• MixColumns( )
• AddRoundKey ( )
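Of the four transformations listed, ShiftRows is simple enough to sketch here: it cyclically rotates row r of the State left by r positions. This is an illustration of that single step only; SubBytes, MixColumns and AddRoundKey are omitted.

```python
def shift_rows(state: list[list[int]]) -> list[list[int]]:
    """AES ShiftRows: rotate row r of the 4x4 State left by r positions."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
shifted = shift_rows(state)
assert shifted[0] == [0, 1, 2, 3]        # row 0 is unchanged
assert shifted[1] == [5, 6, 7, 4]        # row 1 rotated left by 1
assert shifted[3] == [15, 12, 13, 14]    # row 3 rotated left by 3
```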


TPS stands for Transaction Processing System, which assures scalable operation between clients and servers as well as across a distributed architecture. In our project we use the TPS concept for data reliability.
In this work, when data is divided into different blocks, we have to store them on different cloud servers as scheduled. This scheme assures the end user that his data is scalable and integrated. Considering two different cloud servers and how TPS is used at insertion time, we use the transaction like this:
1. Begin the transaction
2. Execute a set of data manipulations and/or queries
3. If no errors occur then commit the transaction and end it
4. If errors occur then rollback the transaction and end it
If no errors occurred during the execution of the transaction, the system commits the transaction. A transaction commit operation applies all data manipulations within the scope of the transaction and persists the results to the cloud database. If an error occurs during the transaction, or if the user specifies a rollback operation, the data manipulations within the transaction are not persisted to the database. In no case can a partial transaction be committed to the database, since that would leave the database in an inconsistent state.
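The commit/rollback discipline above can be sketched with Python's standard-library sqlite3 module standing in for the cloud database (an assumption for illustration; the schema and shard strings are hypothetical):

```python
import sqlite3

# An in-memory database stands in for one cloud database instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, shard TEXT)")

try:
    # Step 1: begin the transaction (sqlite3 opens one implicitly).
    # Step 2: execute a set of data manipulations.
    conn.execute("INSERT INTO docs (shard) VALUES ('instance-1 part')")
    conn.execute("INSERT INTO docs (shard) VALUES ('instance-2 part')")
    # Step 3: no errors occurred, so commit; both rows persist together.
    conn.commit()
except sqlite3.Error:
    # Step 4: an error occurred, so roll back; neither row persists.
    conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM docs").fetchone()[0]
assert count == 2  # never 1: a partial transaction is never visible
```

The key property illustrated is atomicity: either both shard inserts become visible or neither does, which is what prevents the inconsistent state described above.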


Conclusion

It is clear that although the use of cloud computing has rapidly increased, cloud computing security is still considered the major issue in the cloud computing environment. Customers do not want to lose their private information as a result of malicious insiders in the cloud. In addition, the loss of service availability has recently caused many problems for a large number of customers, and data intrusion leads to further problems for the users of cloud computing. The purpose of this project is to implement secured clouds that address these security risks by using the AES algorithm and scalable transactions in the cloud system.


Acknowledgement

I take this opportunity to thank my internal guide, Prof. Rekha Kulkarni, for giving me guidance and support throughout the dissertation. Her valuable guidelines and suggestions were very helpful. I am also grateful to Prof. G. P. Potdar, Head of the Computer Engineering Department, Pune Institute of Computer Technology, for giving me all the help and important suggestions throughout the dissertation. I also thank Dr. P. T. Kulkarni, Principal, Pune Institute of Computer Technology, for his encouragement and for providing the required resources.

Tables at a glance

Table 1

Figures at a glance

Figure 1 Figure 2 Figure 3 Figure 4 Figure 5