ISSN: Online 2320-9801, Print 2320-9798
S. Christina Suganthi Monica¹, M. Subasini² M.E., Department of CSE, IFET College of Engineering, Villupuram, Tamil Nadu, India
ABSTRACT
Personal data stored in the Cloud may contain account numbers, passwords, notes, and other important information that could be used and misused by a miscreant, a competitor, or a court of law. These data are cached, copied, and archived by Cloud Service Providers (CSPs), often without the user's authorization or control. Time-constrained data helps to protect such data by destructing it after a user-specified timeout; in addition, the decryption key is destructed after that time. In this paper, we present a time-constrained data system that meets this challenge through a novel integration of cryptographic techniques with active storage techniques. The time-constrained data mechanism is used to meet all of the privacy-preserving goals. Compared to a system without the time-constrained data mechanism, throughput for uploading and downloading with the proposed timeout mechanism decreases acceptably, by less than 72%, while latency for upload/download operations increases by less than 60%.
INTRODUCTION
With the development of cloud computing, cloud services have become important in people's lives: users are asked to post private information to the cloud, trusting that cloud service providers will keep it secure. As people rely more and more on cloud technology, their privacy is at greater risk whenever their data is processed by computer systems or transmitted over the Internet, since cached or archived copies can leak through cloud service providers. These problems present a formidable challenge in protecting people's privacy. Earlier work introduced a system known as Vanish. Vanish encrypts the data with a randomly generated key and, using Shamir's secret sharing technique, divides the key and stores the shares in a distributed hash table (DHT), a kind of peer-to-peer network that holds key-value pairs. A DHT has the property that old entries are discarded after some time, making room for new data; a typical DHT refreshes every entry after about 8 hours. This is a disadvantage: because entries are refreshed on the DHT's schedule, the user cannot determine how long the key will survive. Another attack imposed on the Vanish system is the Sybil attack, which works by continuously crawling the DHT and saving each stored value before it times out. To overcome these disadvantages of Vanish, we introduce time-constrained data destruction in the cloud. Our design uses a metadata server and a time-constrained data system that creates active storage objects, and it lets the user specify a precise timeout for which the data remains available.
Our contributions are summarized as follows:
1) We use the Shamir secret sharing algorithm as the key distribution algorithm, dividing the key into equal shares and storing them in the object storage system (see the sketch after this list).
2) Based on the active storage framework, the object-based storage system can manage and store the divided key shares.
3) The time-constrained concept supports securely erasing files and random encryption keys stored on a hard disk drive (HDD) or solid-state drive (SSD).
4) Through a functionality and security evaluation of the time-constrained data destruction prototype, the results demonstrate that time-constrained data destruction is practical to use and meets all of the privacy-preserving goals. The prototype system imposes reasonably low run-time overhead.
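As a minimal illustration of the Shamir scheme referenced in contribution 1), the following Python sketch splits a key into n shares over a prime field and reconstructs it from any k of them. The choice of prime and the share layout are our assumptions for illustration, not the paper's exact implementation.

```python
import random

PRIME = 2**521 - 1  # a Mersenne prime, large enough for a 128-bit key (assumption)

def split_secret(secret, k, n):
    """Split `secret` into n shares; any k shares reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):            # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = random.randrange(2**128)                # random 128-bit data-encryption key
shares = split_secret(key, k=3, n=5)
assert recover_secret(shares[:3]) == key      # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about the key, which is why destroying enough shares renders the encrypted data permanently unreadable.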
RELATED WORK ON TIME-CONSTRAINED DATA SYSTEMS
A. Data Self-Destruct
A time-constrained data system in the Cloud environment should meet the following requirements: i) it must destruct all copies of the data simultaneously and make them unreadable once the data is out of the user's control. A local data destruction approach will not work in Cloud storage because the number of backups or archives of the data stored in the Cloud is unknown, and some nodes preserving backup data may be offline; the cleartext should become permanently unreadable through the loss of the encryption key, even if an attacker can retroactively obtain a pristine copy of that data; ii) it must require no explicit delete actions by the user or by any third party storing the data; iii) it must require no modification to any of the stored or archived copies of the data; iv) it must require no secure hardware, yet support completely erasing data on HDDs and SSDs, respectively.
FADE, which is built upon standard cryptographic techniques, assuredly deletes files to make them unrecoverable to anyone upon revocation of file access policies. A public-key-based homomorphic authenticator with a random mask technique has been used to achieve a privacy-preserving public auditing system for Cloud data storage security. FADE presents three types of assured delete: expiration time known at file creation, on-demand deletion of individual files, and custom keys for classes of data. Vanish is a system for creating messages that automatically self-destruct after a period of time. It integrates cryptographic techniques with global-scale, peer-to-peer distributed hash tables (DHTs): DHTs discard data older than a certain age, so the key is permanently lost and the encrypted data is permanently unreadable after the data expires. Vanish works by encrypting each message with a random key and storing shares of the key in a large, public DHT. However, Sybil attacks may compromise the system by continuously crawling the DHT and saving each stored value before it ages out; such an attack can efficiently recover keys for more than 99% of Vanish messages. Hopping attacks and Sybil attacks are thus special attacks against the Vanish system. In addition, in the Vanish system the survival time of the key is determined by the DHT system and is not controllable by the user. Based on the active storage framework, this paper proposes a distributed object-based storage system with a time-constrained data function. Our system combines a proactive approach with object storage techniques and method objects, using the data processing capabilities of OSDs to achieve data self-destruction. The user can specify the survival time of the distributed key and use an expanded interface to export the life cycle of a key, allowing the user to control the life cycle of private data.
B. Object-Based Storage and Active Storage |
Object-based storage (OBS) uses an object-based storage device (OSD) as the underlying storage device. Each OSD consists of a CPU, network interface, ROM, RAM, and a storage device (disk or RAID subsystem), and it exports a high-level data object abstraction on top of the device's block read/write interface. With the emergence of the object-based interface, storage devices can take advantage of this expressive interface to achieve cooperation between application servers and storage devices. A storage object can be a file consisting of a set of ordered logical data blocks, a database containing many files, or just a single application record such as the database record of one transaction. Information about the data is also stored as objects. An object-based model enables storage class memory (SCM) devices to overcome the disadvantages of current interfaces and provides new features such as object-level reliability and compression. Since data can be processed inside storage devices, people attempt to add more functions into a storage device (e.g., an OSD), make it more intelligent, and refer to it as "Intelligent Storage" or "Active Storage".
C. Completely Erase Bits of Encryption Key |
In the time-constrained data system, erasing files that contain bits (Shamir secret shares) of the encryption key requires care: when we erase or delete a file from storage media, it is not really gone until the areas of the disk it used are overwritten by new information. With flash-based solid-state drives (SSDs), the situation is even more complex because SSDs have a very different internal architecture. Several techniques that reliably delete data from hard disks are available as built-in ATA or SCSI commands and as software tools (such as DataWipe, HDDErase, and SDelete). These techniques provide effective means of sanitizing HDDs: either individual files they store or the drive in its entirety. Software methods typically involve overwriting all or part of the drive multiple times with patterns specifically designed to obscure any remnant data. For instance, unlike erasing files, which simply marks file space as available for reuse, data wiping overwrites all data space on a storage device, replacing useful data with garbage data. Depending on the method used, the overwrite data could be zeros (also known as "zero-fill") or various random patterns. The ATA and SCSI command sets include "secure erase" commands that should sanitize an entire disk. Physical destruction and degaussing are also effective. SSDs work differently from platter-based HDDs, especially when it comes to read and write processes on the drive. The most effective way to securely delete data on platter-based HDDs (overwriting the space with data) becomes unusable on SSDs because of their design. Data on platter-based hard disks can be deleted by overwriting it, which ensures that the data is not recoverable by data recovery tools; this method does not work on SSDs, as SSDs differ from HDDs in both the technology they use to store data and the algorithms they use to manage and access that data. Analog sanitization is more complex for SSDs than for hard drives as well: analysis suggests that verifying analog sanitization in flash memories is challenging because many mechanisms can imprint remnant data on the devices. For SSDs, built-in commands are effective, but manufacturers sometimes implement them incorrectly; overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive; and none of the existing hard-drive-oriented techniques for individual file sanitization are effective on SSDs.
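To make the HDD-versus-SSD distinction above concrete, here is a hedged sketch of software-level sanitization for a file on a platter-based disk: a zero-fill overwrite of its contents before unlinking. As this section notes, this approach cannot be trusted on SSDs, whose flash translation layer may redirect the writes to fresh pages.

```python
import os

def zero_fill_delete(path, passes=1, block=4096):
    """Overwrite a file with zeros, then unlink it (HDDs only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(block, remaining)
                f.write(b"\x00" * n)     # replace useful data with zeros
                remaining -= n
            f.flush()
            os.fsync(f.fileno())         # push the overwrite to the device
    os.remove(path)                      # only now mark the space as free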
DESIGN AND IMPLEMENTATION OF THE TIME-CONSTRAINED DATA SYSTEM
A. Time-Constrained Data Architecture
There are three parties in the active storage framework. i) Metadata server (MDS): the MDS is responsible for user management, server management, session management, and file metadata management. ii) Application node: the application node is a client that uses the storage service of the time-constrained data system. iii) Storage node: each storage node is an OSD. It contains two core subsystems: the key-value store subsystem and the active storage object (ASO) runtime subsystem. The key-value store subsystem, based on the object storage component, manages the objects stored in the storage node: object lookup, object read/write, and so on. The object ID is used as the key; the associated data and attributes are stored as the value. The ASO runtime subsystem, based on the active storage agent module in the object-based storage system, processes active storage requests from users and manages method objects and policy objects.
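The key-value store subsystem just described can be pictured with the following illustrative sketch, in which the object ID serves as the key and the object's data and attributes form the value. The class and method names are hypothetical stand-ins, not the system's actual interface.

```python
class KeyValueStore:
    """Toy model of the storage node's key-value store subsystem."""

    def __init__(self):
        self._objects = {}               # object_id -> (data, attributes)

    def write_object(self, object_id, data, attributes=None):
        self._objects[object_id] = (data, dict(attributes or {}))

    def read_object(self, object_id):
        return self._objects[object_id]  # raises KeyError if missing

    def lookup_object(self, object_id):
        return object_id in self._objects

    def delete_object(self, object_id):
        self._objects.pop(object_id, None)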
B. Active Storage Object |
An active storage object derives from a user object and has a time-to-live (TTL) value property. The TTL value is used to trigger the self-destruct operation. The TTL value of a user object is infinite, so a user object will not be deleted until the user deletes it manually. The TTL value of an active storage object is limited, so an active object will be deleted when the value of the associated policy object becomes true. Interfaces extended by the ActiveStorageObject class are used to manage the TTL value. The create member function takes an additional argument for the TTL. If the argument is -1, UserObject::create is called to create a plain user object; otherwise, ActiveStorageObject::create first calls UserObject::create and then associates the object with the self-destruct method object and a self-destruct policy object carrying the TTL value. The getTTL member function is based on the read_attr function and returns the TTL value of the active storage object. The setTTL, addTime, and decTime member functions are based on the write_attr function and can be used to modify the TTL value.
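A hedged Python sketch of the ActiveStorageObject TTL interface described above follows; the OSD's read_attr/write_attr calls are modeled with a plain attribute, and the method and policy objects are reduced to an expired() check.

```python
import time

INFINITE = float("inf")                   # user objects never expire

class ActiveStorageObject:
    def __init__(self, ttl=INFINITE):
        self.created = time.time()
        self.ttl = ttl                    # seconds until self-destruct

    def get_ttl(self):
        return self.ttl                   # models read_attr

    def set_ttl(self, ttl):
        self.ttl = ttl                    # models write_attr

    def add_time(self, seconds):
        self.ttl += seconds

    def dec_time(self, seconds):
        self.ttl -= seconds

    def expired(self):
        # The self-destruct policy object becomes true once the TTL
        # elapses, which triggers the self-destruct method object.
        return time.time() - self.created >= self.ttl
```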
C. Self-Destruct Method Object |
Generally, kernel code executes efficiently; however, a service method should be implemented in user space for the following reasons. Many libraries, such as libc, can be used by code in user space but not in kernel space. Mature tools exist for developing software in user space, and it is much safer to debug code in user space than in kernel space. A service method may need a long time to process a complicated task, and implementing it in user space avoids tying up the rest of the system. The system might crash on an error in kernel code, but this will not happen if the error occurs in user-space code. A self-destruct method object is such a service method. It needs three arguments: the LUN argument specifies the device, the PID argument specifies the partition, and the OBJ_ID argument specifies the object to be destructed.
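The entry point of such a service method might look like the following hypothetical sketch, reusing the KeyValueStore stand-in from above. A real OSD would also overwrite the freed blocks on the underlying media, as discussed in Sections C and E.

```python
def self_destruct(lun, pid, obj_id, store):
    """Destroy object `obj_id` in partition `pid` on device `lun`.

    `lun` and `pid` mirror the arguments named above; this sketch
    keys only on obj_id against the key-value store stand-in.
    """
    if store.lookup_object(obj_id):
        data, attrs = store.read_object(obj_id)
        store.write_object(obj_id, b"\x00" * len(data), attrs)  # overwrite first
        store.delete_object(obj_id)       # then the key share is gone for good
```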
D. Data Process |
To use the time-constrained data system, the user's application should implement the data processing logic and act as a client node. There are two different logics: uploading and downloading. (i) Uploading file process (see Fig. 2): When a user uploads a file to a storage system and stores his key in the time-constrained system, he should specify the file, the key, and the TTL as arguments of the uploading procedure. Fig. 3 presents its pseudo-code. In that code, we assume the data and key have been read from the file. The encrypt procedure uses a common encryption algorithm or a user-defined encryption algorithm. After the data is uploaded to the storage server, the key shares generated by the Shamir secret sharing algorithm are used to create active storage objects (ASOs) on the storage nodes of the time-constrained system.
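Since Fig. 3's pseudo-code is not reproduced here, the following is a hedged reconstruction of the uploading logic under stated assumptions: AES-GCM (from the pip package `cryptography`) stands in for the common encryption algorithm, split_secret is the Shamir sketch above, and the store_ciphertext and create_aso callables are hypothetical stand-ins for the TPDSS and TCDS client calls.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def upload(data, ttl, nodes, store_ciphertext, create_aso, k=3):
    """Encrypt `data`, store the ciphertext, and spread key shares as ASOs.

    `store_ciphertext(bytes)` writes to the third-party storage system;
    `create_aso(node, share, ttl)` creates one active storage object per
    storage node with the user-specified TTL. Both are caller-supplied.
    """
    key = AESGCM.generate_key(bit_length=128)     # random data-encryption key
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(key).encrypt(nonce, data, None)
    store_ciphertext(ciphertext)                  # ciphertext goes to the TPDSS
    shares = split_secret(int.from_bytes(key, "big"), k, len(nodes))
    for node, share in zip(nodes, shares):
        create_aso(node, share, ttl)              # one key share per node
```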
(ii) Downloading file process: Any user with the relevant permission can download data stored in the data storage system. The data must be decrypted before use; the whole logic is implemented in the code of the user's application. In that code, we assume the encrypted data and the meta-information of the key have been read from the downloaded file. Before decrypting, the client should try to get the key shares from the storage nodes of the time-constrained data system. If the self-destruct operation has not been triggered, the client can get enough key shares to reconstruct the key successfully. If the associated ASOs of the key shares have already been destructed, the key cannot be reconstructed and the data remains permanently unreadable.
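A matching sketch of the downloading logic, under the same assumptions and reusing the imports and Shamir helpers from the sketches above (fetch_ciphertext and fetch_share are hypothetical caller-supplied callables):

```python
def download(nodes, fetch_ciphertext, fetch_share, k=3):
    """Fetch surviving key shares, rebuild the key, and decrypt the data."""
    ciphertext = fetch_ciphertext()               # encrypted data from the TPDSS
    shares = [s for s in (fetch_share(n) for n in nodes) if s is not None]
    if len(shares) < k:
        # The ASOs self-destructed: fewer than k shares survive, so the
        # key, and hence the data, is permanently unrecoverable.
        raise RuntimeError("key shares destructed; data is unreadable")
    key = recover_secret(shares[:k]).to_bytes(16, "big")
    nonce, body = ciphertext[:12], ciphertext[12:]
    return AESGCM(key).decrypt(nonce, body, None)
```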
E. Data Security Erasing on Disk
We must securely delete sensitive data while reducing the negative impact of the deleting operation on OSD performance. Only a small proportion of all files require secure deletion, so the update operation is changed only for those files; otherwise, OSD performance would be impacted greatly. Our implementation method is as follows (see the sketch below): i) the system pre-specifies a directory in a special area to store sensitive files; ii) the client monitors the file allocation table and acquires and maintains a list of the logical block addresses (LBAs) of all sensitive documents; iii) whenever the LBA list of a sensitive document grows or shrinks, the update is sent to the OSD; iv) the OSD internally maintains a synchronized copy of the LBA list and updates the data at the listed LBAs. For example, on an SSD the old data page is overwritten with zeros and the new data is then written to another page; when the LBA list becomes shorter because the corresponding file is shrinking, all pages holding the old data must be overwritten; v) for ordinary LBAs, the system uses the regular update method; vi) by calling an ordinary data-erasure API, we can safely delete sensitive files in the specified directory. Our strategy changes the update operation only for the few sensitive documents and has no effect on the operational performance of ordinary files. In general, the secure delete function is provided while its impact on OSD read and write performance is negligible.
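Steps i)-iv) can be illustrated with the following sketch of the client-side bookkeeping, where the get_lbas and osd_update callables are hypothetical stand-ins for the file-allocation-table monitor and the OSD notification path:

```python
sensitive_lbas = {}                               # path -> set of LBAs

def refresh(path, get_lbas, osd_update):
    """Re-scan one sensitive file and push any LBA-list change to the OSD.

    `get_lbas(path)` reads the file's current block addresses from the
    file allocation table; `osd_update(path, added, freed)` tells the OSD
    which blocks to start tracking and which freed blocks to overwrite.
    """
    new = set(get_lbas(path))
    old = sensitive_lbas.get(path, set())
    if new != old:
        osd_update(path, added=new - old, freed=old - new)
        sensitive_lbas[path] = new
```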
METHODOLOGY FOR THE TIME-CONSTRAINED DATA SYSTEM
A. Implementation
There are multiple storage services available for a user to store data. Meanwhile, to avoid the problems produced by a centralized "trusted" third party, the responsibility of the time-constrained data system (TCDS) is to protect the user's key and provide the self-destructing data function. Fig. 4 shows the basic structure of the user application program realizing the storage process. In this structure, the user application node contains two system clients: a client for any third-party data storage system (TPDSS) and a TCDS client. The user application program interacts with the TCDS server through the TCDS client to obtain the data storage service. The way a client attains the storage service by interacting with a server depends on the design of the TPDSS; no secondary development is needed for different TPDSSs. The process of storing data is unchanged, except that encryption is needed before uploading data and decryption is needed after downloading data. In the process of encryption and decryption, the user application program interacts with the TCDS. To test the implementation of the TCDS described in the previous section, we use pNFS as the TPDSS to implement the data storage service. The pNFS client mainly runs in kernel mode, and a remote file system can be mounted locally. A VMware virtual environment was built up for testing; the configuration of the host and the virtual nodes is shown in Fig. 5. To avoid creating virtual machines repeatedly, we use the same configuration on every node. From a performance point of view, some adjustments may be needed, such as improving the CPU configuration of the metadata server and increasing the disk and memory size of the storage nodes. The VMware version is VMware Workstation 7.1.3 build-324285.
B. Evaluation |
The evaluation platform, built on pNFS, supports simple file management, including data processing functions such as file uploading, downloading, and sharing.
1) Functional Testing: We input the full path of the file, the key file, and the life time for the key parts. The system encrypts the data and uploads the encrypted data. The life time of the key parts is 150 s for a sample text file of 101 bytes. The system then reports that creating the active objects succeeded, which means the file upload is complete. The time output at the end is the time taken to create the active objects. The time-constrained data system was checked and corresponded with the changes in the working directory of the storage node. The sample text file was also downloaded and shared successfully before the key was destructed.
2) Performance Evaluation: |
As mentioned, the difference in the I/O process between the time-constrained data system and a native system (e.g., pNFS) is the additional encryption/decryption process, which needs support from the computation resources of the TCDS client. We compare two systems: i) the time-constrained data system based on the active storage framework (TCDS for short), and ii) a conventional system without the self-destructing data function (Native for short). We evaluated the latency of upload and download with the two schemes (TCDS and Native) under different file sizes. We also evaluated the overhead of encryption and decryption with the two schemes under different file sizes. Fig. 6 shows the latency of the different schemes. We observe that the time-constrained data system increases the average latency of the Native system by 59.06% and 25.69% for upload and download, respectively. The reason for this performance degradation is that the encryption and decryption processes introduce overhead.
CONCLUSION |
Data privacy has become increasingly important in the Cloud environment. In this paper, we introduced a new approach for protecting data privacy from attackers who retroactively obtain, through legal or other means, a user's stored data and private decryption keys. We demonstrated the feasibility of our approach by presenting the time-constrained data system, a proof-of-concept prototype based on object-based storage techniques. It causes sensitive information, such as account numbers, passwords, and notes, to irreversibly self-destruct, without any action on the user's part: the data destructs itself after the specified timeout, and all copies of the data and the encryption keys are destroyed as well. In future work, we aim to add an integrity mechanism and to build a system that provides stronger security for the data by implementing the application with the Blowfish algorithm.