ISSN Online: 2320-9801, Print: 2320-9798


Further Investigations on Evolution of Approaches Developed For Database Security

R.S.Venkatesh1, P.K.Reejeesh2, Prof.S.Balamurugan3, S.Charanyaa4
  1,2,3 Department of IT, Kalaignar Karunanidhi Institute of Technology, Coimbatore, TamilNadu, India
  4 Senior Software Engineer, Mainframe Technologies (Former), Larsen & Toubro (L&T) Infotech, Chennai, TamilNadu, India

Published in the International Journal of Innovative Research in Computer and Communication Engineering.

Abstract

This paper reviews methods developed for anonymizing data from 1989 to 1993. Publishing microdata, such as census or patient records, for research and other purposes is an important problem area for government agencies and other social organizations. The literature survey reveals that the traditional approach of eliminating uniquely identifying fields, such as social security numbers, from microdata still results in disclosure of sensitive data. k-Anonymization appears promising and powerful in certain cases, but optimal k-anonymity is NP-hard, leading to severe computational challenges; k-anonymity is also vulnerable to the homogeneity attack and the background-knowledge attack. The notion of l-diversity, proposed in the literature to address these issues, poses constraints of its own: it is inefficient at preventing attribute disclosure (the skewness attack and the similarity attack), can be difficult to achieve, and may not provide sufficient privacy protection. Controlling the distribution of a sensitive attribute across equivalence classes can substantially improve privacy compared with information-disclosure-limitation techniques such as sampling, cell suppression, rounding, data swapping, and perturbation. This paper discusses an efficient anonymization approach that partitions microdata into equivalence classes, minimizes closeness by kernel smoothing, and determines Earth Mover's distances by controlling the distribution pattern of the sensitive attribute while maintaining diversity.

Keywords

Data Anonymization, Microdata, k-anonymity, Identity Disclosure, Attribute Disclosure, Diversity

I. INTRODUCTION

The need to publish sensitive data has grown tremendously in recent years. Although such publication is in demand, published social network data must not disclose the private information of individuals. Protecting the privacy of individuals while preserving the utility of social network data therefore becomes a challenging and interesting research topic. Considering a graphical model [35] in which each vertex carries a sensitive label, algorithms can be developed to publish non-tabular data without compromising the privacy of individuals. Even when the data is represented in a graphical model after KDLD sequence generation [35], it remains susceptible to several attacks, such as the homogeneity attack, the background-knowledge attack, similarity attacks, and many more. In this paper we investigate these attacks, the solutions proposed in the literature, and their efficiency.
The remainder of the paper is organized as follows. Section 2 discusses voice network security systems. Section 3 presents a methodology for network security design. Computer network abuse is depicted in Section 4. Section 5 concludes the paper and outlines directions for future work.

II. VOICE NETWORK SECURITY SYSTEMS

Paavo T. Kousa (1989) describes a voice network security system that provides a secure environment for exchanging voice messages among users while preventing an attacker from gaining access as the messages are transferred. The important point is that the voice messages are exchanged between users at remote locations.
In earlier network message systems, the user's mailbox was used to store voice messages: the voice message was first loaded into the system and, on request, distributed to the receivers. The main drawback is that an intruder can intercept and tamper with these messages. Encryption algorithms can be employed to protect the exchanged voice messages, but this is not appropriate for all networks, and such systems tend to be expensive.
The recent network security system consists of a base station and node stations. Every message is encrypted, encapsulated in the message-transmit protocol, and then transmitted to the receiver.
Both a secret key and a public key are used for encryption. These two keys are used at the initial stage to produce a session key; the content of every message is then encrypted and decrypted with the key so produced.
Initially, the base station selects a random number, encrypts it with the node's encryption key, and sends it to the respective node station. The node station recovers the random number by decrypting the message. This random number serves as a seed for producing further random numbers. The node station encrypts its message with the second random number and sends it to the base station, which decrypts it and retains the random number. In this way the real information is exchanged between the base and node stations, resulting in secure communication; an attacker will find it hard to access the voice message system.
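The random-number exchange described above can be sketched as follows. The cipher, key sizes, and derivation step are illustrative assumptions, since the original system does not specify them; a toy XOR keystream stands in for the real encryption.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream.
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Long-term node encryption key, assumed pre-distributed to both stations.
node_key = secrets.token_bytes(16)

# 1. The base station picks a random number and sends it encrypted
#    under the node's key.
seed = secrets.token_bytes(16)
challenge = keystream_xor(node_key, seed)

# 2. The node station decrypts the message and retains the random number.
recovered = keystream_xor(node_key, challenge)

# 3. Both sides derive a second random number (session key) from the seed.
session_key = hashlib.sha256(recovered + b"session")

# 4. The node encrypts the real message with the derived key and the base
#    station decrypts it the same way.
msg = b"voice frame 0001"
ciphertext = keystream_xor(session_key.digest()[:16], msg)
print(keystream_xor(session_key.digest()[:16], ciphertext))  # b'voice frame 0001'
```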
Any number of node stations can be employed in a network security system. Base and node stations exchange messages bidirectionally. Before transmitting messages, the base station sends a wake-up signal to the node station; the node station in turn sends a ready signal back to the base station. Only after this handshake is the connection established and message transfer begun.
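A minimal sketch of this wake-up/ready handshake; the class names, signal strings, and state machine are invented, since the paper names only the signals and not an API.

```python
from enum import Enum, auto

class LinkState(Enum):
    IDLE = auto()
    WAKE_SENT = auto()
    READY = auto()

class NodeStation:
    def on_wake_up(self) -> str:
        # A node station answers a wake-up signal with a ready signal.
        return "READY"

class BaseStation:
    def __init__(self) -> None:
        self.state = LinkState.IDLE

    def connect(self, node: NodeStation) -> bool:
        # Send the wake-up signal and wait for the ready signal before
        # any message transfer begins.
        self.state = LinkState.WAKE_SENT
        if node.on_wake_up() == "READY":
            self.state = LinkState.READY  # connection established
            return True
        return False

base = BaseStation()
print(base.connect(NodeStation()))  # True
```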
Before a message is transmitted, a checksum may also be added to it to detect the presence of attackers; checksums are otherwise commonly used in systems to detect errors, and a "checksum verifier" tests the checksums used. The checksum is not always employed, however, because it can erode the security-altering technique of the system. During message transfer, the base system's identification is sent to the node station, and the node system's identification is received by the base station.
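A checksum verifier of the kind described can be illustrated with a simple 16-bit additive checksum; the actual algorithm used by the system is not specified, so this is only a stand-in.

```python
def checksum(data: bytes) -> int:
    """16-bit additive checksum over the message bytes."""
    return sum(data) & 0xFFFF

def verify(data: bytes, received_sum: int) -> bool:
    """Checksum verifier: recompute and compare against the received value."""
    return checksum(data) == received_sum

msg = b"hello node station"
tag = checksum(msg)
print(verify(msg, tag))         # True
print(verify(msg + b"!", tag))  # False: the message was altered in transit
```

Note that an additive checksum detects accidental corruption, not deliberate tampering; a deliberate attacker can simply recompute it, which is consistent with the paper's caution about relying on checksums for security.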
This system provides a secure way of exchanging messages between base and node stations and prevents attackers from gaining access to the system.

III. METHODOLOGY FOR NETWORK SECURITY DESIGN

D. Graft (1990) notes that security is a major issue in all network communication systems. To address it, a new methodology for network security systems was designed with the help of the OSI model and security architecture. Such a design methodology is needed because earlier "ad hoc" designs did not show good results; the main aim here is to assess the workability of this design methodology.
Design methodology for network security consists of three phases:
1. Specification phase - gathers all the system requirements and defines the set of conditions for the design.
2. Design phase - specifies the system architecture and the security services, mechanisms, and protocols used.
3. Implementation phase - validation and verification, and analysis of performance and workability.
A "problem-centered approach" is employed in the specification phase: prior to design and implementation, the problem to be solved is first analyzed properly. The specification phase has two components:
1. Identifying requirements - the requirements depend on the problem at hand. In a completely insecure system, many requirements are needed to protect the system, whereas a secure system needs fewer. Either way, the aim is to increase the security the system provides.
The initial step in identifying requirements is to select a proper domain in which all security services will function effectively. The security services and protocols to be used are collected as application requirements, and a detailed study is made of the security services used. Security management comprises the requirements needed for managing security; additionally, key distribution methods are used to solve management issues.
2. Defining conditions - three basic conditions must be considered for the design: applicable standards; network type and topology; and organization.
A proper solution to the problem is produced in the design phase. The solution should fulfill the requirements specified in the specification phase. This phase defines the overall architecture of the security system; the security architecture includes all the functions needed to maintain security. These functions are inserted into the seven layers of the OSI model, although placing all of them in the OSI model carries many risks. The security mechanisms and protocols are selected on the basis of the conditions and services needed by the system, and a protocol should not erode the security of the system.
Heberlein (1991): in the implementation phase, the design is developed into a working product by mapping it onto the software and hardware needed, followed by verification, validation, and testing to establish performance and workability, confirming that all requirements are satisfied.
We have studied a new methodology for network security systems, but it still has drawbacks. Care is required in selecting appropriate security protocols; security mechanisms and security protocols cannot be detached from each other; the methodology does not support new developments; and it suits simple applications only.
John R. Corbin (1992) informs us that to install the Network Security Monitor (NSM), one of the following commands is used: tar xvf nsm.tar for NSM distributed as a tar file, or tar xv for NSM distributed on tape. Either command produces a new directory called NSM in the local directory. The author describes in detail every command used to create the file structure and the operations performed to achieve a desired task.

IV. COMPUTER NETWORK ABUSE

Unauthorized access to data and manipulation of data by intruders are the main computer abuses seen in "cyberspace". Computer abuse is an illegal act in communication technology.
"The Alleged Problem" surveys computer abuses in a network and is divided into three subsections. The first covers computer crime and security: it examines whether a user accessing the system is authorized, and discusses the usage and cost of the methods employed. The second subsection examines the crimes committed by attackers, and the third covers computer abuse as represented in the media.
Remote computing is a great advantage for computer users, allowing a user to access a computer from a remote location. Its drawback, however, is authentication. The general form of authentication is the password, but even when a unique, strong password is used, an attacker can gain access to the computer system by guessing it; a password alone is therefore not a good way to secure a computer system.
Password protection remains insecure among local users as well: "shoulder surfing" is a method hackers use to observe passwords. Apart from password authentication, there are other methods to reduce abuse in a system, such as encryption techniques and call-back systems. With encryption, only the intended user at the remote location can read the data sent over telephone lines; in a call-back system, only a remote user with a unique, registered phone number can access the target system.
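The call-back idea can be sketched as follows; the user registry, function names, and the dial placeholder are hypothetical, as the text describes the mechanism only in outline.

```python
from typing import Optional

# Hypothetical registry of users and their pre-registered phone numbers.
REGISTERED = {"alice": "+1-555-0101"}

def callback_access(user: str) -> Optional[str]:
    """Grant access by dialing back only the number on file for the user,
    ignoring whatever line the incoming call arrived on. Returns the
    number dialed, or None if access is refused."""
    number_on_file = REGISTERED.get(user)
    if number_on_file is None:
        return None            # unknown user: no call-back, no access
    # dial(number_on_file)     # placeholder for the actual modem call-back
    return number_on_file

print(callback_access("alice"))    # +1-555-0101
print(callback_access("mallory"))  # None
```

The security property is that an attacker who merely knows a password still cannot complete a session, because the return call goes only to the registered line.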
Turning to abuse in the media, the well-known hacker Kevin Mitnick hacked Digital Equipment Corporation by simply tracing telephone lines. Some case studies suggest there are more "human" hackers than "computer" hackers. The "hacker ethic" was invoked to maintain systems: it permits a trustworthy hacker to access a computer system provided no destruction is caused to it.
To counter computer abuse, many criminal laws can be applied, the most common being state and federal criminal laws, discussed in three subsections. The first subsection deals with state crime laws. The second deals with federal crime laws, which mainly focus on computer abuses such as wire fraud and interstate transportation of stolen property, e.g., the Computer Fraud and Abuse Act and the Electronic Communications Privacy Act. The third deals with the legal proceedings under these criminal laws.
Many criminal laws fail under the following circumstances:
1. Arbitrary spatial distinctions in cyberspace.
2. Risk in detecting criminal activity.
3. Difficulty in obtaining the criminal's identity.
4. Difficulty of proving criminal capability.
5. Absence of incentives to report computer abuse.
6. Absence of deterrence in criminal laws.
These circumstances arise under the ex-post criminalization method, so implementing it is of little use. To compensate, an ex-ante prevention method can be employed, again in three subsections: the first two use indirect federal regulation, and the third assesses whether these methods are suitable.
The best way to avoid computer abuse is the ex-ante prevention method, which gradually increases computer security by employing properly sophisticated authorization methods.

V. CONCLUSION AND FUTURE WORK

Various methods developed for anonymizing data from 1989 to 1993 have been discussed. Publishing microdata, such as census or patient records, for research and other purposes is an important problem area for government agencies and other social organizations. The literature survey reveals that the traditional approach of eliminating uniquely identifying fields, such as social security numbers, from microdata still results in disclosure of sensitive data. k-Anonymization appears promising and powerful in certain cases, but optimal k-anonymity is NP-hard, leading to severe computational challenges, and k-anonymity remains vulnerable to the homogeneity attack and the background-knowledge attack. l-Diversity, proposed in the literature to address these issues, poses constraints of its own: it is inefficient at preventing attribute disclosure (the skewness attack and the similarity attack), can be difficult to achieve, and may not provide sufficient privacy protection. Controlling the distribution of a sensitive attribute across equivalence classes can substantially improve privacy compared with information-disclosure-limitation techniques such as sampling, cell suppression, rounding, data swapping, and perturbation. The evolution of data anonymization techniques and data disclosure prevention techniques has been discussed in detail, and the application of data anonymization techniques to several spectra of data, such as trajectory data, has been depicted. This survey should promote many research directions in the area of database anonymization.
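The Earth Mover's distance used when controlling the distribution of an ordered sensitive attribute (as in t-closeness) reduces to a cumulative-difference sum over the histogram bins; the two histograms below are invented for illustration.

```python
def earth_movers_distance(p, q):
    """EMD between two histograms over an ordered sensitive attribute,
    computed with the 1-D cumulative-difference formulation and
    normalized by (m - 1), as is common in the t-closeness literature."""
    assert len(p) == len(q) and len(p) > 1
    carried, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        carried += pi - qi   # probability mass still to be moved rightward
        total += abs(carried)
    return total / (len(p) - 1)

# Overall distribution of the sensitive attribute vs. one equivalence class.
overall = [0.5, 0.3, 0.2]
eq_class = [0.2, 0.3, 0.5]
print(earth_movers_distance(overall, eq_class))  # approximately 0.3
```

An equivalence class whose sensitive-attribute histogram is close (in EMD) to the overall distribution leaks little about any individual in it, which is the intuition behind bounding this distance.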
 

References