Developing Use Cases and State Transition Models for Effective Protection of Electronic Health Records (EHRs) in Cloud

V.M.Prabhakaran1, Prof.S.Balamurugan2, S.Charanyaa3
  1. PG Scholar, Department of CSE, Kalaignar Karunanidhi Institute of Technology, Coimbatore, TamilNadu, India
  2. Assistant Professor, Department of IT, Kalaignar Karunanidhi Institute of Technology, Coimbatore, TamilNadu, India
  3. Senior Software Engineer, Mainframe Technologies (Former), Larsen & Toubro (L&T) Infotech, Chennai, TamilNadu, India
International Journal of Innovative Research in Computer and Communication Engineering

Abstract

This paper proposes a new object oriented design of use cases and state transition models to effectively guard Electronic Health Records (EHRs). Privacy is an important factor that must be considered when publishing microdata. Government agencies and other organizations routinely publish microdata, and releasing it can disclose sensitive information about individuals. This constitutes a major problem for government and organizational sectors that release microdata. To secure the sensitive information and prevent its disclosure, certain algorithms and methods are implemented. There are normally two types of information disclosure: identity disclosure and attribute disclosure. Identity disclosure occurs when an individual is linked to a particular record in the released data; attribute disclosure occurs when new information about some individuals is revealed. This paper discusses the existing techniques in the literature for privacy preservation, the incremental development of the proposed system, and its use cases and state transition models.

Keywords

Electronic Health Records (EHRs), Privacy, Microdata, Medical Healthcare System, Database Security

INTRODUCTION

Cloud computing focuses on large-scale storage of information across multiple servers. It offers several resource characteristics such as dynamism, abstraction and resource sharing. Health issues and health problems are increasing day by day, so maintaining and monitoring health data is important.
Cloud based techniques help health and clinical organizations concentrate more on improving the quality of service of their health operations. MYPHR machine is a patient centric system. MYPHRs are claimed to be the next generation consumer-centric information systems that help improve health care delivery, self-management and wellness by providing seamless and complete information, which increases understanding, capability and awareness. The MYPHR machine is designed to solve the problem of health record portability and to provide a tight relationship between the doctor or institution and the patient. The main aspect of the MYPHR machine design is to make PHR data portable.
The remainder of the paper is organized as follows. Section 2 surveys the literature on techniques prevailing to protect EHRs. Basic primitives and terminologies are discussed in Section 3. Section 4 discusses the use case model and state transition model of the security system for cloud based PHRs. Section 5 enumerates the advantages of the proposed system. Section 6 concludes the paper and outlines directions for future work.

LITERATURE SURVEY

C. Vecchiola, M. Kirley, and R. Buyya (2009) presented a distributed implementation of a network based multiobjective evolutionary algorithm, EMO (Evolutionary Multi-objective Optimization), built with the Offspring framework. Recently, easy access to Grid and Cloud computing infrastructures has made the deployment of hierarchical models quite common. These models compose the previously discussed models to better exploit the heterogeneity of distributed computing resources found within Enterprise Clouds or Computing Grids. The execution of the evolutionary algorithm is generally divided into layers, and at each layer a different model can be used. The most common implementation is based on a two level structure which uses a multi-population coarse grained distribution model at the first level and a master-slave or a cellular model at the second level. The aim of Offspring is to minimize the code required to provide a distributed implementation of population based metaheuristics without requiring researchers to know distribution middleware APIs. It provides a friendly environment for researchers in combinatorial optimization who do not want to be concerned with building interconnection software layers and learning underlying middleware APIs: a visual user interface that manages the execution of population based optimization algorithms, and a set of APIs allowing researchers to write a plugin for this environment quickly. Results show that the model proposed by Offspring is effective when there is a real need for a distributed implementation. Network based evolutionary algorithms require at least 1000 individuals to be effective, and the model proposed by Offspring already provides an increasing speedup when the number of individuals is only 300. A preliminary analysis of the overhead introduced by Offspring and the Cloud middleware used shows encouraging results for large population sizes; the distribution infrastructure provided by Offspring does not affect performance.
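As a rough illustration of the two-level structure described above (and not of Offspring's actual APIs, which are not reproduced here), the following minimal Python sketch combines a coarse-grained island model at the first level with master-slave fitness evaluation at the second level; the toy objective, population sizes and function names are all assumptions made for the example.

# Minimal two-level sketch: coarse-grained islands (level 1) whose fitness
# evaluations are farmed out master-slave style (level 2). Names and the toy
# objective are illustrative only.
import random
from concurrent.futures import ProcessPoolExecutor

def fitness(x):                       # toy objective: minimise sum of squares
    return sum(v * v for v in x)

def evolve_island(pop, generations=10):
    # One island: rank-based selection + Gaussian mutation; "slaves" evaluate in parallel.
    with ProcessPoolExecutor() as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))
            ranked = [p for _, p in sorted(zip(scores, pop))]
            parents = ranked[: len(pop) // 2]
            pop = parents + [
                [g + random.gauss(0, 0.1) for g in random.choice(parents)]
                for _ in range(len(pop) - len(parents))
            ]
    return pop

def migrate(islands, k=2):
    # Ring migration between islands (the coarse-grained level).
    for i, isl in enumerate(islands):
        neighbour = islands[(i + 1) % len(islands)]
        neighbour[-k:] = sorted(isl, key=fitness)[:k]   # send k best to the right
    return islands

if __name__ == "__main__":
    islands = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
               for _ in range(4)]
    for _ in range(5):
        islands = [evolve_island(pop) for pop in islands]
        islands = migrate(islands)
    best = min((ind for isl in islands for ind in isl), key=fitness)
    print("best individual:", best, "fitness:", fitness(best))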

A. Pegasus

E. Deelman, G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G. B. Berriman, J. Good, A. Laity, J. C. Jacob, and D. S. Katz (2005) proposed Pegasus, a framework that can be used to map complex scientific workflows onto distributed resources. The authors surveyed existing workflow management systems such as WebFlow, GridFlow, GridAnt, Nimrod-G and the ASCI Grid. WebFlow provides a multileveled system for high performance distributed computing and a visual programming aid. GridFlow provides a two-tiered architecture with global Grid workflow management and local Grid sub-workflow scheduling. GridAnt employs the Ant workflow processing engine. Nimrod-G is a cost and scheduling based resource management and scheduling system. The ASCI Grid (Accelerated Strategic Computing Initiative Grid distributed resource manager) includes a desktop submission tool, a workflow manager and a resource broker. All the existing strategies focus on resource brokerage and scheduling strategies, whereas Pegasus uses the concept of virtual data and provenance to generate and reduce workflows based on data products which have already been computed. It prunes the workflow based on the assumption that it is always more costly to compute a data product than to fetch it from an existing location. Pegasus also automates replica selection so that the user does not have to specify the location of input data files, and it can map and schedule only portions of the workflow at a given time using partitioning techniques. The major functionality of Pegasus includes defining the set of available and accessible resources, resource selection and task clustering.
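The workflow-reduction idea is simple enough to sketch. The following hedged Python example (the dictionary-based DAG and replica catalogue are hypothetical stand-ins, not Pegasus data structures) prunes every job whose output already exists in a replica catalogue, together with any ancestor jobs that are consequently no longer needed.

# Sketch of virtual-data workflow reduction: drop jobs whose outputs are already
# available in a replica catalogue (assumed cheaper to fetch than recompute).
def reduce_workflow(jobs, replica_catalog):
    """jobs: {job_id: {"output": file_name, "parents": [job_id, ...]}}"""
    materialised = {j for j, spec in jobs.items() if spec["output"] in replica_catalog}

    needed = set()
    def mark_needed(job_id):
        if job_id in needed or job_id in materialised:
            return                        # data already exists -> prune this branch
        needed.add(job_id)
        for parent in jobs[job_id]["parents"]:
            mark_needed(parent)

    # Leaves (jobs nobody depends on) drive what must still be computed.
    all_parents = {p for spec in jobs.values() for p in spec["parents"]}
    for leaf in set(jobs) - all_parents:
        mark_needed(leaf)
    return {j: jobs[j] for j in needed}

workflow = {
    "extract":   {"output": "raw.dat",   "parents": []},
    "transform": {"output": "clean.dat", "parents": ["extract"]},
    "analyse":   {"output": "stats.dat", "parents": ["transform"]},
}
print(reduce_workflow(workflow, replica_catalog={"clean.dat"}))
# -> only "analyse" remains; "transform" and "extract" are pruned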

B. Taverna

T. Oinn, M. Addis, J. Ferris, D. Marvin, M. Senger, M. Greenwood, T. Carver, K. Glover, M. R. Pocock, A. Wipat, and P. Li (2004) developed Taverna, a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application which provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the Simple Conceptual Unified Flow Language (Scufl), whereby each step within a workflow represents one atomic task. The Taverna workflow system is available as open source and can be downloaded with example Scufl workflows from http://taverna.sourceforge.net.

C. GridFlow

J. Cao, S. A. Jarvis, S. Saini, and G. R. Nudd (2003) developed GridFlow, a workflow management system for grid computing. GridFlow includes a user portal and services for both global grid workflow management and local grid sub-workflow scheduling. Simulation, execution and monitoring functionalities are provided at the global grid level, which work on top of an existing agent-based grid resource management system. At each local grid, sub-workflow scheduling and conflict management are processed on top of an existing performance-prediction-based task scheduling system. A fuzzy timing technique is applied to address new challenges of workflow management in a cross-domain and highly dynamic grid environment. A case study is given, and the corresponding results indicate that local and global grid workflow management can coordinate with each other to optimize workflow execution time and resolve conflicts of interest.

D. ICENI

N. Furmento, W. Lee, A. Mayer, S. Newhouse, and J. Darlington (2002) developed the Imperial College e-Science Networked Infrastructure (ICENI), an open grid service architecture implemented with Jini. It is an extension of the current Grid in which information and services are given well-defined meaning, better enabling computers and people to work in cooperation. Jini is a plug-and-play technology that allows people to use networked devices and services as simply as using a phone today. Features of Jini include dynamic registration, service lookup, distributed object access and the platform portability provided by Java.

E. GridAnt

K. Amin, G. von Laszewski, M. Hategan, N. J. Zaluzec, S. Hampton, and A. Rossi (2004) developed an extensible client-side workflow management system called GridAnt. The design principles, functionality, and application of the proposed GridAnt workflow manager are also discussed. The proposed GridAnt tool aims to (1) enable Grid users to orchestrate complex workflows on the fly without substantial help from the service providers, and (2) ensure the Grid user is not burdened with the intricacies of the workflow system. GridAnt essentially consists of four components: a workflow engine, a runtime environment, a workflow vocabulary, and a workflow monitor. Apache Ant was selected as the GridAnt workflow engine because of its extensibility and popularity in the Java community.

F. Triana

I. Taylor, I. Wang, M. Shields, and S. Majithia (2005) developed Triana, a distributed problem-solving environment. Triana enables a user to (1) compose applications from a set of components, and (2) select the resources on which the composed application can be distributed and then execute the application on those resources.

G. Kepler

B. Ludäscher, I. Altintas, C. Berkley, D. Higgins, E. Jaeger, M. Jones, E. A. Lee, J. Tao, and Y. Zhao (2006) developed Kepler, a scientific workflow system, to describe the characteristics of and requirements for scientific workflows as identified in a number of application projects. Kepler is a particular scientific workflow system currently under development across a number of scientific data management projects; it is a community-driven, open source project.
The above mentioned projects on Workflow Management Systems (WfMS) schedule tasks onto resources based on Earliest Finishing Time, Earliest Starting Time or high processing capability. This approach is termed the Best Resource Selection (BRS) approach, where a resource is selected based on its performance.
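A minimal sketch of the BRS idea, under the assumption of simple additive run-time estimates (the task sizes and resource speeds below are invented for illustration): each task is greedily assigned to the resource that yields the earliest estimated finish time.

# Best Resource Selection (BRS) sketch: greedily assign each task to the
# resource with the earliest estimated finish time. Speeds/sizes are toy values.
def brs_schedule(tasks, resources):
    """tasks: {name: work_units}; resources: {name: speed_in_units_per_sec}."""
    ready_time = {r: 0.0 for r in resources}           # when each resource frees up
    schedule = {}
    for task, work in tasks.items():
        # Finish time on each candidate resource = its ready time + estimated run time.
        best = min(resources, key=lambda r: ready_time[r] + work / resources[r])
        start = ready_time[best]
        ready_time[best] = start + work / resources[best]
        schedule[task] = (best, start, ready_time[best])
    return schedule

tasks = {"t1": 40, "t2": 10, "t3": 25}                  # work units
resources = {"fast_node": 4.0, "slow_node": 1.0}        # units per second
for task, (res, start, finish) in brs_schedule(tasks, resources).items():
    print(f"{task} -> {res}: start {start:.1f}s, finish {finish:.1f}s")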

BASIC PRIMITIVES AND TERMINOLOGIES

A cloud environment may serve a company, an organization or an individual that uses web based applications for its tasks rather than installing software and storing data on a local computer. The cloud environment provides the functionality to outsource and encrypt data. A cloud storage service is accessed through a cloud computing service, a web service application programming interface, or a cloud storage gateway. The cloud based workspace is centralized, providing easy functionality for sharing, and the cloud environment can provide improvements in system efficiency and density. It solves the problems of complicated configuration management, decreased productivity, limited accessibility and poor collaboration, and it offers the capability to access all work, databases and other information from any device. The cloud environment also provides a basic network model for storage of data in the cloud. The basic network model for cloud data storage involves three different network entities:
1. User
2. Cloud storage server
3. Cloud service provider
a. User: An entity which has large data files to be stored in the cloud and relies on the cloud for data maintenance and computation; users can be either individual consumers or organizations.
b. Cloud Storage Server (CSS): An entity which is managed by the cloud service provider. Cloud storage is a subset of cloud computing: cloud computing organizations offer users access not only to storage, but also to processing power and computer applications mounted on a remote network. Cloud storage provides users with instant access to a wide range of resources and applications hosted in the infrastructure of another organization through a web service interface. The security of stored data and of data transfer may be a concern when sensitive data is stored at a cloud storage provider.
c. Cloud Service Provider (CSP): A cloud provider is a company that offers some component of cloud computing and has significant storage space and computation resources to maintain user data. The data owner encrypts some keywords about his data, and the service provider helps the owner retrieve his data by those keywords while not allowing others to retrieve it. When assuming the role of cloud provider, an organization is accountable for making cloud services available to cloud customers. A minimal sketch of how these three entities interact is given below.
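To make the interaction concrete, the following is a minimal, hedged Python sketch of the three entities above: the user encrypts a record locally and hands it to the cloud service provider, which places it on the cloud storage server; retrieval reverses the path. The class and method names are illustrative assumptions, and the XOR routine is only a placeholder for a real cipher such as AES.

# Toy model of the three entities above (User, CSS, CSP). Interfaces are illustrative.
import os

class CloudStorageServer:                  # CSS: just holds opaque encrypted blobs
    def __init__(self):
        self._blobs = {}
    def put(self, key, blob):
        self._blobs[key] = blob
    def get(self, key):
        return self._blobs[key]

class CloudServiceProvider:                # CSP: fronts the storage server for users
    def __init__(self, server):
        self._server = server
    def upload(self, key, blob):
        self._server.put(key, blob)
    def download(self, key):
        return self._server.get(key)

class User:                                # data owner: encrypts before outsourcing
    def __init__(self, provider):
        self._provider = provider
        self._key = os.urandom(32)
    def _xor(self, data):                  # stand-in for a real cipher (e.g. AES)
        return bytes(b ^ self._key[i % len(self._key)] for i, b in enumerate(data))
    def store(self, name, record):
        self._provider.upload(name, self._xor(record.encode()))
    def fetch(self, name):
        return self._xor(self._provider.download(name)).decode()

csp = CloudServiceProvider(CloudStorageServer())
patient = User(csp)
patient.store("ehr-001", "blood pressure: 120/80")
print(patient.fetch("ehr-001"))            # record is decrypted only on the user side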

USE CASE MODELLING

Use case diagrams model the functionality of a system using actors and use cases. Use cases are services or functions provided by the system to its users. The components in a use case diagram include:
• Actors: Actors represent external entities of the system. These can be people or things that interact with the system being modeled. For example, if we are modeling an online store, there are many actors that interact with the store's functionality. The customer browses the catalog, chooses items to buy, and pays for those items. A stocker looks at the orders and packages items for the customer. A billing system charges the customer's credit card for the amount purchased.
• Use Cases: Use cases are functional parts of the system. When we say what an actor does, that is a use case. The customer "browses the catalog", "chooses items to buy", and "pays for the items"; these are all use cases. Many actors can share use cases. If we find a use case that is not associated with any actor, it may be unnecessary functionality.
• Associations: Associations are shown between actors and use cases by drawing a solid line between them. This only represents that an actor uses the use case.
There are also two kinds of relationships between use cases:
• Includes: Use cases that are associated with actors can be very general. Sometimes they "include" more specific functionality. For example, the "pump gas" use case that is associated with the customer includes three use cases: Choose Gas Type, Fill Tank, and Calculate Total. The includes relationship is represented by a dashed arrow that points to the included functionality, labeled <<includes>>.
• Extends: An extension use case is an addition to the base use case. For example, some stores may allow different payment options such as credit card, debit card, or cash on delivery. These specific functionalities are extensions of the general "pay for items" use case. The extends relationship is represented by a dashed arrow that points to the base functionality, labeled <<extends>>. A minimal sketch of how these elements can be represented as data structures is given after this list.
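As a minimal sketch (the EHR-flavoured actor and use-case names are illustrative assumptions, not taken from the paper's diagrams), the elements above can be captured as plain data structures, which also makes the "use case without an actor" check from the list easy to automate.

# Sketch: the use-case elements above captured as plain data structures.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    includes: list = field(default_factory=list)    # <<includes>> sub-functionality
    extends: list = field(default_factory=list)     # <<extends>> base use cases

@dataclass
class Actor:
    name: str
    associations: list = field(default_factory=list)  # use cases this actor uses

upload_record = UseCase("Upload Health Record",
                        includes=[UseCase("Encrypt Record"),
                                  UseCase("Generate Access Key")])
emergency_access = UseCase("Emergency Access",
                           extends=[UseCase("View Health Record")])

patient = Actor("Patient", associations=[upload_record])
doctor = Actor("Doctor", associations=[UseCase("View Health Record"), emergency_access])

# Sanity check from the text: every use case should be reachable from some actor,
# otherwise it may be unnecessary functionality.
def reachable(actors):
    seen, stack = set(), [uc for a in actors for uc in a.associations]
    while stack:
        uc = stack.pop()
        if uc.name not in seen:
            seen.add(uc.name)
            stack.extend(uc.includes + uc.extends)
    return seen

print(reachable([patient, doctor]))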

CONCLUSION AND FUTURE WORK

This paper proposed a new object oriented design of use cases and state transition models to effectively guard Electronic Health Records (EHRs). The issue of outsourcing data to the cloud is addressed by a method of key generation for the cloud user. Cloud computing, besides maximizing the effectiveness of shared resources, also provides an easy way of storing and retrieving data. Personal Health Records (PHRs) are designed to maintain the lifelong details of patients. An Automated Patient Identifier and Patient Care System is designed to count hospitalized patients based on the concept of a Current Procedural Terminology (CPT) manager. Cloud storage services are accessed through a cloud computing service, a web service application programming interface or a cloud storage gateway. The cloud based workspace is centralized, providing easy functionality for sharing, and the cloud environment can provide improvements in system efficiency and density. As part of future work, we plan to design UML diagrams to examine the problem and increase clarity, to implement the uploading of encrypted medical data to the cloud, and to create individual cloudlets for preventing unauthorized users.
 


References