
Implementation of Optimized Cost, Load and Service Monitoring for Grid Computing

K.G.S. Venkatesan1, AR. Arunachalam 2, S. Vijayalakshmi3, V. Vinotha4
  1. Research Scholar, Dept. of C.S.E., Bharath University, Chennai, India.
  2. Research Scholar, Dept. of C.S.E., Bharath University, Chennai, India.
  3. Department of Computer Science & Engg., Bharath University, Chennai, Tamil Nadu, India.
  4. Department of Computer Science & Engg., Bharath University, Chennai, Tamil Nadu, India.

Abstract

Managing resources and intelligently pricing them in computing systems is a challenging task. Resource sharing demands careful load balancing and often strives to achieve a win-win situation between resource providers and users. Toward this goal, we consider a joint treatment of load balancing and pricing. We do not assume static pricing when determining load balancing, or vice versa. Instead, we study the relationship between the price that a computing node charges and the load and revenue that it receives. We find that there exists an optimal price that maximizes the revenue. We then consider a multi-user environment and explore how the load from a user is balanced onto processors with existing loads. Finally, we derive an optimal price that maximizes the revenue in the multi-user environment. We evaluate the performance of the proposed algorithms through simulations.

Keywords

Brokers, CloudSim toolkit, InterCloud

I. INTRODUCTION

In this paper, we consider a grid computing system where compute nodes are heterogeneous and their prices vary dynamically. We do not assume that they are fully cooperative and unselfish. Further, we consider a multi-user environment where the current load needs to be balanced onto nodes with existing loads. Finally, we consider the different objectives of users and providers, where the user wants to minimize cost and response time while the provider wants to maximize revenue. This paper first studies how load balancing and pricing influence each other when the load on nodes and their charged prices are dynamic. We find that the provider's revenue is maximized when its node is charged at a certain optimal price. A broker is implemented to find the best service provider for a given user by considering the user's revenue. The optimal price can be determined given the output of the underlying load balancing approach. Then the load is re-balanced with respect to these new optimal prices for the current job. Therefore, pricing and load balancing are "mutually aware" [2].
Individual members of the community contribute computing cycles, storage, services, and communication bandwidth to the pool of resources available to the entire community; resources as well as consumers of resources may belong to different administrative domains. In this case it is difficult to devise global resource allocation policies, and there is no central authority to enforce global policies and schedules [1]. A broker mediates between producers and consumers at different administrative levels. Market-oriented economies have proven their benefits over other means of managing resource allocation in social systems. It therefore seems reasonable to adapt some of the successful ideas of economic models to resource allocation in large-scale computing systems and to study market-oriented resource allocation algorithms.

II. LITERATURE SURVEY

In [5], the authors discuss an economic model for resource sharing in large-scale distributed systems. They show that, given a specific set of model parameters, user satisfaction reaches an optimum; this value represents the right balance between the utility obtained and the price paid for resources. Their results confirm that brokers play a very important role and can positively influence the market [5].
As noted above, resources as well as consumers of resources may belong to different administrative domains, which makes global resource allocation policies difficult to devise and there is no central authority to enforce global policies and schedules. A broker therefore mediates between producers and consumers at different administrative levels, and market-oriented economic models are adapted to resource allocation in large-scale computing systems. The algorithm used is the BROKERING ALGORITHM, performed by the broker: the consumer request is elastic, the target utility t and its satisfying size are given, and the cardinality specifies the number of resource providers to be returned by the broker [7].
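
The pseudo-code of the brokering algorithm from [7] is not reproduced here, so the following is only a minimal Java sketch of the idea described above: providers are ranked by an assumed utility function and the broker returns the cardinality-many best ones whose utility meets the target t. The Provider record, the utility formula and all the numbers are hypothetical.

import java.util.*;

// Illustrative broker: ranks providers by an assumed utility for the request
// and returns the top "cardinality" providers whose utility meets the target.
public class BrokeringSketch {

    // Hypothetical provider description: identifier, offered capacity, unit price.
    record Provider(String id, double capacity, double pricePerUnit) {
        double utility(double demand) {
            // Assumed utility: satisfied demand minus the cost of serving it.
            double served = Math.min(capacity, demand);
            return served - pricePerUnit * served;
        }
    }

    // Brokering step: keep providers meeting the target utility t, return the best k.
    static List<Provider> broker(List<Provider> providers, double demand,
                                 double targetUtility, int cardinality) {
        return providers.stream()
                .filter(p -> p.utility(demand) >= targetUtility)
                .sorted(Comparator.comparingDouble(
                        (Provider p) -> p.utility(demand)).reversed())
                .limit(cardinality)
                .toList();
    }

    public static void main(String[] args) {
        List<Provider> pool = List.of(
                new Provider("P1", 100, 0.20),
                new Provider("P2", 80, 0.10),
                new Provider("P3", 120, 0.35));
        // Elastic request: demand of 90 units, target utility 50, two providers wanted.
        System.out.println(broker(pool, 90, 50, 2));
    }
}
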
Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available as subscription-based, pay-as-you-go services to consumers. In industry these services are respectively referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Developers with innovative ideas for new web services no longer need large capital outlays in hardware to deploy their service, or human expense to operate it. Existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services and achieve reasonable QoS levels [4]. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in the load. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. The approach is validated by a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has large potential, as it offers significant performance gains in response time and cost saving under dynamic workload scenarios. The techniques used here are: Cloud Coordinator (CC): the Cloud Coordinator service is responsible for the management of domain-specific enterprise Clouds and their membership in the overall federation, driven by market-based trading and negotiation protocols. It provides the environment for applications in a federation of Clouds [9].
Cloud Broker (CB): the Cloud Broker, acting on behalf of users, identifies suitable Cloud service providers through the Cloud Exchange and negotiates with Cloud Coordinators for an allocation of resources that meets the QoS needs of users.
While a hybrid execution environment may be used to meet time constraints, users must now attend to the costs associated with data storage, data transfer, and node allocation time on the cloud. The paper describes a model-driven resource allocation framework to support both time- and cost-sensitive execution for data-intensive applications executed in a hybrid cloud environment. Purpose-built clusters permeate many of today's organizations, providing both large-scale data storage and computing. The cloud's key features include the pay-as-you-go model and elasticity: users can instantly scale resources up or down according to the demand or the desired response time [3]. This ability to increase resource consumption comes without the cost of over-provisioning, i.e., having to acquire and maintain a larger set of resources than what is needed most of the time, which is often the case for traditional in-house clusters. In general, cloud elasticity is exploited in conjunction with local compute resources to create a hybrid cloud that helps meet time and/or cost constraints. The paper explores resource allocation in this hybrid cloud environment and describes a model-driven resource allocation framework to enable time- and cost-sensitive execution for data-intensive applications; moreover, it considers the analysis of data that is split between a local cluster and cloud storage. Two algorithms are used in this process. HEAD NODE ALGORITHM: this defines how the head node handles resource allocation requests. First, a cluster's master node requests jobs from the head node. The head node accepts the request and prepares a group of jobs while considering data locality. After the jobs are prepared, the head node determines the new number of cloud instances according to the performance of the requesting cluster so far. Computing the number of instances: the model is executed with the cluster parameters and structures containing the cloud pricing contract, which is then compared with the user's cost constraints. The pricing contract structure represents the agreement between the user and the cloud service provider; it provides the specification of the resources and the cost of running an instance and transferring data, i.e., the constants in the model [10].
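
The actual instance-count formula from [10] is not reproduced in this survey, so the sketch below only illustrates the step described above: the head node scans candidate instance counts and keeps the largest one whose estimated cost, derived from an assumed pricing contract (per-instance-hour rate, per-GB transfer cost, and a per-instance start-up overhead), still fits the user's cost constraint. All parameter names and values are assumptions made for illustration.

// Minimal sketch of the "computing the number of instances" step.
public class InstanceEstimator {

    // Hypothetical pricing contract between the user and the cloud provider.
    static class PricingContract {
        final double costPerInstanceHour;   // e.g. dollars per instance-hour
        final double costPerGbTransferred;  // e.g. dollars per GB moved to the cloud
        PricingContract(double cih, double cgb) {
            costPerInstanceHour = cih;
            costPerGbTransferred = cgb;
        }
    }

    // Largest number of instances whose estimated cost for the remaining work
    // still fits the budget (more instances finish sooner but cost more because
    // of the per-instance start-up overhead).
    static int instancesWithinBudget(double remainingWorkHours, double startupOverheadHours,
                                     double gbToTransfer, double budget,
                                     PricingContract c, int maxInstances) {
        int best = 0;
        for (int n = 1; n <= maxInstances; n++) {
            double hoursPerInstance = remainingWorkHours / n + startupOverheadHours;
            double cost = n * hoursPerInstance * c.costPerInstanceHour
                        + gbToTransfer * c.costPerGbTransferred;
            if (cost <= budget) best = n;   // cost grows with n, so this keeps the largest feasible n
        }
        return best;
    }

    public static void main(String[] args) {
        PricingContract contract = new PricingContract(0.10, 0.09);
        // 40 hours of remaining work, 100 GB of input data, budget of 13.5 dollars.
        System.out.println(instancesWithinBudget(40.0, 0.2, 100.0, 13.5, contract, 32));
    }
}
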
Over the last years, interoperability among resources has emerged as one of the most challenging research topics. However, the commonality of the complexity of the architectures (e.g., heterogeneity) and of the targets that each processing paradigm, including HPC, grids, and clouds, aims to achieve (e.g., flexibility) remains constant. The goal is to efficiently orchestrate resources in a distributed computing fashion by bridging the gap among local and remote participants. Initially, this is closely connected with the scheduling concept, which is one of the most important issues when designing a cooperative resource management system, especially in large-scale settings such as grids and clouds. The cloud has great capacity to offer scalable and elastic services, which is one of the most closely scrutinized research areas in wide-scale computing systems. In such environments, the vast computing resources, which reside at a remote location, offer on-demand, variously priced, flexible services, including hardware, software, and developers' platforms. The elastic, on-demand availability can be leveraged towards an improved quality of service. In this work, the authors define an InterCloud of inter-collaborative and inter-cooperative enterprises as a temporal, auto-scaling resource formation in which services and resources are exchanged among various clouds, but also among other infrastructures, to augment service quality and provide satisfaction for a wide range of diverse client needs. Such e-infrastructures should include, but are not limited to, clusters, grids, and high-performance and high-throughput computing [6]. In general, the InterCloud approach expands cloud capabilities in terms of services with the aim of achieving a wider distribution of resources, while retaining global resource utilization equilibrium among the various resource pools. The various algorithms implemented in the simulator allow the extraction of the initial data set that will be the basis of experimentation.
1. Integrated Cloud Interface: this pseudo-code describes the functionality of the Cloud Interface. It is responsible for collecting information from the broker (who acts on behalf of a user) and redirects a message with a common configuration setting to each of the three simulators. Finally, the results come back in an asynchronous sequence to the interface, which compares the best execution times (for further evaluation) and sends the results (first returned, first sent) back to the user through the broker.
2. The HPC/HTC scheduler (Alea): this pseudo-code illustrates the functionality of the system when a job submission arrives at the Alea scheduler. It includes the initialization of the Alea simulator; the configuration data is obtained from the Cloud Interface and the simulation is started. Finally, the results are returned to the Cloud Interface for interpretation.
3. The Cloud scheduler (CloudSim): the cloud simulation is implemented within the CloudSim simulator, which contains two local scheduling algorithms (FCFS in space-shared and time-shared fashion). Additionally, CloudSim implements two simulation scheduling cases for performing experiments in scheduling VMs to hosts and cloudlets to VMs. The corresponding pseudo-code illustrates the functionality of the simulator [8].
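
As a concrete reference point for the CloudSim-based scheduler mentioned above, the following is a minimal, self-contained simulation written in the style of the examples bundled with CloudSim 3.x: one datacenter with a single host, one broker, one VM and one cloudlet bound to that VM. It is only a sketch; exact class and constructor signatures may differ across CloudSim versions, and all capacities and prices are placeholder values.

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimSketch {

    public static void main(String[] args) throws Exception {
        // Initialize the simulation with one cloud user and no event tracing.
        CloudSim.init(1, Calendar.getInstance(), false);

        Datacenter datacenter = createDatacenter("Datacenter_0");
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // One VM: 1000 MIPS, 1 core, 512 MB RAM, 1000 kbit/s bandwidth, 10 GB image.
        Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        List<Vm> vmList = new ArrayList<>();
        vmList.add(vm);
        broker.submitVmList(vmList);

        // One cloudlet (job) of 400000 MI, bound to the VM above.
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        cloudlet.setVmId(vm.getId());
        List<Cloudlet> cloudletList = new ArrayList<>();
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        // Report finish time of the completed cloudlet.
        for (Cloudlet cl : broker.getCloudletReceivedList()) {
            System.out.printf("Cloudlet %d finished at %.2f on VM %d%n",
                    cl.getCloudletId(), cl.getFinishTime(), cl.getVmId());
        }
    }

    // A datacenter with a single one-core host; prices are placeholder values.
    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));

        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

        return new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);
    }
}
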

III. EXISTING SYSTEM

Managing resources and pricing them is a challenging task, and in the existing work there is no win-win situation between resource providers and users. In the existing approach, users are not matched with the proper service providers. The cost to the user was taken into account, and a load balancing technique was used to perform the work effectively. We do note that users can choose alternative algorithms if they consider just one objective, or if their jobs follow a heavy-tail distribution and the arrival rate is moderate or heavy. There is no load balancing property; to determine the optimal price, only aggregated information on the processing speeds and prices of computing nodes is required [11]. Pricing algorithms were developed for scenarios where the load arrives at the same time or at different times, and the resulting cost equals that of the proposed optimal pricing approach without the price limit, because the limit is too loose and does not affect the price. This also confirms that the high cost at high load for the heavy-tail distribution is due to the high optimal price charged (for the large fraction of the arrival rate). No quality of service was provided to the user, and the work was carried out on virtual environments only [23].

IV. PROPOSED SYSTEM

We utilize a grid server for effective usage of resources rather than a single server. The grid server works in a parallel processing mechanism, which ensures effective utilization of the resources. The grid server also stores the user information in its database. The broker plays a vital role between users and resource providers [25]. The user gives the data that is to be processed to the broker. Before accepting data from the user, the broker checks the user's authentication: once signed in, the user must provide a login id and password to prove identity. After authentication, the broker in turn connects with the service providers that will provide the required service to the user [13]. The broker first takes into account the cost eligibility of the user and then selects service providers according to their efficiency. After obtaining the user's cost requirements, the broker selects the required service providers and splits the job among them. Here each job is split between two service providers so as to measure the maximum efficiency of each service provider. For example, suppose there are three jobs, say job1, job2 and job3 [28]. The first service provider is given job1 and job2, the second service provider is given job2 and job3, and the third service provider is given job3 and job1. After all the assigned work is executed, the best service provider is identified and assigned to the user for their fullest satisfaction [12]. We identify the best resource; all the jobs are executed simultaneously across the different service providers, and we use a round-robin technique to dispatch the jobs to the service providers, calculating cost effectiveness and quality of service. A sketch of this split-and-execute step is given below.
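
The following is a minimal Java sketch of the split described above: three jobs are assigned to three providers in overlapping pairs (provider 1: job1+job2, provider 2: job2+job3, provider 3: job3+job1), each provider runs its pair in parallel, and the provider with the lowest measured completion time is reported as the best. Provider speeds and job sizes are made-up values used only for illustration.

import java.util.*;
import java.util.concurrent.*;

public class WorkSplitSketch {

    record Provider(String name, double unitsPerSecond) {}

    // Simulated execution: time a provider needs for its pair of jobs.
    static double run(Provider p, double... jobSizes) {
        double total = Arrays.stream(jobSizes).sum();
        return total / p.unitsPerSecond;   // seconds of (simulated) work
    }

    public static void main(String[] args) throws Exception {
        double job1 = 120, job2 = 80, job3 = 200;           // work units (assumed)
        List<Provider> providers = List.of(
                new Provider("P1", 50), new Provider("P2", 40), new Provider("P3", 65));

        // Overlapping pairs, one per provider.
        double[][] pairs = { {job1, job2}, {job2, job3}, {job3, job1} };

        ExecutorService pool = Executors.newFixedThreadPool(providers.size());
        Map<String, Future<Double>> results = new LinkedHashMap<>();
        for (int i = 0; i < providers.size(); i++) {
            Provider p = providers.get(i);
            double[] pair = pairs[i];
            results.put(p.name(), pool.submit(() -> run(p, pair)));
        }
        pool.shutdown();

        String best = null;
        double bestTime = Double.MAX_VALUE;
        for (var e : results.entrySet()) {
            double t = e.getValue().get();
            System.out.printf("%s finished its pair in %.1f s%n", e.getKey(), t);
            if (t < bestTime) { bestTime = t; best = e.getKey(); }
        }
        System.out.println("Best provider for this user: " + best);
    }
}
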

V. CLASSIFICATION OF THE TOTAL FRAMEWORK

The modules deployed in this paper are:

1. Grid Servers Deployment
2. Broker Deployment
3. User Request
4. Cost Negotiation by Broker
5. Work Split & Parallel Processing
6. Identification of Best Resource
A. Grid Servers Deployment
We utilize a grid server for effective usage of resources rather than a single server. The grid server works in a parallel processing mechanism, which ensures effective utilization of the resources. The grid server also stores the user information in its database and verifies the user every time a user logs into the corresponding account. The grid server also maintains the database used to store the data; this data will be requested by the users of the grid network [25].
B. Broker Deployment
Service Broker is designed around the basic functions of sending and receiving messages. Each message forms part of a conversation. Each conversation is a reliable, persistent communication channel. Each message and conversation has a specific type that Service Broker enforces to help developers write reliable applications [14].
C. User Request
In this module we create a user application through which the user is allowed to access data from the server. First the user must create an account; only then are they allowed to access the network [29]. Once the user creates an account, they can log into it to access the application. Based on the user's request, the server responds to the user. All the user details are stored in the database of the server. In this paper, we design the user interface frame to communicate with the server over the network using programming languages such as Java/.NET. A small sketch of the registration and login check follows.
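
This sketch only illustrates the registration-then-login flow described above; the in-memory map stands in for the grid server's user database, and a real deployment would use a persistent store and salted password hashing. All names and values are hypothetical.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class UserRequestSketch {

    private final Map<String, String> users = new HashMap<>(); // loginId -> password hash

    private static String hash(String password) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public boolean register(String loginId, String password) throws Exception {
        return users.putIfAbsent(loginId, hash(password)) == null; // false if id already taken
    }

    public boolean authenticate(String loginId, String password) throws Exception {
        return hash(password).equals(users.get(loginId));
    }

    // A user request is served only for authenticated users.
    public String handleRequest(String loginId, String password, String request) throws Exception {
        if (!authenticate(loginId, password)) return "ACCESS DENIED";
        return "Processing '" + request + "' for " + loginId;
    }

    public static void main(String[] args) throws Exception {
        UserRequestSketch server = new UserRequestSketch();
        server.register("alice", "secret");
        System.out.println(server.handleRequest("alice", "secret", "process dataset D1"));
        System.out.println(server.handleRequest("alice", "wrong", "process dataset D1"));
    }
}
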
D. Cost Negotiation by Broker
In this module every request is handled by the broker, which acts as an intermediary between the service provider and the user. The main task of the broker is to allot the work: before allotting the work, the broker checks the cost, and based on cost effectiveness the work is allotted on behalf of the user [15]. A small cost-negotiation sketch follows.
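
A minimal sketch of the cost-negotiation step, under the assumption that the broker collects per-job quotes from the providers, drops those exceeding the user's budget, and allots the work to the cheapest remaining provider. The quotes and budget are made-up values.

import java.util.*;

public class CostNegotiationSketch {

    record Quote(String provider, double costForJob) {}

    static Optional<Quote> negotiate(List<Quote> quotes, double userBudget) {
        return quotes.stream()
                .filter(q -> q.costForJob() <= userBudget)     // cost eligibility check
                .min(Comparator.comparingDouble(Quote::costForJob));
    }

    public static void main(String[] args) {
        List<Quote> quotes = List.of(
                new Quote("SP1", 12.0), new Quote("SP2", 9.5), new Quote("SP3", 15.0));
        negotiate(quotes, 10.0)
                .ifPresentOrElse(q -> System.out.println("Allotted to " + q.provider()),
                                 () -> System.out.println("No provider within budget"));
    }
}
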
E. Work Split & Parallel Processing
In this module we explain how the work is partitioned and how the parts are executed in parallel when the user requests it. We use three service providers and assign job1 and job2 to the first service provider, job2 and job3 to the second, and job3 and job1 to the third. By splitting the jobs we execute the user's request easily, and the work load on each server is reduced [21] (a sketch of this split is given at the end of Section IV).
F. Identification of Best Resource
In this module we identify the best resource: all the jobs are executed simultaneously across the different service providers, and we use a round-robin technique to dispatch each job to a service provider. By calculating cost effectiveness and quality of service, the best resource is identified and allotted to the user [22]. A round-robin dispatch and scoring sketch follows.
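
The sketch below illustrates this module under stated assumptions: jobs are dispatched round-robin to the providers, each provider accumulates a cost, and the provider with the best combined cost/QoS score is chosen. The scoring weight and all numbers are illustrative assumptions, not values from the paper.

import java.util.*;

public class BestResourceSketch {

    static class Provider {
        final String name;
        final double costPerJob;   // lower is better
        final double qos;          // 0..1, higher is better
        double totalCost = 0;
        Provider(String name, double costPerJob, double qos) {
            this.name = name; this.costPerJob = costPerJob; this.qos = qos;
        }
        double score() { return qos - 0.05 * totalCost; }  // assumed combined score
    }

    public static void main(String[] args) {
        List<Provider> providers = List.of(
                new Provider("SP1", 2.0, 0.90),
                new Provider("SP2", 1.5, 0.80),
                new Provider("SP3", 2.5, 0.95));
        List<String> jobs = List.of("job1", "job2", "job3", "job4", "job5");

        // Round-robin dispatch of jobs to providers.
        for (int i = 0; i < jobs.size(); i++) {
            Provider p = providers.get(i % providers.size());
            p.totalCost += p.costPerJob;
            System.out.println(jobs.get(i) + " -> " + p.name);
        }

        // Pick the provider with the highest cost/QoS score for this user.
        Provider best = providers.stream()
                .max(Comparator.comparingDouble(Provider::score)).orElseThrow();
        System.out.println("Best resource: " + best.name);
    }
}
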

VI. RESULT ANALYSIS

In this section, we briefly discuss how our optimal pricing theory can be employed in a distributed manner where every owner is autonomous. An owner determines its optimal price and revenue using the aggregated information (S_i,1 and S_i,2) provided by the broker. Owners are synchronized, and in each iteration every owner calculates its optimal price and adjusts to it. Note that each owner assumes that the prices of all the other nodes are kept fixed [32]. Therefore, owners can calculate their optimal prices independently and simultaneously. An owner may receive an increased or decreased (or zero) fraction of the arrival rate and revenue. The algorithm starts with the initially selected nodes and iterates over the nodes in the set [33]. In each iteration, every owner sends its new price to the broker, which returns the (new) arrival-rate fraction and the updated aggregated information needed to calculate the optimal price for the next iteration. Note that in this scenario the broker is not involved in pricing decisions [34].
 
This situation can be viewed as a non-cooperative game among decision makers (owners). The Nash equilibrium of the game is a strategy profile with the property that no owner can increase its expected revenue by changing its price given the other owners' prices. In other words, a strategy profile is a Nash equilibrium if no owner can profit by deviating unilaterally from its price to another feasible one. An important question is whether this algorithm converges to the Nash equilibrium. In this algorithm, each owner iteratively adjusts its price to the new optimal price until no owner can receive additional revenue by unilaterally changing its price (i.e., the Nash equilibrium is reached) [31]. That is, the expected revenues for the set of nodes used for load balancing all remain the same as in the previous iteration. The only known results regarding convergence to the Nash equilibrium are for distributed load balancing algorithms with linear and strictly increasing link costs; the convergence proof for more than two players with general cost functions remains an open problem [30]. Prior work has demonstrated, using simulation experiments, that distributed load balancing algorithms converge to the Nash equilibrium in distributed systems and computational grids. This distributed autonomous pricing algorithm can be used for cloud environments where multiple providers may exist. Therefore, there could be multiple brokers, as in the scenario described in InterCloud [20]. In such a decentralized design, these brokers may interact with one another and use the distributed pricing algorithm to autonomously determine prices through iterations. We note that for it to be used with virtualized environments, which are common in cloud computing, the configurations of virtual machines (VMs) need to be considered in terms of the allocation of their capacity for user tasks. Specifically, node capacity must be priced based on the additional factor of VM configurations. A future evaluation of the proposed optimal pricing work in a realistic cloud setting, such as InterCloud or a hybrid cloud, would be helpful to analyse the application potential of this work in such environments [36]. The best-response iteration is sketched below.
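
The paper's pricing formula based on S_i,1 and S_i,2 is not reproduced here, so the sketch below substitutes a simple assumed demand model (node i attracts load in proportion to speed_i * exp(-alpha * price_i)) purely to illustrate the iteration structure: in each round every owner best-responds to the other owners' current prices, and the loop stops when no owner changes its price, i.e., when no owner can gain by deviating unilaterally. The arrival rate, speeds and price sensitivity are illustrative assumptions.

import java.util.Arrays;

public class DistributedPricingSketch {

    static final double ARRIVAL_RATE = 100.0;          // total job arrival rate (assumed)
    static final double PRICE_SENSITIVITY = 0.5;       // alpha in the assumed demand model
    static final double[] SPEED = {1.0, 2.0, 4.0};     // node processing speeds (assumed)

    // Assumed demand model: node i attracts a load share proportional to
    // speed_i * exp(-alpha * price_i); revenue_i = price_i * (its load).
    static double revenue(int i, double[] price) {
        double total = 0;
        for (int j = 0; j < price.length; j++)
            total += SPEED[j] * Math.exp(-PRICE_SENSITIVITY * price[j]);
        double share = SPEED[i] * Math.exp(-PRICE_SENSITIVITY * price[i]) / total;
        return price[i] * ARRIVAL_RATE * share;
    }

    // Owner i's best response: grid search over candidate prices, other prices fixed.
    static double bestResponse(int i, double[] price) {
        double bestPrice = price[i], bestRevenue = revenue(i, price);
        double[] trial = price.clone();
        for (double candidate = 0.05; candidate <= 10.0; candidate += 0.05) {
            trial[i] = candidate;
            double r = revenue(i, trial);
            if (r > bestRevenue) { bestRevenue = r; bestPrice = candidate; }
        }
        return bestPrice;
    }

    public static void main(String[] args) {
        double[] price = {1.0, 1.0, 1.0};
        for (int iteration = 1; iteration <= 200; iteration++) {
            double[] next = new double[price.length];
            for (int i = 0; i < price.length; i++) next[i] = bestResponse(i, price);
            boolean unchanged = Arrays.equals(next, price);  // no owner wants to deviate
            price = next;
            if (unchanged) {
                System.out.println("Stable prices reached after " + iteration + " iterations");
                break;
            }
        }
        System.out.println("Final prices: " + Arrays.toString(price));
    }
}
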

VII. CONCLUSION AND FUTURE WORK

In this paper, we addressed an important problem that integrates load balancing with pricing to provide a win-win situation between resource owners and users. We found that there exists an optimal price that maximizes the revenue for each owner [24]. To determine it, only aggregated information on the processing speeds and prices of computing nodes is required. We developed pricing algorithms for scenarios where the load arrives at the same time or at different time instances, possibly from multiple users. We developed algorithms for a global approach, with the objective of optimizing system-wide performance, and a greedy approach, with the objective of optimizing the performance for the current load (from a user). Through simulation studies, we demonstrated that the proposed algorithms can achieve a response time close to MinRT and a cost acceptable to both users and providers. Therefore, they perform the best considering the two objectives of time and cost. We do note that users can choose alternative algorithms if they consider just one objective, or if their jobs follow a heavy-tail distribution and the arrival rate is moderate or heavy. We also note that a user can make the choice based on how much he would like to pay for better performance [35]. Our optimal price theory can help an owner decide its optimal price and revenue, which helps it decide whether to process the current job and how to price its resource. In the future, we plan to test the proposed algorithms on actual platforms with realistic applications and workloads. Such an investigation, together with virtual machines, would help understand the potential of this work for cloud environments.

VIII. ACKNOWLEDGMENT

The author would like to thank the Vice Chancellor, Dean-Engineering, Director, Secretary, Correspondent, HOD of Computer Science & Engineering, Dr. K.P. Kaliyamurthie, Bharath University, Chennai for their motivation and constant encouragement. The author would like to specially thank Dr. A. Kumaravel for his guidance, critical review of this manuscript, valuable input and fruitful discussions in completing the work, and the Faculty Members of the Department of Computer Science & Engineering. Also, he takes privilege in extending gratitude to his parents and family members who rendered their support throughout this research work.
 


References