ISSN Online (2320-9801), Print (2320-9798)


Analysis and Optimization Techniques for Sustainable Use of Electrical Energy in Green Cloud Computing

Dr. Vikash K. Singh1 and Devendra Singh Kushwaha2
  1. Assistant Professor, Department of CSE, I.G.N.T.U, Amarkantak, India
  2. Assistant Professor, Department of CSE, I.G.N.T.U, Amarkantak, India

Published in the International Journal of Innovative Research in Computer and Communication Engineering.

Abstract

Energy efficiency in all aspects of human life has become a major concern, owing to its significant environmental impact as well as its economic importance. Information and Communication Technology (ICT) plays a dual role here: not only is it a major consumer itself (an estimated 2-10% of global consumption), but it is also expected to enable global energy efficiency through new technologies that depend tightly on networks (smart grids, smart homes, cloud computing, etc.). To this end, this research work studies the problem of sustainable use of electric energy in cloud computing. As the subject has recently become very active in the research community, there is parallel work in several research directions. Here, the problem is examined from its foundations and a solid analytical approach is presented. Cloud computing promises a new era of service delivery and deployment in which anyone can access services such as storage, applications, and operating systems from anywhere, at any time, using any device with an Internet connection. It also opens new possibilities for sustainable solutions deployed and advanced on that platform. The sustainability of cloud computing is addressed here in terms of both environmental and economic effects. In recent years, there have been two major trends in the ICT industry: green computing and cloud computing. Green computing recognizes that the ICT industry has become a significant energy consumer and, consequently, a major source of CO2 emissions. Cloud computing makes it possible to purchase IT resources as a service without upfront costs. In this research, the combination of these two trends, green cloud computing, is first evaluated on the basis of existing research findings, which indicate that private clouds are the greenest option for offering services.
Case studies of state-of-the-art companies offering green hosting services are presented, and the incentives affecting their energy-efficiency development are analyzed. The results reveal that there is currently little market demand for green hosting services, because the only incentive for companies is low cost. At present, the massive energy consumption of cloud data centres is an escalating threat to the environment. In this work we explore energy-efficient approaches inside data centres from the site and IT infrastructure perspectives, incorporating cloud networking from the standpoint of access network technologies and network equipment, to give a comprehensive view of how to achieve energy efficiency in cloud computing. Traditional and cloud data centres are compared to determine which is the more advisable deployment. Virtualization, the heart of energy-efficient cloud computing, which integrates techniques such as consolidation and resource utilization, is introduced to prepare the background for the implementation part. Finally, approaches for cloud data centres at the operating-system and especially the data-centre level are presented, and the Green Cloud architecture, as the most suitable green approach, is described in detail. In the experimental segment we modelled and simulated the system and studied its behaviour in terms of cost, performance, and energy consumption to reach the most appropriate solution.

Keywords

Optimization, Cloud.

INTRODUCTION

...all people, regardless of race, gender, income, or geography, have a moral right to a healthy, sustainable environment.
First, the novel research area of energy efficiency in ICT networks is reviewed, and research directions are identified and categorized. As a first step, we formalize a theoretical framework that reflects all the important parameters and enables the design of optimization algorithms. The developed model allows us to represent QoS, power consumption, and the effect of control, and expresses the traffic equations in closed form. Moreover, the use of composite optimization goals is proposed, comprising both power-consumption and QoS metrics. Using the network model, a gradient-descent optimization algorithm is built, running in O(N³) time, to optimize the composite cost function. Based on the power-consumption characteristics of current and predicted future networking equipment, several case studies with different optimization goals are presented. The optimization results are evaluated, and faster gradient-descent-based heuristic algorithms are proposed.
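As a minimal sketch of the composite-objective idea above, the toy model below splits one traffic demand across two links and minimizes a weighted sum of power and delay by numeric gradient descent. The power and delay functions, capacities, and weights are illustrative assumptions, not the paper's actual network model.

```python
# Toy composite-objective gradient descent: minimize alpha*power + beta*delay
# over the routing split x between two identical links. All parameters are
# illustrative assumptions.

def power(load):
    """Idle power plus a load-proportional part (generic device model)."""
    return 20.0 + 80.0 * load

def delay(load, capacity=1.0):
    """M/M/1-style delay that grows as the link approaches saturation."""
    return 1.0 / (capacity - load) if load < capacity else float("inf")

def composite_cost(x, demand=0.8, alpha=1.0, beta=10.0):
    """x = fraction of demand routed on link 1; the rest goes on link 2."""
    l1, l2 = demand * x, demand * (1.0 - x)
    return alpha * (power(l1) + power(l2)) + beta * (delay(l1) + delay(l2))

def gradient_descent(x=0.5, step=1e-3, iters=5000, eps=1e-6):
    """Numeric gradient descent on the composite cost, keeping x feasible."""
    for _ in range(iters):
        grad = (composite_cost(x + eps) - composite_cost(x - eps)) / (2 * eps)
        x = min(max(x - step * grad, 0.01), 0.99)
    return x

# For two identical links the delay term pulls the split toward 50/50.
best = gradient_descent(x=0.3)
```

With asymmetric link parameters the same loop would converge to an uneven split, trading delay against power according to the alpha/beta weights.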
Two factors have driven the high energy consumption of cloud data centres. One is the rapid increase in the number of computers and cloud users, which results in a significant amount of energy being consumed by data centres because of their massive size; the other is unreasonable resource allocation in cloud computing. The allocation of resources (such as CPU, disk, memory, and bandwidth) is therefore a key problem, as unreasonable allocation causes additional energy consumption. Because an energy-efficient resource-allocation algorithm can greatly reduce this consumption, the problem has been widely studied in the field of cloud computing. Energy-efficient resource-allocation studies pursue three broad goals: (1) minimize energy consumption subject to a reasonable reduction in quality of service; (2) given a total energy budget, maximize performance; or (3) meet performance and energy objectives simultaneously. To be of practical importance, resource allocation in a cloud data centre must not only reduce energy consumption but also satisfy Quality of Service requirements or Service Level Agreements. Virtualization technology plays an important role in satisfying both the energy and the Quality of Service requirements: it allows several servers to be consolidated onto one physical node as virtual machines (VMs), which can greatly enhance resource utilization.
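The consolidation idea above can be illustrated with a classic first-fit-decreasing packing, where each VM is characterized only by a CPU demand and each host has unit capacity. This is a hedged sketch of the general technique, not the specific allocation algorithm studied in this work.

```python
# Illustrative first-fit-decreasing VM consolidation. VM demands and host
# capacity are assumed, normalized values.

def consolidate(vm_demands, host_capacity=1.0):
    """Pack VMs onto as few hosts as possible; return the host count."""
    hosts = []  # each entry is the remaining capacity of one powered-on host
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand  # place VM on an existing host
                break
        else:
            hosts.append(host_capacity - demand)  # power on a new host

    return len(hosts)

# Six VMs that would otherwise occupy six dedicated servers fit on two
# unit-capacity hosts, letting the remaining machines be switched off.
hosts_needed = consolidate([0.5, 0.4, 0.3, 0.3, 0.2, 0.1])
```

A production allocator would also enforce memory, disk, and bandwidth constraints and the SLA headroom discussed above, but the energy argument is the same: fewer powered-on hosts means less idle power wasted.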
Sustainability means long-lasting welfare in terms of both economy and environment. Nowadays, businesses and companies in the field of information and communication technology care much more about reaching a sustainable strategy for their operations. The foremost reasons are to reduce their carbon footprint and environmental impact while lowering operational costs, and cloud computing is offered as a sustainable tool for these goals. Cloud computing is an emerging technology that is becoming widespread because it enables on-demand access to computing resources such as applications, storage, services, video games, movies, and music, in such a way that cloud clients need not know how or from where they receive this content. The only thing they need is broadband connectivity to the cloud.

BACKGROUND

Our aim in this research is to examine under which circumstances energy-efficient routing yields significant savings, and what its effect is on network delay. We also seek to explore the limitations of gradient descent and to propose heuristics and simplified solutions. The evaluation of any energy-saving solution depends largely on the energy-consumption characteristics of the network infrastructure. Obtaining such characteristics for real network nodes is challenging and has yet to be fully achieved. Moreover, as there is ongoing research into energy-saving solutions at the hardware level, the overall picture of network power-consumption behaviour is expected to change. A generalized power-consumption model is therefore presented, which describes a device's power consumption as a function of its load and can be adapted to the specific characteristics of each scenario. The power savings often come at the cost of network-performance degradation in the form of increased delays. Thus, combined optimization goals consisting of both power-consumption and delay metrics are proposed and examined, in order to moderate the increase in delay when required. Finally, as gradient-descent optimization would be too slow for online calculation, a faster heuristic usable online is proposed. Instead of searching for the optimal state, which would be time-consuming, this heuristic performs a routing change that leads to a more energy-efficient state.
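A generalized load-dependent power model of the kind described above can be sketched as an idle term plus a load-dependent term, with an exponent controlling how close the device is to energy proportionality. The wattage figures and the exponent are illustrative assumptions.

```python
# Generic device power model: P(u) = P_idle + (P_max - P_idle) * u**alpha,
# where u is utilization in [0, 1]. All numeric values are assumptions.

def device_power(load, p_idle=100.0, p_max=200.0, alpha=1.0):
    """Power draw in watts as a function of utilization in [0, 1]."""
    assert 0.0 <= load <= 1.0
    return p_idle + (p_max - p_idle) * load ** alpha

# Today's hardware draws most of its power when idle; an ideal
# energy-proportional device would have p_idle = 0.
legacy = device_power(0.3)                                 # ~130 W
proportional = device_power(0.3, p_idle=0.0, p_max=100.0)  # ~30 W
```

Fitting `p_idle`, `p_max`, and `alpha` to measurements lets the same model represent current equipment, sleep-capable hardware, or predicted future devices in the case studies.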
Cloud computing is a concept involving many different issues, concerns, and technologies, and a globally accepted definition remains elusive; each IT organization or company tends to define it in its own way. In simple words, cloud computing is a collection of hardware, networks, interfaces, services, and storage that makes it feasible to deliver almost anything, such as social networks or collaboration tools (video conferencing, webinars, document management), as a service over the Internet, on demand, whenever and wherever it is needed.

LITERATURE REVIEW

Cloud Computing has the potential to have a massive impact, positive or negative, on the future carbon footprint of the IT sector. On the one hand, Cloud Computing data centres now consume about 0.5% of all the electricity generated in the world, a figure that will continue to grow as Cloud Computing becomes widespread, particularly because these systems are “always-on, always-available”. On the other hand, the large data centres that clouds require have the potential to provide the most efficient environments for computation. Computing at this concentration and scale will drive cloud providers to build efficient systems in order to reduce the total cost of ownership (TCO) as well as improve their green credentials. Even in local data centres, moving to a private cloud can tap into these benefits, and steps can be taken to apply solutions from large-scale public clouds. For example, by accepting the performance degradation that is inevitable with a virtualized system, many current servers can be migrated onto a smaller number of physical machines, enabling surplus equipment to be powered off. This is a simple example, but one that can, in principle, have a significant impact on energy consumption. The main aim of Energy-Aware Computing is to promote awareness of energy consumption in both software and hardware systems. The unique position of Cloud Computing brings this area into sharper focus and will go some way toward improving the carbon footprint of IT now and in the future. Despite this progress, enterprises have been reluctant to take up cloud services over fears about security, privacy, and administrative control; such companies would rather employ their own people to administer hardware they own, on premises, with controlled access and established security procedures.
Cloud computing provides a highly scalable and cost-effective infrastructure for running HPC, enterprise, and Web applications. However, the growing demand for cloud infrastructure has drastically increased the energy consumption of data centres, which has become a critical issue. High energy consumption not only translates into high operational cost, which reduces the profit margin of cloud providers, but also leads to high carbon emissions, which are not environmentally friendly. Hence, energy-efficient solutions are required to minimize the impact of cloud computing on the environment. Designing such solutions requires a deep analysis of clouds with respect to their power efficiency. Thus, in this research we discuss the various elements of clouds that contribute to total energy consumption and how they are addressed in the literature. We also discuss the implications of these solutions for future research directions toward green cloud computing, and explain the role of cloud users in achieving this goal.
While much work has addressed energy efficiency in wireless networks, energy efficiency in wired networks has only recently drawn attention. The problem of an energy-aware Internet was first addressed by Gupta and Singh. The authors introduced the idea of energy conservation in Internet systems and proposed several research directions: putting subcomponents such as line cards to sleep or clocking the hardware at a lower rate; changing routing so as to aggregate traffic along a few routes only, allowing devices on idle routes to sleep; and modifying the Internet topology in a way that supports route adaptation and sleeping. As described in recent surveys, several techniques have since been proposed to enable energy efficiency in networks. Although other classifications are possible, research on green ICT can be divided into the following branches:
Measurements and power consumption models: There is still little knowledge of how each networking component (hardware, software/applications, network traffic) contributes to overall energy consumption, knowledge that is vital for the design of energy-saving systems and architectures. Thus, a great deal of work has been devoted to measuring different networking equipment and building power-consumption models for it.
Energy efficient hardware: This branch of research proposes hardware improvements to increase energy efficiency. Adaptive Link Rate (ALR), changing the operating rate through Dynamic Voltage Scaling (DVS), and enabling sleep modes are being examined as steps toward the ideal case of energy proportionality.
Energy-aware routing and network management: Here, research focuses on the potential energy savings obtainable by modifying network states and routing policies, under different assumptions about network power-consumption behaviour. In other words, this area builds on the results and trends of the previous two categories. The focus is on quantifying the possible energy savings under different hardware and power models and on proposing algorithms for network energy optimization. The present work falls within this category, so the relevant proposed methods are analysed extensively.
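The hardware branch above mentions Dynamic Voltage Scaling; a toy calculation shows why it saves energy. Dynamic CMOS power scales roughly as C·V²·f, so running the same number of cycles at a lower voltage and frequency cuts energy even though the task takes longer. The capacitance, voltage, and frequency values below are assumptions for illustration only.

```python
# Why DVS saves energy: dynamic power ~ C * V^2 * f, and task energy is
# power * time with time = cycles / f, so energy ~ C * V^2 * cycles.
# All numeric values are illustrative assumptions.

def dynamic_power(voltage, frequency, capacitance=1e-9):
    """Switching power of a CMOS circuit (watts)."""
    return capacitance * voltage ** 2 * frequency

def task_energy(cycles, voltage, frequency, capacitance=1e-9):
    """Energy to execute a fixed number of cycles at a given V/f point."""
    time = cycles / frequency
    return dynamic_power(voltage, frequency, capacitance) * time

# Same 1e9-cycle task: full speed at 1.2 V / 2 GHz vs half speed at 0.9 V /
# 1 GHz. The frequency cancels, so the V^2 reduction dominates.
full_speed = task_energy(1e9, voltage=1.2, frequency=2e9)
half_speed = task_energy(1e9, voltage=0.9, frequency=1e9)
```

This is the core trade-off behind ALR and DVS: slower but lower-voltage operation wins on energy whenever deadlines permit the longer runtime.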

METHODOLOGY

A very important aspect of the problem is first to measure and model the power consumed in network and cloud components. These measurements show that the base system is the largest consumer, so it is best to minimize the number of chassis at a given point of presence (PoP) and maximize the number of line cards per chassis. A generic model of router power consumption is then built from measurements of a system under different configurations and operating conditions. This model, which reflects the power needed for the chassis, the installed line cards, and the traffic profile on the device, is applied to a set of network topologies and traffic matrices.
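The decomposition above can be sketched as a fixed chassis cost, a per-line-card cost, and a traffic-dependent term. The wattage figures are assumptions chosen only to show why minimizing chassis count matters.

```python
# Illustrative router power model: chassis base power + per-card power +
# a traffic-proportional term. All wattage figures are assumptions.

def router_power(n_cards, traffic_gbps, p_chassis=300.0, p_card=50.0,
                 watts_per_gbps=2.0):
    """Total router power (watts) for a given configuration and load."""
    return p_chassis + n_cards * p_card + watts_per_gbps * traffic_gbps

# Consolidating two half-filled chassis into one full chassis carries the
# same cards and traffic but saves one chassis base power (300 W here).
two_chassis = 2 * router_power(n_cards=4, traffic_gbps=20.0)
one_chassis = router_power(n_cards=8, traffic_gbps=40.0)
```

Because the base system dominates, the saving from consolidation equals the full chassis power regardless of load, which is exactly why the measurements recommend fewer chassis with more cards each.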
Energy efficient hardware: Several works in this category consider hardware changes at the level of the individual PC, switch, or router in order to achieve energy savings. The first approach is rate adaptation of individual links based on their utilization: these performance states reduce the energy consumed when actively processing packets by lowering the rate at which work is processed. The second approach puts network interfaces to sleep during idle periods: these sleep states reduce the energy consumed in the absence of packets. To realize this approach, small amounts of buffering are introduced, and the resulting bursts and delays are moderated. According to the results, both sleeping and rate adaptation can lead to significant energy savings, with the trade-off between them depending primarily on the power profile of the hardware and on network utilization. Another technique is interface proxying, which transfers the management of traffic to a dedicated entity: an external proxy stores packets and replies to requests, enabling power-hungry network nodes to sleep for longer periods. An IEEE working group recently established the IEEE 802.3az-2010 standard, also known as Energy Efficient Ethernet (EEE), which describes the node mechanisms for putting links to sleep, leaving room for research on the relevant policies. As Ethernet is a widely adopted networking interface, even a fraction of savings in its operation translates into large overall energy savings. In the legacy Ethernet standards for interfaces of 100 Mb/s and above, the circuitry is required to remain powered up whether or not data is being transmitted. The reasoning was that the link must be maintained with full-bandwidth signalling so that it is ready to support data transmission at all times; when there is no data, an auxiliary signal called IDLE is transmitted to keep transmitters and receivers aligned.
This active idle state results in comparable power consumption whether or not there is data on the link, and as interface complexity increases for higher data rates, power consumption also rises significantly. Moreover, most of the approaches above address specific cases of routing policies and hardware capabilities, i.e. sleep modes, adaptive rates, etc., which could become obsolete if future hardware design takes a different direction. In contrast, this work starts from a given packet network, with given hardware capabilities and power-consumption characteristics, and builds a generalized model for energy-efficient routing control. With the network model in hand, a gradient-descent optimization algorithm is used to explore the potential savings from routing control, and several specific cases of power-consumption characteristics and objectives are examined.
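The trade-off between sleeping and rate adaptation described above can be made concrete with an assumed power profile: sleeping drops to a low sleep power during idle gaps, while rate adaptation lowers active power but stretches the busy period. All wattages and the linear rate-power relation are assumptions for illustration.

```python
# Toy sleeping vs rate-adaptation comparison over one unit of time, for a
# link that is busy a fraction `busy_frac` of the time. Power figures and
# the linear rate/power scaling are illustrative assumptions.

def energy_sleeping(busy_frac, p_active=10.0, p_sleep=0.5, period=1.0):
    """Full-rate transmission while busy, deep sleep while idle."""
    return period * (busy_frac * p_active + (1 - busy_frac) * p_sleep)

def energy_rate_adapted(busy_frac, rate_scale, p_active=10.0, p_idle=6.0,
                        period=1.0):
    """Run the link at rate_scale of full rate: active power drops
    proportionally, but busy time grows, and idle time still costs p_idle."""
    busy = min(busy_frac / rate_scale, 1.0)
    return period * (busy * p_active * rate_scale + (1 - busy) * p_idle)

# At low utilization sleeping wins (long idle gaps at 0.5 W); at high
# utilization rate adaptation wins (little idle time left to sleep through).
low_sleep, low_rate = energy_sleeping(0.1), energy_rate_adapted(0.1, 0.5)
high_sleep, high_rate = energy_sleeping(0.9), energy_rate_adapted(0.9, 0.5)
```

This mirrors the conclusion in the literature: which mechanism pays off depends primarily on the hardware power profile (sleep vs idle power) and on network utilization.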
Energy Efficiency: In this section we describe the two main avenues to energy efficiency in cloud computing: the cloud data centre and cloud networking. The first is the most important, as it consumes the majority of the energy inside the cloud; success in making the cloud data centre energy efficient would mean that the cloud environment is almost entirely green. But the effects of the networking part cannot be ignored, so we also seek a green solution for it, in order to address all the factors affecting cloud energy consumption.
Data Centre Energy Efficiency: As mentioned before, data centres are the largest energy consumers inside the cloud, drawing a large share of its electrical power; decreasing the energy consumption of cloud data centres therefore leads to more sustainable and energy-efficient cloud computing. This section gives an overview of effective approaches for energy-efficient data centres, covering IT equipment, cooling systems (chillers, pumps, and fans), air conditioning, power systems, and the energy source. Energy consumption is classified into two categories, IT and site infrastructure, each of which consumes roughly the same total amount of energy. Most of the energy goes to the cooling/air systems (site infrastructure) and to powering the servers (IT infrastructure); lighting has a very minor impact compared with these factors.
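The near-equal split between site and IT infrastructure noted above corresponds to a Power Usage Effectiveness (PUE) of about 2.0, PUE being the standard ratio of total facility energy to IT energy. The kWh figures below are illustrative assumptions.

```python
# PUE = total facility energy / IT equipment energy. A PUE of 2.0 means
# every watt delivered to servers costs another watt of site overhead
# (cooling, power distribution, lighting). Figures are assumptions.

def pue(it_energy_kwh, site_energy_kwh):
    """Power Usage Effectiveness of a facility for a given period."""
    total = it_energy_kwh + site_energy_kwh
    return total / it_energy_kwh

# Equal IT and site consumption, as described above, gives PUE = 2.0;
# an efficient facility pushes overhead down and PUE toward 1.0.
typical = pue(it_energy_kwh=500.0, site_energy_kwh=500.0)
efficient = pue(it_energy_kwh=500.0, site_energy_kwh=100.0)
```

Tracking PUE over time is a simple way to see whether cooling and power-system improvements, rather than just server upgrades, are actually reducing the facility's overhead.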

CONCLUSION & DISCUSSION

In this research, we presented cloud computing as a sustainable platform on which IT businesses can deploy their applications. We then addressed the issues of sustainable electric energy for cloud computing, an important concern for an IT industry seeking to operate more sustainably in both economic and environmental terms.
Discussion and Assumptions: The field of “green computing”, which addresses the environmental costs of computing, is still young, and this research highlights several avenues for further exploration. The greenhouse-gas-reduction protocol presented here covers a broad range of potential applications, but other situations remain unaddressed. Adapting data centres to local renewable-energy availability, power price, and estimated carbon intensity, or having them provide ancillary services, is not yet accounted for in the protocol. It would also be worth investigating the footprint of approaches such as “cycle-stealing” or the use of spot instances sold at below-normal prices when excess cloud capacity exists. A better understanding of the large fraction of financial and environmental costs arising in the operational phase of the equipment's lifecycle would also be beneficial. Other researchers have attempted to reduce the maximum power draw in order to cut the capacity charges that can account for a relatively large fraction of power bills; approaches such as incremental-cost-based transmission pricing, interruptible transmission services, and possible deductions for duplication avoidance or customer-owned equipment may greatly reduce such costs for data centres located near renewable-energy sources, and these opportunities should be explored further. Assessing the impact of simultaneously providing ancillary services from data centres connected to power grids in different regions would also be useful.
• Response time falls as the service moves closer to the users, owing to reduced Internet latency and bandwidth effects. With more than one data center, users are served from the data center nearest to them, so they experience less latency and bandwidth overhead and therefore shorter response times. In other words, decentralizing data centers increases performance.
• Data-transfer cost per data center also falls when data centers are distributed and placed closer to the user bases, because the user population, and hence the transfer load, is divided across the data centers.
• The more virtualization (VMs) we use, the better the response and processing times; virtualization thus plays a significant role in cloud computing. Even if we add data centers, decreasing the number of VMs causes processing and response times to increase.
• Processing time can be improved by applying load balancing at the data-center level (peak-load sharing) and at the virtualization level (throttling).
• Response time can be improved by applying load balancing at the virtualization level using throttling.
• The more VMs used (the higher the consolidation ratio), the more energy cost and consumption can be saved.
• Applying virtualization to the IT infrastructure alone cannot increase the energy efficiency of a data center considerably; it must be applied to the site infrastructure as well to achieve the ideal, acceptable savings in energy cost and consumption.
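The throttled load balancing mentioned in the findings above can be sketched minimally: each VM handles at most one request at a time, and new requests go to the first idle VM or wait until one frees up. The class name and the one-request-per-VM limit are assumptions for illustration, not the simulator actually used in the experiments.

```python
# Minimal throttled load balancer: each VM serves at most one request at a
# time; requests beyond capacity are deferred. Purely illustrative.

class ThrottledBalancer:
    def __init__(self, n_vms):
        self.busy = [False] * n_vms  # one slot per VM

    def allocate(self):
        """Return the index of an idle VM, or None if all are throttled."""
        for i, b in enumerate(self.busy):
            if not b:
                self.busy[i] = True
                return i
        return None  # caller queues the request until a VM is released

    def release(self, i):
        """Mark VM i idle again after its request completes."""
        self.busy[i] = False

lb = ThrottledBalancer(n_vms=2)
a = lb.allocate()   # first request -> VM 0
b = lb.allocate()   # second request -> VM 1
c = lb.allocate()   # third request deferred: both VMs busy
lb.release(a)
d = lb.allocate()   # deferred request now served by VM 0
```

By capping concurrent work per VM, throttling prevents any single VM from being overloaded, which is why the experiments see it improve both response and processing time at the virtualization level.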

EXPECTED OUTCOMES AND FUTURE WORK

Our broad expected outcome is that the energy consumption of cloud computing should be treated as an integrated supply-chain logistics problem in which processing, storage, and transport are considered together. Using this approach, we will show that cloud computing can enable more energy-efficient use of computing power, especially when the users' predominant computing tasks are of low intensity or infrequent. Under some circumstances, however, cloud computing can consume more energy than conventional computing in which each user performs all computing on a personal PC. Even with energy-saving techniques such as server virtualization and advanced cooling systems, cloud computing is not always the greenest computing technology.
In conclusion, simply improving the efficiency of equipment does not make cloud computing green; what matters is making its usage more carbon efficient from both the user's and the provider's perspective. Cloud providers need to reduce the electricity demand of clouds and take major steps toward renewable energy sources rather than merely minimizing costs. Real-world systems are often distributed over multiple data centres, and the energy implications of this deserve further consideration. Tests were run with virtual machines under the KVM hypervisor to evaluate the energy overhead of migration; conclusions were drawn about the likely overhead of other services during migration, but further testing with other hypervisors and other cloud services should be conducted to verify their performance. The impact of data compression and deduplication techniques on migration overhead should also be explored. Determining when particular workloads can be migrated with low overhead can reduce replication costs, and information about the migratability of each application, along with its deferability, might play a role in future service pricing. As networks become more energy proportional and green routing techniques are more widely used, the energy costs and carbon emissions associated with the network should be evaluated more closely.

