Optical Burst Switching (OBS) is a proposed communication technology that seeks to expand the use of optical technology in switching systems. In this paper we propose a scheme to minimize contention and decrease the burst loss probability in OBS networks. The key idea of the paper is that buffering is implemented in the electronic domain. In addition, we elaborate on the proposed contention avoidance mechanism and evaluate the system performance in terms of burst loss probability, steady-state throughput, load balancing, and energy. We also show through simulation that the proposed protocol is a viable solution for effectively reducing contention and increasing bandwidth utilization in optical burst switching.
Keywords
Just-In-Time (JIT), Optical Burst Switching (OBS), Loss Probability, Contention Avoidance, Throughput
INTRODUCTION
Contention resolution is necessary when two or more bursts try to reserve the same wavelength of a link at the same time; this is called external blocking. In OBS, when two or more bursts contend for the same wavelength for the same time duration, only one of them is allotted the bandwidth. The novel idea of this kind of network is to keep the information in
the optical domain as long as possible. This allows the system to overcome the limitations imposed by the electronic
processing and opto-electronic conversion, leading to high-speed data forwarding and high transparency. In principle, the
OEO conversion limits the overall transmission speed of the optical fiber system. Thus, many research works have addressed this problem, with many suggestions aiming to overcome the OEO hurdle and build an All-Optical Network (AON). On the way to an AON, and especially due to the lack of advanced optical devices that can effectively replace their electronic counterparts, optical burst switching has gained great attention, as it represents a good compromise between Optical Circuit Switching
(OCS) and Optical Packet Switching (OPS). In this architecture, electronic switches are replaced by optical switches that
can handle the optical information. In this paper we will be interested in Optical Burst Switching (OBS) as a forwarding
technique. In OBS, data packets are collected into bursts according to their destination and class of service. Then, a control
packet is sent over a specific optical wavelength channel to announce an upcoming burst. The control packet, also called the Optical Burst Header (OBH), is then followed by a burst of data without waiting for any confirmation. The OBH is
converted to the electrical domain at each node to be interpreted and transformed according to the routing decision taken at
the nodes, and pertinent information is extracted such as the wavelength used by the following data burst, the time it is
expected to arrive, the length of the burst and the label, which determines the destination. This information is used by the
switch to schedule and set up the switching circuit for the incoming data burst. This scenario implies the following.
• Since OBS is designed to be employed mainly in long-haul optical networking, one-way reservation protocols like “just-enough-time” (JET) and “just-in-time” (JIT) are the most suitable for achieving ultra-low-latency burst transport. Indeed, the delay of a two-way reservation protocol would degrade the service drastically.
• The burst must wait at the ingress node for a predetermined time, called offset time, to account for the Control
Packet (CP) processing time. This way, the burst will arrive at the core node only when the switch fabric is configured to
bypass it.
• In the core nodes, the control packets contend for available resources, i.e., Wavelength Division Multiplexing
(WDM) or Optical Code Division Multiple Access (OCDMA). Consequently, failing Control Packets (CPs) and their ensuing bursts will be blocked, which, in turn, results in the loss of a large number of packets, as one burst may extend from a single packet to a whole session.
Many efforts have been exerted by researchers to present mathematical models which analyze the performance of
OBS networks. Shalaby proposed a simplified mathematical model to study the performance of an OBS core node, assuming a Bernoulli distribution for arrivals per time slot, which proved to be a good assumption up to a certain traffic load when compared with simulation results that assumed a Poisson arrival distribution. Morsy et al. proposed an enhanced mathematical model for the performance evaluation of OBS core nodes in order to relax some of the given constraints. In addition, researchers have addressed the contention problem on many occasions. Akar et al. elaborated on a Wavelength Division Multiplexing (WDM) system and suggested using wavelength conversion for contention resolution. Sowailem et al.
proposed a new system that employs the code domain instead of the wavelength domain. In fact, they adopted Spectral
Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) techniques. They have shown that the SAC-OCDMA system outperforms the traditional WDM system, as it can handle more users, but it suffers from higher complexity. In both systems, the improvement in system performance is correlated with the number of converters.
The aim of this paper is to add a new feature, namely control packet buffering, to the MAC layer of the OBS
network as a new contention resolution technique. This feature does not depend on the medium access technique and might
be regarded as a new modification to the JIT one-way reservation protocol. Therefore, it can be easily implemented either
above a SAC-OCDMA or a WDM-based optical layer. The key idea of this feature is that a Control Packet that fails in
reserving its required resource will not be dropped immediately, but rather electronically buffered for some threshold time x, which is determined at the ingress node according to each burst's duration. Meanwhile, the required resource may be released and consequently reserved, in a delayed manner, for the new burst. Otherwise, the Control Packet (CP) will be dropped, and the ensuing Data Burst (DB) will be lost. This way, the probability that a burst is dropped, namely the per-node burst loss probability, is decreased. This suggestion requires some modification of the burst offset time, in order to avoid the burst arriving while the core nodes are still not ready to bypass it.
This paper is organized as follows. The system description is presented in Section II. Section III is devoted to the performance analysis. In Section IV, we present some numerical results for the derived performance measures. Finally, our conclusions are given in Section V.
SYSTEM DESCRIPTION
JIT One-Way Reservation Protocol
The JIT one-way reservation protocol is one of the main protocols suggested to be used in optical burst switched
networks. As explained, the protocol is in general based on two main features. |
• Immediate channel reservation: After CP processing, the core node immediately reserves the required resource, if
available, and a channel busy period is declared although the burst has not arrived yet. |
• Explicit channel release: The resource is kept busy until the core node receives an explicit release signal. This takes some time after the burst switching process.
In JIT, a wavelength is reserved for a burst immediately after the processing of the corresponding control packet. If a wavelength cannot be reserved at that time, then the control packet is rejected and the corresponding burst is dropped. Consequently, JIT has a higher blocking probability than JET and horizon scheduling.
In OBS, a reservation is considered immediate if the wavelength is reserved immediately upon arrival and processing of the control packet, and delayed if the reservation period is postponed until the time when the burst is expected to arrive. A release is considered immediate if the wavelength is released as soon as the burst has passed, and delayed if it is held until an explicit release signal arrives. Therefore, four categories of scheduling are possible, among which are delayed reservation with immediate release, often referred to as just-enough-time (JET), and immediate reservation with delayed release, often referred to as just-in-time (JIT).
Fig. 1 shows how immediate reservation works, by considering the operation of a single output wavelength of an OBS node. Each such wavelength can be in one of two states: reserved or free. Fig. 1 shows two successive bursts, i and i + 1, successfully transmitted on the same output wavelength. As we can see, the setup message corresponding to the i-th burst arrives at the switch at time t1, when we assume that the wavelength is free. This message is accepted, the status of the
wavelength becomes reserved and, after an amount of time equal to the offset, the first bit of the optical burst arrives at the
switch at time t2. The last bit of the burst arrives at the switch at time t3, at which instant the status of the wavelength is
updated to free.
Let t be the time a setup message arrives at some OBS node along the path to the destination user. As the figure
shows, once the processing of the setup message is complete at time t+Tsetup, a wavelength is immediately reserved for the
upcoming burst, and the operation to configure the OXC fabric to switch the burst is initiated. When this operation
completes at time t+Tsetup +TOXC, the OXC is ready to carry the burst. Note that the burst will not arrive at the OBS node
under consideration until time t + Toffset. As a result, the wavelength remains idle for a period of time equal to (Toffset -
Tsetup - TOXC). Also, since the offset value decreases along the path to the destination, the deeper inside the network an
OBS node is located, the shorter the idle time between the instant the OXC has been configured and the arrival of the burst.
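To make the timing relation above concrete, the following minimal sketch computes the idle period of a reserved wavelength between the end of OXC configuration and the arrival of the burst. The function name and the numerical values are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of the JIT timing relation above; the parameter values are
# illustrative assumptions, not figures from the paper.

def jit_idle_time(t_offset: float, t_setup: float, t_oxc: float) -> float:
    """Idle time of the reserved wavelength between the end of OXC
    configuration and the arrival of the burst: Toffset - Tsetup - TOXC."""
    idle = t_offset - t_setup - t_oxc
    if idle < 0:
        raise ValueError("offset too small: burst would arrive before the OXC is configured")
    return idle

# Example (hypothetical values in microseconds): 100 us offset,
# 10 us setup processing, 20 us OXC configuration -> 70 us idle.
print(jit_idle_time(100.0, 10.0, 20.0))
```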
Proposed Control Packet Buffering With JIT
A detailed description of the network ingress, egress, and core nodes is presented. In this paper, we are mainly interested in explaining the Control Packet buffering feature; thus, we adopt the case of no resource conversion. According to our proposal, new functions must be added to the ingress and core nodes as follows:
In addition to its main job, the ingress node assigns to each Control Packet, prior to its transmission, a threshold time that is directly proportional to its burst length. Furthermore, it increases the offset time of the burst at the ingress node by c times the assigned waiting time, where c is the expected number of congested hops on the expected path of the burst. This can be easily calculated at the ingress node based on the congestion statistics (this process is incorporated in the offset time generator, Fig. 3). It should be noticed that the increment in the offset time is not constant for all bursts, as the assigned waiting time and the parameter c differ from one burst to another. This variable offset time is necessary to help resolve the contention problem. The purpose of limiting the buffering time is that an uncontrolled waiting time might cause intolerable delays. In addition, the waiting time might be longer than expected, so that the Data Burst arrives before the appropriate resources have been reserved. In that case the buffering is not only useless, it also wastes resources already reserved at preceding nodes. Furthermore, the proportionality between the threshold time and the burst length implies that the burst loss probability follows the burst length; in other words, it is less likely that bursts comprising a larger number of packets will be blocked. Finally, we may summarize the difference between the CP buffering feature and standard offset-time-based QoS provisioning. Offset-time-based QoS provisioning is essentially concerned with classifying bursts according to their priority and assigning different extra offset times to different classes, so that higher priority classes have privilege over other classes, mainly in terms of burst loss probability. The purpose of this technique is to achieve higher reliability for mission-critical and real-time applications by providing a lower blocking probability, lower time jitter, etc. Our proposal, on the other hand, focuses on fairly improving the system burst loss probability by allowing the blocked CP to be saved in the core node buffer for a predetermined time, since in the meantime the contended resource might be released. Moreover, since longer bursts carry a larger amount of information, a judicious waiting time (patience) assignment implies making the waiting time proportional to the burst length. Consequently, the CP buffering feature, as suggested in our paper, does not isolate traffic classes. However, the flexibility of the proposed feature and mathematical model makes it possible to investigate the introduction of QoS issues with the JIT protocol.
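A minimal sketch of the two added functions is given below, assuming a simple proportionality constant k between burst length and threshold time; the names (ControlPacket, ingress_assign_timing, core_handle_cp, k, expected_congested_hops) are ours for illustration, and the paper does not prescribe a concrete implementation.

```python
# Sketch of the proposed CP buffering feature; all identifiers are assumptions
# made for illustration, not the paper's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ControlPacket:
    burst_length: float          # length of the ensuing data burst
    threshold_time: float = 0.0  # buffering "patience" assigned at the ingress
    offset_time: float = 0.0

def ingress_assign_timing(cp: ControlPacket, base_offset: float,
                          k: float, expected_congested_hops: int) -> ControlPacket:
    """Ingress node: threshold time proportional to the burst length, and offset
    time enlarged by c x the assigned waiting time (c = expected congested hops)."""
    cp.threshold_time = k * cp.burst_length
    cp.offset_time = base_offset + expected_congested_hops * cp.threshold_time
    return cp

def core_handle_cp(cp: ControlPacket, resource_busy: bool,
                   buffer: List[Tuple[ControlPacket, float]],
                   buffer_size: int, now: float) -> str:
    """Core node: reserve if the resource is free; otherwise buffer the CP
    electronically until now + threshold_time, or drop it if the buffer is full."""
    if not resource_busy:
        return "reserved"
    if len(buffer) < buffer_size:
        buffer.append((cp, now + cp.threshold_time))  # deadline after which the CP reneges
        return "buffered"
    return "dropped"  # the ensuing data burst will be lost
```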
PERFORMANCE ANALYSIS
In this section, the performance of the signaling and reservation protocol, referred to as just-in-time domain-level signaling (JIT-DS), is evaluated. The performance is measured on the basis of the offset-time delay and the end-to-end data transmission latency.
Offset-Time Calculation
The offset time under JIT is calculated from the per-hop weight and the number of hops as
Toff= ((wt*10)+(h[c]*10)).
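A direct, runnable transcription of this expression is given below, interpreting wt as the per-hop weight and h[c] as the hop count, as suggested by the surrounding text; the factor of 10 and the time unit are taken as given.

```python
def offset_time(wt: float, hops: int) -> float:
    """Toff = (wt*10) + (h[c]*10), as given in the text."""
    return (wt * 10) + (hops * 10)

# Example with assumed values: weight 2 and 10 hops give an offset of 120 time units.
print(offset_time(2, 10))
```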
Burst Loss Probability
Our next target is to calculate the per-node burst loss probability. First, let us explicitly define the two cases in which a burst will be lost.
1. When a CP finds the system full upon arrival, i.e., its required resource is reserved and the buffer assigned to this resource is full. This case is evaluated assuming a buffer of size m and using the properties of the Poisson arrival process.
2. When a CP joins the queue but then reneges, i.e., it is discarded after waiting for its threshold time without obtaining the resource. To evaluate this case, it is assumed that the CPs are served in a first-in first-out (FIFO) manner, and the movement of the CP is tracked from its initial position in the queue until its departure.
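The two loss events above can be illustrated with a small Monte Carlo sketch of a single wavelength served through an m-slot CP buffer. The distributional choices (Poisson CP arrivals, exponential burst durations, exponential patience times) and all parameter names are our assumptions for illustration; the paper's analytical model is not reproduced here.

```python
import heapq
import random

def per_node_loss(lam: float, mu: float, theta: float, m: int,
                  n_arrivals: int = 100_000, seed: int = 1) -> float:
    """Estimate the per-node burst loss probability for one wavelength with an
    m-slot CP buffer: losses are CPs blocked on arrival (system full) plus CPs
    that renege after waiting longer than their threshold (patience) time."""
    rng = random.Random(seed)
    t = 0.0
    wavelength_free_at = 0.0   # completion time of the last scheduled burst
    in_system = []             # min-heap of departure times of CPs in the node
    lost = 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)                 # next CP arrival (Poisson)
        while in_system and in_system[0] <= t:    # purge CPs that already left
            heapq.heappop(in_system)
        if len(in_system) >= m + 1:               # wavelength busy and buffer full
            lost += 1                             # loss event 1: blocked on arrival
            continue
        patience = rng.expovariate(theta)         # threshold (waiting) time
        start = max(t, wavelength_free_at)        # FIFO service start
        if start - t > patience:                  # loss event 2: CP reneges
            lost += 1
            heapq.heappush(in_system, t + patience)
            continue
        completion = start + rng.expovariate(mu)  # burst (service) duration
        wavelength_free_at = completion
        heapq.heappush(in_system, completion)
    return lost / n_arrivals

# Example with assumed parameters: offered load lam/mu = 0.8,
# mean patience of 2 burst durations (theta = 0.5), buffer size m = 5.
print(per_node_loss(lam=0.8, mu=1.0, theta=0.5, m=5))
```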
Arrival Rate
The arrival rate of the system is calculated from the system weight at each hop and is represented as ar:
ar=(((w*10)-i)/w);
Offered Load
The offered load of the system is generated from the arrival rate and the system load, which are calculated as
ar=(((w*10)-i)/w);
ef=(ar*10)/w;
where ar is the arrival rate and ef is the effective load. The offered load is then given by ef + h[b], where h[b] is the number of hops.
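The expressions above can be transcribed directly into runnable form; the parameter meanings (w as the weight, i as the hop index, h[b] as the hop count) follow the surrounding text, and the constant factors and units are taken as given.

```python
def arrival_rate(w: float, i: float) -> float:
    """ar = ((w*10) - i) / w, as given in the text."""
    return ((w * 10) - i) / w

def effective_load(ar: float, w: float) -> float:
    """ef = (ar*10) / w, as given in the text."""
    return (ar * 10) / w

def offered_load(ef: float, hops: float) -> float:
    """Offered load = ef + h[b], as given in the text."""
    return ef + hops

# Example with assumed values w = 2, i = 1, h[b] = 10.
ar = arrival_rate(2, 1)
print(offered_load(effective_load(ar, 2), 10))
```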
Threshold Time
The threshold time of the system is calculated from the number of hops of the burst and is represented as Tth:
Tth= (h[c]*10);
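As a runnable form of the expression above, with h[c] interpreted as the hop count and the factor of 10 taken as given:

```python
def threshold_time(hops: int) -> float:
    """Tth = h[c]*10, as given in the text."""
    return hops * 10

# Example: 10 hops give a threshold of 100 time units.
print(threshold_time(10))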
Throughput
The second valuable parameter for measuring the system performance is the steady-state system throughput β, which is defined as the number of successful bursts within a time interval equal to the burst duration. Thus
β = (average arrivals / burst duration) × probability of success.
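A hedged transcription of the throughput definition follows; the probability of success would come from the loss analysis above, and the values in the example are placeholders.

```python
def throughput(avg_arrivals: float, burst_duration: float, p_success: float) -> float:
    """beta = (average arrivals / burst duration) x probability of success."""
    return (avg_arrivals / burst_duration) * p_success

# Example with placeholder values: 8 arrivals per burst duration of 1 unit and a
# success probability of 0.9 give a throughput of 7.2 bursts per burst duration.
print(throughput(8, 1.0, 0.9))
```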
Energy
The energy of the system is derived from the effective load, represented by ef, and is calculated as
ef=(ar*10)/w;
lp=ef/(1+ef);
where ef is the effective load and lp is the energy.
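Transcribing the two expressions above (the names ar, w, ef, and lp follow the text, and the interpretation of lp as the energy metric is taken as given):

```python
def energy(ar: float, w: float) -> float:
    """lp = ef / (1 + ef), with ef = (ar*10) / w, as given in the text."""
    ef = (ar * 10) / w
    return ef / (1 + ef)

# Example with assumed values for the arrival rate and weight.
print(energy(ar=0.95, w=2))
```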
NUMERICAL RESULTS
In this evaluation, we assume a buffer size m = 5, an average burst length L = 1000 kbits, and h = 10 hops, and apply the proposal to a WDM system with 62 channels at a bit rate of 100 Gbps for each user. First, the per-node burst loss probability is plotted in Fig. 5 versus the offered load under different values of loss probability. Needless to say, the per-node burst loss probability increases with the offered load, since the higher the offered load, the more likely it is that the required resources are found reserved. Moreover, inspecting Fig. 4, we find that the burst loss probability curve is improved by reducing the reneging rate, i.e., by increasing the average patience time. This is quite expected, as it means that the Control Packet is allowed to wait a longer time in the queue before quitting. Simply stated, it becomes more likely for the required resource to be released before the core node discards the buffered CP.
In Fig. 5, the steady-state throughput is plotted versus the average number of burst arrivals per burst duration. Observing this figure, we find the normal behavior of the system, in which the throughput increases rapidly for small values of the average burst arrivals and then more gradually as the number of arrivals increases. This effect becomes more obvious as the average number of arrivals grows. That is, the proposed feature makes the system capable of handling higher traffic, and allowing the control packet to wait longer in the buffer strengthens this capability.
In Fig. 6, the explicit relationship between the MAC burst loss probability and the average threshold waiting time is illustrated. This figure indicates that the improvement in system behavior comes at the expense of the delay that the burst would experience. In this way, the system performance can be enhanced with only a limited number of resource converters. It also means that fewer than one in every 100 Control Packets would be saved in the buffer, and hence the effect of the waiting time on the traffic of the preceding nodes can be safely neglected.
Finally, the energy parameter is introduced, which contributes to reducing the loss probability and congestion, leading to a lower traffic flow of bursts, and which determines the energy taken for data burst transmission from source to destination. The energy of the system is based on the time calculation of the bursts: the burst count is inversely proportional to the energy, in the sense that the energy increases only when the time period between bursts is small. In Fig. 7, the relationship between the number of transmitted bursts and the energy is depicted. Observing this figure, we find the normal behavior of the system, in which the energy decreases rapidly as the number of transmitted bursts increases. This behavior thus contributes to reducing the loss probability and congestion.
CONCLUSION
In this paper we have proposed a new solution to the contention problem in OBS networks by means of control packet buffering. This suggestion can be easily implemented with the Just-In-Time (JIT) protocol without any extra requirements. Moreover, the buffering time is restricted to a certain value and the offset time is increased accordingly. The most interesting part of this proposal is that the buffer is implemented in the electronic domain. In this way, the proposal avoids the system complexity that accompanies optical-domain solutions such as code or wavelength converters, fiber delay lines (FDLs), etc., so the system complexity is strongly reduced at the cost of only a minor delay. The scheme can also be used to provide QoS by assigning longer threshold times to bursts belonging to higher-priority classes. Finally, a busy tone is calculated at the intermediate nodes as well as at the source and destination nodes, so that the system can be analyzed to reduce congestion for upcoming burst transmissions.
References
- C. Qiao and M. Yoo, "Optical burst switching (OBS) - a new paradigm for an optical Internet," J. High Speed Netw., vol. 8, pp. 69-84, Jan. 1999.
- X. Yu, J. Li, X. Cao, Y. Chen, and C. Qiao, "Traffic statistics and performance evaluation in optical burst switched networks," J. Lightw. Technol., vol. 22, no. 12, pp. 2722-2738, Dec. 2004.
- X. Yu, Y. Chen, and C. Qiao, "Study of traffic statistics of assembled burst traffic in optical burst switched networks," in Proc. Optical Networking and Commun. Conf. (OptiComm), 2002, pp. 149-159.
- S. Oh and M. Kang, "A burst assembly algorithm in optical burst switching networks," in Proc. OFC, 2002, pp. 771-773.
- T. Battestilli and H. Perros, "An introduction to optical burst switching," IEEE Commun. Mag., vol. 41, no. 8, pp. S10-S15, Aug. 2003.
- I. Baldine, G. N. Rouskas, H. G. Perros, and D. Stevenson, "JumpStart: A just-in-time signaling architecture for WDM burst-switched networks," IEEE Commun. Mag., vol. 40, no. 2, pp. 82-89, Feb. 2002.
- J. Y. Wei and R. I. McFarland, "Just-in-time signaling for WDM optical burst switching networks," J. Lightw. Technol., vol. 18, no. 12, pp. 2019-2037, Dec. 2000.
- H. M. H. Shalaby, "A simplified performance analysis of optical burst-switched networks," J. Lightw. Technol., vol. 25, no. 4, pp. 986-995, Apr. 2007.