Video Transmission in Wireless Mesh Networks


1. Introduction:


There has recently been considerable research interest in Wireless Mesh Networks (WMNs), which provide cheap and efficient network connectivity over a large region. Using multiple paths for video communication offers several significant advantages, such as load balancing, potentially higher video bit rates, and improved error resilience (Vishnu Navda).

The growth of video applications has been explosive. Municipalities now deploy video applications to provide round-the-clock surveillance of critical areas of their communities, and video streams over wireless mesh networks monitor corridors and traffic grids during real-time events. Business users depend on video conferencing to enhance productivity and reduce travel, and consumers in large numbers access on-demand video sites, generating millions of unique video streams every day.

Service providers, in turn, are looking for ways to exploit this customer demand by delivering differentiated streaming, entertainment, and on-demand video services with real-time access (BelAir Networks).

Video over wireless mesh networks is also enabling dynamic new applications, including mounting video cameras on buses, trains, police cruisers, ships, and ambulances. These video surveillance capabilities allow safety personnel and traffic responders to react better and faster and to make more informed decisions (BelAir Networks).

1.1 The ideal video network:

The BelAir Networks wireless mesh architecture provides carrier-grade performance for creating consistent, cost-effective video networks that scale to support municipalities, including local businesses, neighborhoods, school campuses, and traffic corridors. The BelAir wireless network provides Quality of Service (QoS) with low latency and minimal jitter, making it well suited to bandwidth-intensive applications such as multimedia and video.

By using wireless backhaul to transport high-bandwidth video traffic, wireless mesh networks eliminate long cable runs. BelAir wireless mesh networks also supply mobile access to this video traffic, so police cruisers and first responders can share real-time coverage of incidents.

BelAir Networks wireless mesh products can support video applications alongside other public works, public safety, and public access networks, as shown in Figure 1. Crystal-clear voice service for civil employees allows cities to save on cellular costs by establishing their own Voice over Internet Protocol (VoIP) network. Through a public access network, community groups, tradespeople, schools, visitors, taxpayers, and even remote municipal employees can easily reach government information resources and Internet-based information from anywhere in the world. These capabilities enhance the value of the network and make sound financial sense (BelAir Networks).

1.2 Ganges Wireless Mesh Network:

To monitor this Ganges network we follow the schematic diagram shown below.

2. PROBLEM FORMULATION in Ganges Architecture:

2.1 Routing Problem:

Streaming video has high bandwidth requirements. The routing problem is to determine the paths between the CAN and each video source; by using the available bandwidth effectively we can obtain good throughput. All flows terminate at the CAN, which is the root of the aggregation tree, with the sources as leaf and intermediate nodes.

The channel capacity limits the total number of bytes the CAN can receive per unit time, which is an upper bound on the sum of the throughputs of all flows. The actual aggregate throughput is usually much less than this bound, because all nodes operate in the same frequency band and contend for the channel with every node within their sensing range. Intra-flow contention arises when the hops of a multi-hop path carrying the same flow compete with each other; it limits the total throughput along a multi-hop path in the network.

Conversely, when two or more flows come together, the capacity is shared between them and the throughput of each flow is reduced (inter-flow contention). In a single-channel mesh network it is difficult to eliminate intra-flow contention, but choosing spatially disjoint routes for different flows reduces inter-flow contention and improves the throughput of each flow (Vishnu Navda).

2.2 Loss of Packet and Delay-Jitter Problem:

Over multi-hop wireless networks, packets sent during a real-time video transmission can be lost in two different ways. First, packets may be received corrupted due to channel errors; the 802.11 MAC uses retransmissions to improve reliability. Second, every packet has its own deadline by which it must reach the destination for the video to play back on time.

Packets that arrive late are discarded and counted as lost, which degrades the online video session and reduces quality. The delay experienced by a flow of packets can vary widely with network congestion and packet losses. A playback buffer is used to absorb this jitter; it adds some delay between actual streaming and playback, and packets that arrive in time are buffered before being played back. A smaller buffer size implies lower delay, but the playback buffer must never run empty if the video is to play back without interruption.
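
To make the buffering step concrete, here is a minimal sketch of such a playback (jitter) buffer in Python. The class name, the fixed playback_delay, and the timestamp convention are illustrative assumptions, not part of the Ganges design:

```python
import heapq

class PlaybackBuffer:
    """Minimal jitter buffer: hold each packet until its playback time,
    which is its capture timestamp plus a fixed startup delay."""
    def __init__(self, playback_delay):
        self.playback_delay = playback_delay  # startup delay, in seconds
        self.heap = []                        # (timestamp, seq, packet)
        self._seq = 0                         # tie-breaker for equal timestamps
        self.late = 0                         # packets counted as lost

    def on_arrival(self, timestamp, packet, now):
        if now > timestamp + self.playback_delay:
            self.late += 1                    # missed its deadline: discarded
        else:
            heapq.heappush(self.heap, (timestamp, self._seq, packet))
            self._seq += 1

    def on_playback_tick(self, now):
        # Release, in timestamp order, every packet whose deadline has come.
        due = []
        while self.heap and self.heap[0][0] + self.playback_delay <= now:
            due.append(heapq.heappop(self.heap)[2])
        return due
```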

3. System Design in Ganges Architecture:

In this system design, we explain how to determine the maximum rate at which the video flows can be streamed and how to construct the routing tree. We then present several adjustments at the routers that reduce packet delay jitter and handle packet losses, improving the quality of the video streams.

3.1 Aggregation Tree Construction and Rate-based Flow Control:

A simple grid network is shown in Figure 3(a) for multi-stream aggregation, used to analyze the impact of routing on the available bandwidth of each flow. Figures 3(b) and 3(c) depict two different sets of routes for each flow from the source nodes to the root. With RTS/CTS enabled, all links compete with each other in both tree instances.

Example showing that the aggregate throughput of all contending flows is the same no matter where they merge. (a) Connectivity graph. (b), (c) Two instances of the aggregation tree.

In the first case (Figure 3(b)), there are 6 competing transmitters and each gets an equal share of the channel capacity (i.e., C/6). So the highest achievable aggregate throughput at the root is 3 * C/6 = C/2.

In the second case (Figure 3(c)), where the three flows merge before reaching the root, there are only 4 transmitters, so each node gets a 1/4 share of the channel bandwidth and the aggregate throughput at the root is C/4. But if the sources are limited to sending at a rate of C/6, the intermediate relay node gets to use the remaining channel time, i.e., C/2, and the aggregate throughput is the same as in case 1. Thus, with flow control, merging two or more contending flows does not reduce the aggregate throughput.

Figure: Four flows aggregated along disjoint contending paths. Merging flows close to the sources frees edges for the remaining flows, making spatially disjoint (possibly longer) paths available and increasing per-flow throughput.

Another advantage is that when two or more flows are merged, fewer edges are used for video transmission, which increases the chances of finding spatially disjoint paths for the other flows. Although all flows ultimately interfere near the root, the total bandwidth at the root is higher, and therefore the per-flow bandwidth is higher.

3.2 Distributed Spatial Aggregation Tree Construction:

A greedy algorithm sequentially assigns the best route to each flow. For each source node v, the goal is to determine a path to the root s that incurs contention only in the last few hops; this is known as the Spatial-Path search. Such paths have length at most L + 1 hops, where L is the hop distance of node v from the root. If this constraint cannot be satisfied, the algorithm switches to the Compact-Path search.

The flow value f(u) is the number of flows carried by node u, and the blocking value b(u) of a node u is the number of contending transmitters within the one-hop neighborhood of u.

Now consider the search for a route from the root s to a source node v. The search starts in Spatial-Path mode, which considers the subgraph consisting of the nodes in the set {s} ∪ N(s) ∪ R, where N(s) is the set of one-hop neighbors of s and R is the set of all nodes with f(u) = 0 and b(u) = 0.

The cost of every edge is 1. A Spatial-Path search for a flow succeeds if a shortest path exists in this subgraph whose length is at most h(v) + 1, where h(v) is the hop distance of v from the root.
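
The following Python sketch illustrates this two-phase search. The graph representation (an adjacency dict `neighbors`) and the dictionaries `f`, `b`, and `hop_dist` are assumed inputs; the paper's exact data structures are not specified, so treat this as one plausible rendering of the idea:

```python
from collections import deque

def bfs_shortest_path(neighbors, src, dst, allowed):
    """Unit-cost shortest path from src to dst restricted to `allowed` nodes."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in neighbors[u]:
            if v in allowed and v not in prev:
                prev[v] = u
                q.append(v)
    return None

def assign_route(neighbors, root, source, f, b, hop_dist):
    """Greedy route assignment for one flow: restrict the search to
    unloaded, uncontended nodes (f = 0, b = 0) plus the root's one-hop
    neighborhood, and accept the path only if it is at most one hop
    longer than the source's hop distance to the root."""
    R = {u for u in neighbors if f[u] == 0 and b[u] == 0}
    allowed = {root} | set(neighbors[root]) | R | {source}
    path = bfs_shortest_path(neighbors, source, root, allowed)
    if path is not None and len(path) - 1 <= hop_dist[source] + 1:
        return path  # Spatial-Path succeeded
    # Otherwise fall back to a Compact-Path search over the full graph.
    return bfs_shortest_path(neighbors, source, root, set(neighbors))
```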

3.3 Rate-based Flow Control Algorithm:

After establishing the routes, we must check whether the aggregation tree can support the highest per-flow bit rate so that network resources are shared efficiently across all flows. Rate control prevents some flows from aggressively sending traffic while other flows starve. Without it, the sources can stream more data than the tree root can handle, reducing per-flow throughput and causing packet losses in the network.

A binary search scheme is used to find the optimal operating rate. All sources start streaming at some minimum constant bit rate. At each step, the throughput at the root is evaluated and the offered load at each source is doubled. The doubling stops when the throughput of one or more flows drops below the offered load.

In the subsequent steps the load is reduced by half of the preceding increment, as in standard binary search, and the reduction continues until the throughput again matches the offered load. These two processes of halving and doubling are repeated until the throughput of each flow stabilizes at its greatest value.
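
A compact sketch of this doubling-then-halving search, assuming a black-box `measure_throughput(rate)` that streams all sources at the given rate and reports the minimum per-flow throughput observed at the root (the function name, tolerance, and iteration cap are illustrative):

```python
def binary_search_rate(measure_throughput, r_min, tol=1e-3, max_iter=30):
    """Find the highest per-source rate the tree can sustain."""
    rate = r_min
    # Phase 1: exponential growth while the doubled load is still met.
    for _ in range(max_iter):
        if measure_throughput(rate * 2) < rate * 2:
            break
        rate *= 2
    lo, hi = rate, rate * 2     # last sustainable rate, first unsustainable
    # Phase 2: binary search between them.
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        if measure_throughput(mid) >= mid:
            lo = mid            # mid is sustainable, push higher
        else:
            hi = mid            # mid starves some flow, back off
        if hi - lo < tol:
            break
    return lo
```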

3.4 Delay Jitter Reduction Techniques:

To smooth packet delay jitter, the CMN maintains a per-flow playback buffer. The delay introduced for a flow depends on the size of the buffer and the one-way latency of the path.

A larger buffer introduces more delay into the playback of real-time video, so a small bounded delay is desirable in order to use a limited playback buffer. To reduce end-to-end delay variations, we design the following optimizations for the intermediate routers.

3.5 Packet Reordering Schemes:

In this scheme, an intermediate router reorders the packets in its queue based on two criteria. First, packets with a lower delay budget need to be delivered before packets with a higher delay budget, so the router orders packets by their per-packet delay budgets. Second, when the router carries traffic from multiple flows and the instantaneous throughput of a particular flow drops below its assigned bit rate, packets of that flow are prioritized.

Packets that are behind in the queue experience a larger delay, while the average delay of the other packets increases only by a small fraction. A router constantly measures the instantaneous rate of each flow and assigns higher priority to flows with lower instantaneous throughput. If a particular flow is starving and its rate falls below the allocated rate, assigning a higher priority to the packets of that flow alleviates the starvation.
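
A minimal sketch of such a reordering queue, combining the two criteria. The class layout, the `delay_budget` field, and the binary starving/not-starving split are assumptions made for illustration:

```python
import heapq

class ReorderingQueue:
    """Transmit queue served by (starvation flag, delay budget):
    starving flows jump the queue; within a class, packets with the
    smallest delay budget (closest deadline) go first."""
    def __init__(self, allocated_rate):
        self.allocated_rate = allocated_rate   # flow_id -> bits/s
        self.measured_rate = {f: 0.0 for f in allocated_rate}
        self.heap, self._seq = [], 0

    def push(self, flow_id, delay_budget, packet):
        # 0 sorts before 1, so packets of under-served flows come out first.
        starving = 0 if self.measured_rate[flow_id] < self.allocated_rate[flow_id] else 1
        heapq.heappush(self.heap, (starving, delay_budget, self._seq, flow_id, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self.heap) if self.heap else None
```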

3.6 Early-Drop Scheme:

Using the path latency information, a router estimates the expected time at which a packet in its transmit queue would reach the CMN. Packets that cannot reach the CMN before their playback deadline are dropped early rather than forwarded, so they do not waste channel capacity.
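
A sketch of the early-drop test under this interpretation; `remaining_latency` would come from the per-hop latency estimates toward the CMN, and the per-packet "deadline" field is an assumed representation:

```python
def purge_transmit_queue(queue, now, remaining_latency):
    """Early-drop: discard any queued packet that can no longer reach the
    CMN before its playback deadline, so it does not waste airtime."""
    kept, dropped = [], []
    for pkt in queue:
        if now + remaining_latency > pkt["deadline"]:
            dropped.append(pkt)   # would arrive late anyway: drop it now
        else:
            kept.append(pkt)
    return kept, dropped
```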

Problem formulation in Wireless Mesh Networks:

A wireless mesh network is considered as a group of nodes among which connectivity exists. Through appropriate scheduling mechanisms or physical-layer mechanisms, transmissions by different groups of nodes need not interfere with each other.

A multi-channel, multi-radio environment is one example: transmissions by a group of nodes will not interfere with neighboring nodes if channels are assigned appropriately among the radios. In another instance, the physical or MAC layer uses OFDM, and the frequency carriers at each node can be assigned appropriately, again reducing interference between nodes. A wireless mesh network can thus be modeled as a graph G = (V, E), where V is the group of nodes and E is the group of wireless links, and the capacity of each wireless link can be calculated.

A mean packet loss probability due to transmission errors can be assumed for each link. We consider a group of video communication sessions in the wireless mesh network; each session has a source node and a destination node, and for each source-destination pair there is a set of candidate paths. The video stream starts at the source node with a total rate that is bounded above and below, the bounds being determined by the specific video encoder and the video sequence used at the source. The rate of the video stream is split across the candidate paths, and assigning a rate of zero to a particular path means that path is not selected. In this way, rate allocation subsumes path selection.

The elements of the rate vector must satisfy the conditions below.
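
A plausible reconstruction of these conditions (the original symbols were lost in extraction, so the notation here is an assumption consistent with the surrounding text): let r_{k,j} be the rate assigned to path j of session k.

```latex
% Assumed notation: \mathcal{P}_k is the candidate path set of session k,
% R_k its total stream rate with encoder-determined bounds.
r_{k,j} \ge 0 \quad \forall j \in \mathcal{P}_k, \qquad
\sum_{j \in \mathcal{P}_k} r_{k,j} = R_k, \qquad
R_k^{\min} \le R_k \le R_k^{\max}
```

Under this reading, a path with r_{k,j} = 0 is simply not selected, which is exactly how rate allocation subsumes path selection above.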

4. Literature Review

4.1 Channel Video Transmission:

Telecommunication allows information to be exchanged across long distances by means of mail, radio, television, telephone, and the Internet. Samuel Morse sent his first message over a telegraph line between Washington and Baltimore on May 24, 1844, opening a new page in the history of modern telecommunications. Improved technologies in computers, telecommunications, semiconductors, and wireless communications have continually changed its objectives and features, and new radio transmission applications appear every day. People can now enjoy a wide range of signals, such as audio, pictures, text, and video, but today's information exchange must also be far cheaper and faster, and these demands drive the continuing trends in how information travels from person to person.

Modern telecommunications allow the exchange of information among "anyone", at "any time", and "anywhere". Representing video signals requires a large amount of data compared to other types of signals, namely text, audio, and images. Broadcast-quality video as defined by the National Television System Committee (NTSC) requires a bandwidth of about 45 megabits per second (Mbps).

A video object conforming to Recommendation 601 of the International Radio Consultative Committee (CCIR) calls for a bandwidth of 216 Mbps, and High-Definition Television quality images may require 880 Mbps, placing heavy demands on storage and transmission. Video transmission applications also typically impose an end-to-end delay constraint, and they require high bandwidth and high throughput. Applications such as video broadcasting, distance learning, and video conferencing enjoy growing recognition, and higher-data-rate cellular networks such as GSM/GPRS, CDMA, and UMTS, together with video-capable mobile devices, now bring video capabilities to clients.

Applications:

Video transmission applications are classified into groups according to their nature, which determines the protocol environment and the constraints. Based on delay constraints, they can be broadly classified into three categories.

Conversational Applications:

Conversational applications include two-way video transmission over Ethernet, LAN, DSL, wireless, mobile networks, and ISDN, such as video telephony, video conferencing, and distance learning. These applications are characterized by very strict end-to-end delay constraints, typically less than a few hundred milliseconds, and they implicitly need real-time encoders and decoders. Feedback-based source coding can be used in real time, but the severe delay requirements limit the allowable computational complexity, especially for the encoders. The following two categories, in contrast, tolerate higher latency.

Video download and storage applications:

For download applications, pre-encoded video is stored on a server, and reliable protocols such as HTTP or FTP are used for downloading; the application treats the encoded bit stream as a regular data file. Because encoding is done offline, high computational complexity can be spent in the encoder, making highly optimized, high-efficiency video coding possible, and the delay constraints are loose. Error resiliency is not a concern for video storage; improving compression efficiency is its ultimate goal.

Video Streaming applications:

Streaming applications fall between conversational and download applications. The video bit stream is usually pre-encoded, and playback starts after a preliminary buffering time of only a few seconds; from then on playback is real time and must continue without interruption. The video to be streamed is sent from a single server, but it may be distributed in point-to-point, multipoint, or even broadcast fashion.

4.2 Elements of video Communication Systems:

The figure below shows a block diagram of a video communication system. The video encoder, rate control, and decoder are its major components, and a video transmission system comprises five important conceptual components:

  1. The source encoder compresses the video signal into media packets; for immediate transmission or media storage, the packets are handed directly to the lower layers.
  2. The application layer takes charge of packetization and channel coding.
  3. The transport layer delivers the media from sender to receiver and performs congestion control, balancing the user's experience against fair sharing of network resources with other users.
  4. The packets are delivered through the transport network to the client.
  5. The receiver decompresses the video packets, implements the interactive user controls, and displays the video, adapting to prevailing conditions.

Video Transmission System Architecture

Video transmission systems have long been of great significance because signal coding and compression standards exist; the relevant compression standards here are H.264, MPEG-4, and MPEG-2. The main objective of compression is to reduce source redundancy. Lossless compression reduces the bit rate without loss, but video transmission applications usually require lossy compression, since almost all communication systems have restricted bandwidth. These two requirements conflict and establish the tradeoff between source and channel encoding. Compression reduces the number of bits needed to represent the video sequence by exploiting both temporal and spatial redundancy; on the other hand, to alleviate the effect of channel errors on the decoded video quality, redundancy is added back to the compressed bit stream.

The source bit rate is enforced or shaped for each video frame and for each video unit within a frame, depending on the channel state information (CSI) reported by the lower layers, such as the transport layer; the bit rate is estimated from the channel rate. Information exchange across different layers is thus central to a video streaming system. The network block represents the communication path between the sender and the receiver, which may include subnets, routers, and wireless links, and may offer several paths that support QoS. Packets may be dropped due to congestion in the network or corrupted on wireless links. At the transport/application layers, forward error correction (FEC) adds parity packets to compensate for packet losses, and if the application allows it, lost packets may be retransmitted.
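
As a toy illustration of packet-level FEC, the sketch below generates a single XOR parity packet over a block of equal-length media packets. Real systems use stronger codes (e.g., Reed-Solomon), so this shows only the simplest possible parity scheme:

```python
def xor_parity(packets):
    """One XOR parity packet over a block of equal-length packets:
    any single lost packet in the block can then be rebuilt."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet from the survivors plus parity."""
    return xor_parity(list(received) + [parity])

block = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(block)
assert recover(block[:2], p) == block[2]   # the lost third packet is rebuilt
```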

At the receiver side, the application and transport layers are responsible for de-packetizing. The video decoder then decompresses the video packets and displays the video frames in real time; ideally, the video is displayed continuously and without distortion. The video decoder usually employs error concealment techniques to alleviate the effect of packet loss, exploiting the spatio-temporal correlations in the received video to conceal the lost information.

4.3 Network Interface:

The network interface is the model between networks and applications; it consists of five important layers: the application layer, transport layer, network layer, link layer, and physical layer.

The main task at the network interface is to packetize the compressed video stream and send the packets across the network. Common issues at the network interface are channel coding (including retransmission) and monitoring of network conditions. QoS parameters guide the transmission priority used in schemes such as FEC, power adaptation, and retransmission. Research on video transmission system design should focus on this network interface.

4.4 Network protocols:

IP is the most commonly used network layer protocol. It provides a connectionless delivery service: each packet is routed separately and independently toward its destination. IP offers best-effort, variable-quality service across the network.

The figure below shows the protocol stack of an IP network, in which a wide range of communication protocols is used.

Illustration of protocol layers

TCP operates at the transport layer; it is a connection-oriented protocol and supplies reliable service. TCP provides reliability through acknowledgements (ACKs) and has its own congestion control mechanisms.

UDP is the alternative to TCP and, together with IP, is sometimes called UDP/IP. UDP is a connectionless protocol: it does not provide reliable transmission across the network, nor does it resequence packets as data arrive.

Nor does UDP retransmit lost packets. TCP, by contrast, can introduce unbounded delay because of repeated retransmissions, which is why UDP is widely used for video applications.

Because UDP itself does not constrain the application's bit rate, additional congestion control should be deployed on top of UDP when UDP/IP is used. UDP suits video applications with their strict delay constraints and QoS requirements.

UDP provides a checksum to verify whether the data have arrived correctly; only packets that pass the check are delivered to the application layer. This is appropriate for wired IP networks, where packets are lost mainly through buffer overflow, but in a wireless IP network received packets may contain bit errors, and such packets can still be useful to the application.
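
The following minimal Python sketch shows UDP in this role: the sender just stamps each datagram with a sequence number and sends it, leaving loss and reordering detection to the receiver, since UDP itself will not do either. The address and the 4-byte header layout are arbitrary choices for illustration:

```python
import socket
import struct

def send_stream(packets, addr=("127.0.0.1", 5004)):
    """Send media payloads as UDP datagrams, each prefixed with a
    4-byte big-endian sequence number for receiver-side loss detection."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq, payload in enumerate(packets):
            header = struct.pack("!I", seq)
            sock.sendto(header + payload, addr)   # fire-and-forget: no ACKs
    finally:
        sock.close()
```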

4.5 Error-Resilient Video Coding

If video source coding removed all redundancy and achieved the entropy of the source, a single error would introduce a huge amount of distortion; in other words, ideal source coding is not robust to channel errors. Designing an ideal or near-ideal source code is in any case intricate, particularly for video signals, since video sources have memory, are time varying, and their statistical distribution may not be available during encoding (mainly for live video applications). Some redundancy therefore inevitably remains after source coding. Rather than concentrating entirely on removing source redundancy, we should exploit it. As already discussed in Chapter 3, the nature of JSCC is to optimally add redundancy at both the source- and channel-coding levels; thus the redundancy remaining between source symbols after source coding should be regarded as an inherent form of channel coding [94]. Indeed, when JSCC is involved, channel coding and source coding can sometimes barely be distinguished.

Generally speaking, the added redundancy should protect the transmission against errors: it limits the distortion caused by packet losses and facilitates error detection, concealment, and recovery at the receiver. To get the most out of this error-resilience capability, error-resilient source coding must add redundancy during source coding "optimally", adapted to application requirements such as computational capacity, channel characteristics, and delay constraints.

Before reviewing the error-resilient source-coding tools, we briefly sketch the video compression standards, introduce the required vocabulary, and highlight the key technologies. Finally, we focus on optimal mode selection, the error-resilient source-coding methodology used throughout this monograph as an illustration of how to attain the best error-resilient coding.

4.6 Video Compression Standards

In this section we discuss in detail one of the most widely used video coding methods: hybrid block-based motion-compensated (HBMC) video coding. Thanks to efforts from academia and industry, several successful standards have emerged over the years, driven by major developments in digital video applications. The Moving Picture Experts Group (MPEG) family and the H.26x family are the two most important families of video compression standards. These standards address a large collection of application-oriented issues such as bit rate, picture quality, complexity, and error resilience.

H.264/AVC is the latest standard, aiming to provide state-of-the-art compression technology. It is the outcome of the Joint Video Team (JVT) formed in 2001 by the ITU-T H.26L group and the MPEG-4 committee, and it is a natural evolution of the standards previously adopted by the two groups; it is therefore also called AVC, MPEG-4 Part 10, or H.264 [105]. For an overview and comparison of the video standards, see [106]. It is important to note that all the standards specify the decoder only: they normalize the syntax for representing the encoded bit stream and define the decoding process, but leave considerable flexibility in the design of the encoder. This approach deliberately leaves latitude for optimizing the encoder for particular applications [105].

All of the above video compression standards follow the HBMC approach and share the same block diagram, shown in Figure 4.1. In each video frame, the associated luma and chroma samples of a 16x16 region are organized into block-shaped units called macroblocks (MBs).

The core of the encoder is motion-compensated prediction (MCP), as shown in Figure 4.1(a). Motion estimation (ME) is the first step of MCP; it aims to locate, in a previously reconstructed frame, the region that best matches each MB in the current frame. The offset between the MB and its prediction region is known as the motion vector, and the motion vectors of the motion field are differentially entropy coded. Motion compensation (MC) is the subsequent step of MCP: the reference frame is predicted by applying the motion field to the previously reconstructed frame. The displaced frame difference (DFD), also known as the prediction error, is obtained by subtracting the motion-compensated prediction from the current frame.

Following MCP, the DFD is processed by three chief blocks: transform, quantization, and entropy coding. The main reason for using a transform is to decorrelate the data so that the energy is represented more compactly in the transform domain, making the resulting transform coefficients much simpler to encode. Among transforms for image and video coding, the discrete cosine transform (DCT) is one of the most widely used, owing to its high transform coding gain and low computational complexity. Quantization, which introduces loss in an ordered way, is the most important source of compression gain. The quantized coefficients are then entropy encoded, e.g., using Huffman or arithmetic coding. Most commonly, the DFD is divided into 8x8 blocks and the DCT is applied to each block, with the resulting coefficients quantized. In most block-based motion-compensated (BMC) codecs, a given MB can be intraframe coded, interframe coded using motion-compensated prediction, or simply copied from the previously decoded frame.
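
To make the transform-and-quantize step concrete, here is a small numpy sketch of an orthonormal 8x8 DCT with uniform quantization. Real codecs use integer-transform variants and quantization matrices, so treat this only as the textbook form of the operation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis: C[k, m] = s_k * cos(pi * (2m + 1) * k / (2n))."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)          # DC row scaling for orthonormality
    return C * np.sqrt(2 / n)

def encode_block(block, qstep):
    """2-D DCT of one 8x8 DFD block followed by uniform quantization
    (the rounding here is the lossy step that buys compression)."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return np.round(coeffs / qstep)

def decode_block(qcoeffs, qstep):
    """Dequantize and apply the inverse DCT (C is orthonormal, so its
    transpose is its inverse)."""
    C = dct_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * qstep) @ C
```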


Figure 4.1: Hybrid block-based motion-compensated video (a) encoder, (b) decoder.

These prediction modes are denoted the Intra, Inter, and Skip modes, respectively. For each MB, quantization and coding are performed differently according to its mode.

5. Joint Source-Channel Video Transmission

Consequently, the coding parameters for each MB are typically represented by its prediction mode and its quantization parameter.

As shown in the figure, at the decoder the inverse DCT (IDCT) is applied to the quantized DCT coefficients to obtain a reconstructed version of the DFD; the reconstructed version of the current frame is then obtained by adding the reconstructed DFD to the motion-compensated prediction of the current frame, which is based on the previously reconstructed frame.

In addition to DCT-based video compression, the wavelet representation provides a multiresolution/multiscale decomposition of a signal with localization in both time and frequency. For both video and still images, one of the main advantages of wavelet coders is that they are free of blocking artifacts, and they typically offer continuous data-rate scalability as well.

The discrete wavelet transform (DWT) and subband decomposition have gained increasing popularity in image coding thanks to the substantial contributions in [107,108], JPEG2000 [109], and others over the past decades. The DWT has also recently been actively applied to video coding [110,111,112,113,114,115]. 3D wavelet or subband video codecs have received special attention because of their inbuilt feature of full scalability. The drawback of these approaches used to be their poor coding efficiency, caused by inefficient temporal filtering. A major breakthrough came from combining lifting techniques with 3D wavelet or subband coding [116,117], which greatly improved coding efficiency and led to renewed efforts toward standardizing wavelet-based scalable video coders.

5.1 Error-Resilient Source Coding

In this section we first review the video source-coding techniques that support error resilience. We then review in detail the error-resilience features defined in the H.263 and H.264/AVC standards. We do not discuss MPEG-4 separately, as the error-resilience modes it defines are similar.

5.1.1 General error-resilience techniques

As mentioned above, error resilience is achieved by adding redundancy bits at the source-coding level, which obviously reduces coding efficiency. The resulting question is how to add these redundant bits optimally so as to control the tradeoff between coding efficiency and error resilience. To address it, we need to identify the steps in source coding at which corrupted bits cause significant video quality degradation.

As discussed in Chapter 2, motion compensation introduces temporal dependencies between frames, which causes errors in one frame to propagate to future frames. In addition, predictive coding of the DC coefficients and motion vectors introduces spatial dependencies within an image. Because of motion compensation, an error in one part of a picture affects not only its neighbors in the same picture but also the subsequent frames. The solution to error propagation is to terminate the dependency chain; techniques designed for this purpose include intra-MB insertion, independent segment decoding, reference picture selection (RPS), video redundancy coding (VRC), and multiple description coding. A second approach to error resilience is to add redundancy at the entropy-coding level; examples include reversible VLCs (RVLCs), data partitioning, and resynchronization, which help confine the effect of error propagation to a smaller section of the bit stream once an error is detected. A third class of error-resilient source-coding tools, such as flexible macroblock ordering (FMO), aids recovery and concealment of error effects. Finally, scalable coding, although designed chiefly for communication, computation, and display scalability in heterogeneous environments, can afford error resilience by applying unequal error protection (UEP) through prioritized QoS transmission. We next describe these error-resilience techniques in a few more details.

Data partitioning:

This functionality is appropriate for wireless channels where the bit error rate is comparatively high. In traditional packetization, one MB of data, including the differentially encoded motion vectors and DCT coefficients, is packetized together, followed by the data of the next MB. In the data partitioning mode, by contrast, the data of the same type from all MBs in one packet are grouped together into a logical unit, with an additional synchronization marker inserted between different logical units. This mode provides a higher level of error resiliency by enabling finer resynchronization within packets: when an error is detected, synchronization at the decoder can be reestablished at the next secondary marker, so only the logical unit in which the error occurs is discarded. In traditional packetization, an error causes the decoder to discard the data of all MBs in the packet that follow the detected error. Figure 4.2 shows one typical data-partitioning syntax defined in MPEG-4. The H.264/AVC syntax allows each slice to be divided into up to three different partitions, and H.263++ Annex V defines this functionality as well, with a different syntax. The logical units in a single packet are typically of different importance; for instance, the packet header is usually the most significant unit, followed by the motion vectors and then the DCT coefficients. Data partitioning is therefore advantageous when combined with error concealment.


Figure 4.2: Packet structure syntax for data partitioning in MPEG-4.

RVLC:

Reversible VLCs enable decoding in both the forward and backward directions when errors are detected. A large amount of data can be salvaged this way, since only the portion between the first MB in which an error is detected in the forward direction and the first MB in which one is detected in the backward direction is discarded. This mode enhances error resiliency at the cost of some coding efficiency by using a symmetric code table. RVLCs are defined in H.263++ Annex V [120], where packet headers and motion vectors can be encoded using RVLCs while the DCT coefficients are still coded with the baseline table, since corruption of DCT information usually has less impact on video quality than corruption of packet headers and motion information.

Resynchronization:

As its name implies, this mode aims at resynchronizing the operations of the encoder and decoder when errors are detected in the bit stream; it is usually combined with data partitioning. MPEG-4 defines a number of approaches to resynchronization. Among them, the video packet approach is the most significant; it is very similar in principle to the slice structured mode in H.263+. Another is the fixed-interval synchronization approach, which requires video packets to start only at allowable, fixed intervals in the bit stream.

Scalable Coding:

Layered (scalable) video coding produces a hierarchy of bit streams in which different parts of the encoded stream make unequal contributions to the overall quality. Scalable coding has inherent error-resilience benefits, particularly if the layered property can be exploited in transmission; for example, the available bandwidth can be partitioned to provide UEP for layers of different importance. This technique is normally referred to as layered coding with transport prioritization [121].

Multiple descriptions coding (MDC):

MDC refers to a form of compression in which a signal is coded into a number of separate bit streams, each of which is referred to as a description. Two important characteristics define MDC. First, each description can be decoded independently, without relying on any other, to give a usable reconstruction of the original signal. Second, combining additional correctly received descriptions improves the quality of the decoded signal. Prioritized transmission is therefore not essential for MDC. It is worth noting that the descriptions are independent of one another and are typically given approximately equal importance.

Video redundancy coding (VRC):

VRC (video redundancy coding) supports error resiliency by limiting the temporal dependencies between frames introduced by motion compensation. The video sequence is divided into two or more subsequences called "threads". The threads are encoded independently of one another, and each frame is assigned to one of the threads in round-robin fashion. At regular intervals, all threads converge into a so-called sync frame, which serves as the synchronization point from which new threads begin. If a thread is damaged, the frames can still be generated from the intact thread, so this method usually outperforms I-frame insertion without quality degradation.
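
A small sketch of the round-robin thread assignment with periodic sync frames; the thread count and sync interval are arbitrary illustrative parameters, not values prescribed by the standard:

```python
def assign_threads(num_frames, num_threads=2, sync_interval=10):
    """Assign frame indices to VRC threads round-robin; every
    `sync_interval`-th frame is a sync frame shared by all threads,
    so a damaged thread can restart from it."""
    threads = {t: [] for t in range(num_threads)}
    for i in range(num_frames):
        if i % sync_interval == 0:
            for t in threads:              # sync frame joins every thread
                threads[t].append(i)
        else:
            threads[i % num_threads].append(i)
    return threads

# e.g. assign_threads(12) -> {0: [0, 2, 4, 6, 8, 10], 1: [0, 1, 3, 5, 7, 9, 10, 11]}
```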

In the next section we briefly discuss the error-resilience features defined in H.263 and H.264/AVC. The common techniques described above are not repeated, even where they are covered by the two standards.

5.2 Error-resilience features in H.263+/H.263++/H.264

H.263+, H.263++, and H.264/AVC define numerous features aimed at supporting error resilience.

Slice Structure:

This mode, defined in H.263+ Annex K, replaces the GOB concept of baseline H.263. Each slice in a picture consists of a group of MBs, which can be arranged either in scanning order or as a rectangle. This mode provides error resilience in several ways. First, slices are independently decodable without using information from other slices (except for the information in the picture header), which helps limit the region affected by errors and reduces error propagation. Second, the slice header itself serves as a resynchronization marker, further reducing the loss probability for each MB. Third, slice sizes are highly flexible, and slices can be transmitted and received in any order relative to one another, which helps reduce latency in lossy environments.

Independent segment decoding:

The independent segment decoding mode is defined in H.263+ Annex R. Picture segment boundaries (a segment being a slice, a GOB, or an integer number of GOBs) are enforced by disallowing dependencies across segments. This mode limits error propagation between well-defined spatial parts of a picture, thus enhancing error-resilience capability.

Reference picture selection (RPS):

The RPS mode, defined in H.263+ Annex N, allows the encoder to select an earlier picture, rather than the immediately previous picture, as the reference when encoding the current picture. The RPS mode can also be applied to individual segments rather than whole pictures. The VRC technique discussed in the previous section is one method that can be realized with this mode.

The error-resilience capability can be greatly enhanced by this mode when a feedback channel is available. For example, if the sender is informed by the receiver through a NACK that a frame was lost or corrupted during transmission, the encoder can choose not to use that picture for future prediction and instead select an uncorrupted picture as the reference.

Flexible macro blocks ordering (FMO):

In the H.264/AVC standard, each slice group is a set of MBs defined by a macroblock-to-slice-group map, which specifies the slice group to which each macroblock belongs. The MBs in one slice group can follow any scanning pattern: an interleaved mapping is one example, the group can consist of one or more foreground and background slice groups, and the mapping can even be a checkerboard pattern. FMO thus provides a very flexible tool for grouping MBs from different locations into a single slice.

Note that, besides the features described above, the forward error correction mode (Annex H) is also intended to support error resilience [122].

5.3 Optimal Mode Selection

As described above, each video standard defines a good number of modes to provide error resilience. Different modes are designed to work under different conditions, such as applications with different bit-rate requirements or infrastructures with different channel error rates and error types. The question that naturally arises from this discussion is how to choose among those modes optimally in practice.

In this section we limit our discussion to optimal selection of the prediction mode (Inter, Intra, or Skip) and the quantization step size for each MB or packet, without expanding further on the broader topic. The Skip mode can be regarded as a special Inter mode in which no residual error or motion information is coded.

Mode selection algorithms have conventionally focused on rate-distortion (RD) optimized video coding for error-free environments and on single-frame BMC coding (SF-BMC). A more recent trend is mode selection using multiple-frame BMC (MF-BMC), in which the reference frame is chosen from a group of previous frames rather than the single previous frame as in SF-BMC. MF-BMC methods capitalize on the correlation among multiple frames to improve compression efficiency and strengthen error resilience, at the price of increased computation and larger buffers at both encoder and decoder.

Our aim is to achieve the best video delivery quality with a given bit budget. Mathematically, this means selecting the modes that minimize the expected distortion D subject to the bit budget R, where D is calculated taking the channel errors into account. Intuitively, for a given design, the source distortion relates directly to coding efficiency, while the channel distortion relates strongly to error resilience.
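
Written out, this is the standard constrained formulation, usually solved per MB through Lagrangian relaxation; this is a textbook restatement, not a formulation quoted from the source:

```latex
% Choose the mode m_i of each MB i to minimize expected distortion
% under a total rate budget; lambda trades distortion against rate.
\min_{\{m_i\}} \; \sum_i \mathbb{E}[D_i(m_i)]
\quad \text{s.t.} \quad \sum_i R_i(m_i) \le R_{\mathrm{budget}},
\qquad
m_i^{*} = \arg\min_{m_i} \; \mathbb{E}[D_i(m_i)] + \lambda \, R_i(m_i)
```

Here the expected distortion includes both source distortion (tied to coding efficiency) and channel distortion (tied to error resilience), and lambda is adjusted until the total rate meets the budget.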

Generally speaking, fewer bits are needed to encode the DFD than the corresponding image region itself, since the DFD has lower energy and entropy. Inter coding therefore has higher compression efficiency and results in lower source-coding distortion than Intra coding for the same bit budget. Regarding the role of the quantizer in the mode-selection problem: roughly speaking, the smaller the quantization step size, the lower the source distortion but the larger the channel distortion it may cause (for the same level of channel protection).

6. Channel Modeling and Channel Coding

As discussed earlier, the nature of JSCC is to optimally add redundancy both at the source-coding level, known as error-resilient source coding, and at the channel-coding level, known as channel coding. Having discussed the former, in this chapter we study the latter: we first look at channel models and then at channel-coding techniques, focusing on the models and methods used for video transmission applications.

6.1 Channel Models

The most important characteristic of these channels is their time-varying nature. Developing mathematical models that accurately capture the properties of a transmission channel is an exceptionally demanding but exceptionally significant subject. Its importance stems from the fact that, for good video delivery performance, the end system must adapt to changing channel conditions, and the performance of JSCC in general relies greatly on the precision of the estimated channel state information.

At the application layer, the QoS of video applications is usually measured objectively by the end-to-end distortion, which, as discussed earlier, is calculated from the probability of source packet loss and from delay. Therefore, the two fundamental properties of the communication channel as seen at the application layer are the probability of packet loss and the delay allowed for each packet to reach the destination.

For video transmission over a network spanning several protocol layers, the channel can be modeled at different layers; however, the QoS parameters at the lower layers do not always directly reflect the QoS requirements of the application layer. In wired networks, channel errors normally appear as packet loss and truncation, and bit errors are not a problem: for wired channels such as the Internet, the channel is modeled at the network layer (i.e., the IP layer), since packets with errors are discarded at the link layer and are consequently not forwarded to the network layer. For wireless channels, by contrast, the common type of error is the bit error, in addition to packet loss and packet truncation.

Hence, for wireless networks, mechanisms that map the QoS parameters at the lower layers to those at the application layer are particularly needed in order to coordinate the effective adaptation of QoS parameters at the video application layer.

6.1.1 Internet:

Packet loss and truncation are the representative forms of channel errors in the Internet; in addition, queuing delays in the network can be an important delay component. As a result, the Internet can be modeled as a time-invariant packet-erasure channel with random delays. In real-time video applications, a packet is typically considered lost if it does not arrive at the decoder before its intended playback time. The packet loss probability therefore has two components: the probability of packet loss in the network and the probability that the packet experiences excessive delay. The overall loss probability for packet k, combining these two factors, is given by

ρk = εk + (1 − εk) · νk,

where εk is the probability of packet loss in the network and νk is the probability of packet loss due to excessive delay. We have νk = Pr{ΔTn(k) > τk}, where ΔTn(k) is the network delay for packet k and τk is the maximum acceptable network delay for this packet. This is shown in Fig. 5.1, where the probability density function (pdf) of the network delay is plotted taking packet loss into account.

Packet losses in the network can be modeled in a number of ways, for example as a Bernoulli process or as a two-state or kth-order Markov chain. Fig. 5.2 shows an example of a two-state Markov model with channel states h0 and h1. The channel state transition matrix is

A = [ 1 − p      p   ]
    [   q      1 − q ],

where p is the probability of leaving state h0 and q the probability of leaving state h1; the stationary state probabilities are therefore q/(p + q) for h0 and p/(p + q) for h1.

Figure 5.1: Probability density function of the network delay, taking packet loss into account.

Figure 5.2: A two-state Markov model.

The network delay may also vary randomly; it tends to follow a self-similar law whose underlying distributions are heavy-tailed, rather than a Poisson distribution. The shifted Gamma distribution is one of the comparatively simple models used to characterize packet delay in the network.
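
A short simulation sketch of the two-state Markov loss model described above; the state-dependent loss probabilities in the signature are illustrative values, not parameters from the text:

```python
import random

def two_state_markov_losses(n, p, q, loss_good=0.0, loss_bad=0.5, seed=0):
    """Simulate n packet transmissions over a two-state Markov channel:
    p = Pr(good -> bad), q = Pr(bad -> good), each state with its own
    loss probability. The average bad-state burst length is 1/q.
    Returns a list of booleans (True = packet lost)."""
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n):
        # State transition first, then a state-dependent loss draw.
        if bad:
            if rng.random() < q:
                bad = False
        elif rng.random() < p:
            bad = True
        losses.append(rng.random() < (loss_bad if bad else loss_good))
    return losses
```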

6.1.2 Wireless Channel

Compared to their wire-line counterparts, wireless channels exhibit higher bit error rates, typically have lower bandwidth, and experience multipath fading and shadowing effects. We do not address the details of modeling wireless channels at the physical layer in this subsection; instead, we focus on how the physical-layer channel state information can be translated into QoS parameters such as delay and packet loss at the link layer. The link-layer packet loss probability, which depends on the packetization schemes used at the transport or link layer, in turn determines the application-layer packet loss probability (the QoS parameter needed to calculate video distortion).

For IP-based wireless networks, as with the Internet, the wireless channel at the IP level can be treated as a packet erasure channel as "seen" by the application. In this setting, the probability of packet loss can be modeled as a function of the transmission power used in sending each packet and of the channel state information (CSI). For a fixed transmission rate, increasing the transmission power increases the received SNR and results in a smaller probability of packet loss. This relationship can be modeled analytically or determined empirically.

An example of the former is an analytical model based on the notion of outage capacity. In this method, a packet is lost whenever the fading realization results in the channel having a capacity less than the transmission rate, i.e., ρk = Pr(C(Hk, Pk) < R), where C is the Shannon capacity, Hk is the random variable representing the channel's fading, Pk is the transmission power used for packet k, and R is the transmission rate (in source bits per second). The discussion above assumes independent bit errors based on ideal interleaving; however, interleaving introduces both delay and complexity, and perfect interleaving is not achievable in a practical system, especially for real-time applications. In addition, the radio-channel BER appearing in (5.4) and (5.5) is normally the average BER, a long-term parameter. Markov models are widely used to describe the bursty nature of channel errors; a classical example is the two-state Gilbert-Elliott model, in which the channel has one good state and one bad state with different associated BERs. Figure 5.2 illustrates this, with the bad state represented by h0 and the good state by h1; the average burst length is 1/q.
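
For the outage-capacity model, the Rayleigh flat-fading case has a well-known closed form, sketched below; the numbers in the usage line are arbitrary:

```python
import math

def rayleigh_outage(snr_avg, rate_bps_hz):
    """Outage probability for a Rayleigh flat-fading link: a packet is
    lost whenever the instantaneous capacity log2(1 + |h|^2 * SNR_avg)
    falls below the rate R (bits/s/Hz). With |h|^2 exponentially
    distributed (unit mean), Pr(outage) = 1 - exp(-(2^R - 1) / SNR_avg)."""
    threshold = (2.0 ** rate_bps_hz - 1.0) / snr_avg
    return 1.0 - math.exp(-threshold)

# e.g. at 10 dB average SNR (linear 10.0) and 2 bits/s/Hz:
p_out = rayleigh_outage(snr_avg=10.0, rate_bps_hz=2.0)   # about 0.26
```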

The most accurate way to characterize a fading channel is a finite-state Markov channel (FSMC) model. For instance, a Rayleigh flat-fading channel in a packet transmission system can be modeled as an FSMC in which each state is characterized by a different bit error rate or receiver SNR. The average duration of a state is roughly constant and depends on the speed of the channel fading; the number of states is chosen to ensure that each received packet lies entirely within one state and that the following packet is either in the same state or in one of the two neighboring states.

The discussion above shows how the physical-layer channel model can be used to derive the probability of packet loss at the link layer. Queuing delay is the other QoS parameter that must be considered, along with packet loss, for the transmission of real-time video over wireless networks. Physical-layer channel models do not explicitly reflect how link-layer QoS parameters such as delay and bit rate are derived from physical-layer channel parameters; obtaining them requires analyzing the connection's queuing behavior, which is difficult. It is therefore useful to have a link-layer channel model that directly characterizes the link-layer QoS parameters, particularly the queuing-delay behavior. The effective capacity (EC) model characterizes a wireless link by a pair of functions {γ, θ}: γ = Pr{D(t) > 0}, the probability that the buffer is nonempty, and θ, the connection's QoS exponent. γ reflects the marginal cumulative distribution function (CDF) of the underlying wireless channel, while θ corresponds to the Doppler spectrum of the underlying physical-layer channel. Both functions can be estimated from the physical-layer channel using a simple and efficient algorithm.

7. System design:

8. Screen shots:

9. Conclusion:

According to Shannon's separation theorem, source coding and channel coding can be designed separately and still achieve overall optimality. The main aim of source coding is to remove redundancy from the source and approach its entropy, whereas the main aim of channel coding is to achieve error-free transmission by adding redundancy. Error-free transmission of the source can be achieved as long as the source rate is less than the channel capacity; otherwise, rate-distortion theory gives theoretical bounds on the lowest achievable distortion. For practical systems such as video communications, however, this result hinges on ideal channel coding, which is not realistic.

