Lower Energy Consumption

ABSTRACT This research demonstrates that optimization for lower energy consumption leads to cross-layer design from the two ends of the protocol stack, namely the physical layer and the application layer. Optimizing for quality of service requirements demands integration of multiple layers of the OSI (Open Systems Interconnect) model. Beginning from the physical layer, the probability of successful radio packet delivery is first explored. This probability is traded off against network energy consumption to let CTP-SN (Cooperative Transmission Protocol for Sensor Networks) demonstrate that cooperative radio transmission among sensor nodes reduces the outage probability exponentially as the node density increases. On the other hand, in MSSN (Sensor Networks with Mobile Sinks) the probability of successful information retrieval at the mobile sink is explored. Optimal and sub-optimal transmission scheduling algorithms are then studied to exploit the trade-off between energy consumption and the probability of successful radio packet delivery. In both cases, the optimizations lead to a combined link layer and physical layer design. At the application layer, the Low Energy Self-Organizing Protocol (LESOP) is studied for target tracking in dense wireless sensor networks. The application quality of service (QoS) under study is the target tracking error against network energy consumption, and a QoS knob is utilized for controlling the tradeoff between the two. Direct interactions are established between the top application layer and the bottom MAC (Medium Access Control) and physical layers. Moreover, unlike the classical OSI paradigm of communication networks, the transport and network layers are excluded in LESOP in order to simplify the protocol stack. The Embedded Wireless Interconnect (EWI), which has been proposed to replace the existing OSI paradigm as a potential universal architecture platform, is an effort towards standardization. EWI is built on two layers, the wireless link layer and the system layer. A brief study of EWI is also carried out.
  • CHAPTER: INTRODUCTION
  • GENERAL
A network is a series of points or nodes interconnected by communication paths. Networks can interconnect with other networks and contain sub-networks. There are different types of networks. This chapter briefly discusses ad hoc networks, mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). A WSN is a wireless network consisting of spatially distributed autonomous devices that use sensors to cooperatively monitor physical or environmental conditions at different locations. Sensor node deployment, power consumption and topological changes are some of the differences between a WSN and a MANET. There are various applications of wireless sensor networks, among which is target tracking.
  • ADHOC NETWORK
An ad-hoc (or "spontaneous") network is a local area network or other small network, especially one with wireless or temporary plug-in connections, in which some of the network devices are part of the network only for the duration of a communications session or, in the case of mobile or portable devices, only while in close proximity to the rest of the network. In Latin, ad hoc literally means "for this", in the sense of "for this purpose only", and thus usually temporary. The disadvantages of ad-hoc networks are:
  • An ad-hoc network tends to feature a small group of devices all in very close proximity to each other.
  • Performance suffers as the number of devices grows, and a large ad-hoc network quickly becomes difficult to manage.
  • Ad-hoc networks cannot bridge to wired LANs or to the Internet without installing a special-purpose gateway.
  • MOBILE AD HOC NETWORK
“A MANET is an autonomous collection of mobile users that communicate over relatively bandwidth-constrained wireless links”.[1] As the nodes are mobile, the network topology may vary rapidly and unpredictably over time. The network is decentralized: the nodes themselves must carry out all network activity, including delivering messages and discovering the topology (i.e., routing functionality is integrated into the mobile nodes). Applications of MANETs range from small, static networks that are constrained by power sources to large-scale, mobile, highly dynamic networks. The main disadvantages of MANETs are listed below:
  • Regardless of the application, efficient distributed algorithms are needed by MANETs to determine link scheduling, routing and network organization.
  • In a static network, the shortest path from a source to a destination based on a given cost function is usually the optimal route; this idea does not extend easily to MANETs.
  • The design of network protocols for these networks is a complex issue.
  • Factors such as propagation path loss, fading, variable wireless link quality, multi-user interference, topological changes and power expenditure become relevant design issues.
  • WIRELESS SENSOR NETWORKS
Although the term wireless network may strictly refer to any kind of network that is wireless, it is most commonly used to refer to a telecommunications network, such as a computer network, whose interconnections between nodes are implemented without the use of wires. A sensor network is a computer network of many spatially distributed devices using sensors to monitor conditions at different locations, such as temperature, sound, vibration, pressure, motion or pollutants. A Wireless Sensor Network (WSN) is a wireless network consisting of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions at different locations. WSNs differ in many fundamental ways from MANETs. Among the differences that may impact the network and protocol design are:
  • In a sensor network the number of sensor nodes can be several orders of magnitude higher than the nodes in an ad hoc network.
  • The topology of a sensor network changes very frequently.
  • Sensor nodes are prone to failures.
  • Sensor nodes are densely deployed.
  • Sensor nodes mainly use a broadcast communication paradigm, whereas most ad hoc networks are based on point-to-point communication.
  • Because of the large number of sensor nodes and the resulting overhead, sensor nodes may not have global identification (ID).
  • Sensor nodes are limited in memory, computational capacity and power.
  • Constraints of WSN
The following are the constraints of WSNs, which have to be considered while developing an application for wireless sensor networks.
  • Fault Tolerance: Individual nodes are prone to unexpected failure with a much higher probability than other types of networks. The network should sustain information dissemination in spite of failures.
  • Scalability: The number of nodes may be in the order of hundreds or thousands. Protocols should be able to scale to such numbers and take advantage of the high density of such networks.
  • Production Costs: The cost of a single node must be low, much less than $1.
  • Hardware Constraints: A sensor node is comprised of many subunits (sensing, processing, communication, power, location finding system, power scavenging and mobilizer). All these units combined together must consume extremely low power and be contained within an extremely small volume.
  • Sensor Network Topology: Must be maintained even with very high node densities.
  • Environment: Nodes are operating in inaccessible locations either because of hostile environment or because they are embedded in a structure.
  • Transmission Media: RF, Infrared and Optical.
  • Power Consumption: Power conservation and power management are primary design factors.
  • Challenges of WSN
In spite of the diverse applications, sensor networks pose a number of unique technical challenges due to the following factors:
  • Adhoc Deployment: Most sensor nodes are deployed in regions which have no infrastructure at all. A typical way of deploying them in a forest would be tossing the sensor nodes from an aeroplane. In such a situation, it is up to the nodes to identify their connectivity and distribution.
  • Unattended Operation: In most cases, once deployed, sensor networks have no human intervention. Hence the nodes themselves are responsible for reconfiguration in case of any changes.
  • Untethered: The sensor nodes are not connected to any energy source. There is only a finite source of energy, which must be used optimally for processing and communication. An interesting fact is that communication dominates processing in energy consumption. Thus, in order to make optimal use of energy, communication should be minimized as much as possible.
  • Dynamic Changes: It is required that a sensor network system be adaptable to changing connectivity (for e.g., due to addition of more nodes, failure of nodes etc.) as well as changing environmental stimuli. Thus, unlike traditional networks, where the focus is on maximizing channel throughput or minimizing node deployment, the major consideration in a sensor network is to extend the system lifetime as well as the system robustness.
Since many of the constraints of WSNs arise from the use of sensor nodes, it is necessary to understand sensors and the sensor network architecture.
  • Sensor Network Architecture
In a sensor field the sensor nodes are usually scattered as shown in Figure 1-1. Each scattered sensor node has the capability to collect data and route it back to the sink. A sensor is a type of transducer: a device that responds to a stimulus, such as light, pressure or heat, and generates a signal that can be interpreted or measured. A sensor node, also called a mote, is capable of gathering sensory information, communicating with other connected nodes and performing some processing in the network.
  • Components
A sensor node is made up of four basic components: a sensing unit, a processing unit, a transceiver unit and a power unit, as shown in Figure 1-2. It may also have additional application-dependent components such as a location finding system, a power generator and a mobilizer.
Figure 1-1 Sensor nodes scattered in a sensor field
Sensing units are usually composed of two subunits: sensors and analog-to-digital converters (ADCs). The analog signals produced by the sensors based on the observed phenomenon are converted to digital signals by the ADC and then fed into the processing unit. The processing unit, which is generally associated with a small storage unit, manages the procedures that make the sensor node collaborate with the other nodes to carry out the assigned sensing tasks. A transceiver unit connects the node to the network. One of the most important components of a sensor node is the power unit; power units may be supported by power scavenging units such as solar cells. There are also other subunits that are application-dependent. Most sensor network routing techniques and sensing tasks require knowledge of location with high accuracy, so it is common for a sensor node to have a location finding system. A mobilizer may sometimes be needed to move sensor nodes when this is required to carry out the assigned tasks.
Figure 1-2 Components of sensor node
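To make the division into subunits concrete, the following plain C++ sketch models the four basic units and the optional location finding subunit as a data structure. The type and field names are illustrative assumptions and are not taken from the simulation code in the appendix.

// Illustrative sketch only: all names are hypothetical.
#include <cstdint>

struct SensingUnit    { double gain; double noiseVariance; };   // sensor front end plus ADC
struct ProcessingUnit { uint32_t clockHz; uint32_t storageBytes; };
struct Transceiver    { double txPowerW; double rxPowerW; double rangeM; };
struct PowerUnit      { double remainingJ; bool hasScavenging; }; // e.g. backed by a solar cell
struct LocationFinder { double x, y; };                           // application-dependent subunit

struct SensorNode {
    SensingUnit    sensing;
    ProcessingUnit processing;
    Transceiver    radio;
    PowerUnit      power;
    LocationFinder location;   // LESOP uses node coordinates, so it is modelled here
};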
  • Sensor Network Protocol Stack
The protocol stack used by the sink and the sensor nodes is shown in Figure 1-3. This protocol stack combines power and routing awareness, communicates power efficiently through the wireless medium, integrates data with networking protocols and promotes cooperative efforts of sensor nodes. It consists of the physical layer, data link layer, network layer, transport layer and application layer, together with the power management, mobility management and task management planes.
Figure 1-3 Sensor network protocol stack
The physical layer addresses the needs of simple but robust modulation, transmission and receiving techniques. Since the environment is noisy and sensor nodes can be mobile, the medium access control (MAC) protocol must be power-aware and able to minimize collisions with neighbors' broadcasts. The network layer takes care of routing the data supplied by the transport layer. The transport layer helps to maintain the flow of data if the sensor network application requires it. Depending on the sensing tasks, different types of application software can be built and used on the application layer. In addition, the power, mobility and task management planes monitor the power, movement and task distribution among the sensor nodes. These planes help the sensor nodes coordinate the sensing task and lower overall power consumption.
  • Wireless Sensor Networks Application
Typical applications of WSNs include monitoring, tracking and controlling. Specific applications include habitat monitoring, target tracking, nuclear reactor control, fire detection, traffic monitoring, environmental monitoring, acoustic detection, seismic detection, military surveillance, inventory tracking, medical and health monitoring, smart spaces, process monitoring and structural health monitoring. Among these applications we mainly study target tracking in WSNs.
  • Target Tracking
Target Tracking is estimating the location of the target and then proceeding to find the path or track of the target. Sensor nodes in WSN monitor the movement of the target.
  • SUMMARY
Thus we see that, although there are many kinds of networks such as ad hoc networks and MANETs, tracking accuracy is higher in WSNs. So, with the help of sensor nodes, target tracking can be done accurately using a WSN. There are many different protocols and methods to perform target tracking, which are discussed in the next chapter. The LESOP protocol design is discussed in the third chapter. The implementation of the protocol and the end results are discussed in the fourth chapter. The conclusion and future enhancements are discussed in the fifth chapter.
  • CHAPTER: LITERATURE REVIEW
  • INTRODUCTION
Target tracking is one of the applications of WSNs. Many different methods, protocols and algorithms have been adopted to detect and track a target. This chapter briefly discusses the different algorithms, methods and protocols that have been used to perform target tracking. They include distributed online localization, cooperative tracking, collaborative target tracking, the binary sensor model, building and managing aggregates, lightweight sensing and communication protocols, the non-linear measurement model, distributed state representation, optimizing tree reconfiguration, energy-quality tradeoffs, an entropy-based sensor selection heuristic, and an activity-based mobility model with trajectory prediction.
  • DISTRIBUTED ONLINE LOCALIZATION
A distributed online algorithm is used here. The sensor nodes exploit geometric constraints induced by radio connectivity and sensing in order to reduce the uncertainty of their locations. Distributed online localization uses online observations of a moving target to improve the estimates of both the sensor node positions and the moving target's position. Nodes that act as reference nodes are pre-positioned in the network, and the target starts at an unknown position. Sensor nodes normally communicate only with adjacent nodes. Network size, ratio of reference nodes, radio range and sensing range are taken into consideration when this algorithm is implemented. It has been verified that this algorithm can track targets with increasing accuracy over time by better estimating node and target positions through sensor observations. When the ratio of reference nodes is high, this method can be applied to enhance position estimation accuracy.
  • COOPERATIVE TRACKING
A binary detection sensor network is used here, in which the sensor nodes can only establish whether an object is within their maximum detection range. Information exchange between adjacent nodes refines location estimates, improving tracking precision. The algorithm has two levels. In level one, local target position estimation is carried out. In the initial phase the target position is assumed to be equal to the position of the detecting sensor node; the estimate is then recalculated as a weighted average of sensor locations whenever new information becomes available, with closer nodes receiving more weight. These estimates are aggregated to obtain the path of the object. In level two, a linear approximation of the path is calculated by running a line-fitting algorithm on the positions obtained in the previous level, as sketched below. Simulations confirm that this approach is comparable to others, such as those based on distance measurements or angle-of-arrival measurements. Because this approach uses binary-detection sensors, it ends up being simpler and cheaper.
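The following C++ sketch illustrates the two levels described above: a weighted-average position estimate (level one) and an ordinary least-squares line fit over the resulting estimates (level two). The 1/distance weighting and all names are assumptions made for illustration and are not taken from the cited work.

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Level 1: estimate the target position as a weighted average of the
// detecting node positions, giving closer nodes more weight.
Point estimatePosition(const std::vector<Point>& detectors,
                       const std::vector<double>& distances) {
    double wsum = 0, x = 0, y = 0;
    for (size_t i = 0; i < detectors.size(); ++i) {
        double w = 1.0 / (distances[i] + 1e-6);   // closer node => larger weight
        x += w * detectors[i].x;
        y += w * detectors[i].y;
        wsum += w;
    }
    return { x / wsum, y / wsum };
}

// Level 2: fit a line y = a*x + b through the sequence of estimates
// (ordinary least squares) to approximate the target's path.
void fitLine(const std::vector<Point>& est, double& a, double& b) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0, n = static_cast<double>(est.size());
    for (const Point& p : est) { sx += p.x; sy += p.y; sxx += p.x * p.x; sxy += p.x * p.y; }
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    b = (sy - a * sx) / n;
}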
  • COLLABORATIVE TARGET TRACKING
This method derives the effect of wireless network impairments on the performance of a target tracking algorithm, applied to collaborative target tracking with acoustic sensors. Target tracking with acoustic sensors demands multiple range or angle estimates in order to carry out location estimation: a minimum of three range measurements is needed for a triangulation to precisely locate the target position. However, simultaneous range measurements are not possible as the target is mobile. Besides, maintaining measurements over large networks is tricky; some measurements may be dropped or delayed, and accuracy suffers as a result. A SCAAT Kalman filter is used to cope with global time synchronization. The Kalman filter maintains a time-stamped target state and updates the state whenever a single range or line-of-bearing measurement is received. A de-jitter buffer with a time-out is used to store the received measurements, which saves a large number of measurements from being discarded unnecessarily. Two types of nodes are found in this network: the first type estimates a target's range or angle, and the second type, called fusion nodes, fuses the individual measurements. Results are first reported for a network without any packet loss or reordering. In more practical networks that suffer packet loss and reordering, the location error decreases as the buffering latency increases; the de-jitter buffer therefore helps recover out-of-order packets.
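A minimal C++ sketch of the de-jitter buffer idea alone (not of the SCAAT Kalman filter) is given below: measurements are held until a time-out expires and released in timestamp order, so the filter never sees out-of-order updates, at the price of the time-out in latency. The class and field names, and the use of a fixed time-out, are assumptions for illustration.

#include <map>
#include <optional>

struct RangeMeasurement { double timestamp; int sensorId; double range; };

class DejitterBuffer {
    std::multimap<double, RangeMeasurement> buf_;  // keyed by measurement timestamp
    double timeout_;
public:
    explicit DejitterBuffer(double timeoutSec) : timeout_(timeoutSec) {}

    void push(const RangeMeasurement& m) { buf_.emplace(m.timestamp, m); }

    // Release the oldest measurement once it is older than 'now - timeout';
    // everything released this way is already in timestamp order.
    std::optional<RangeMeasurement> pop(double now) {
        if (!buf_.empty() && buf_.begin()->first <= now - timeout_) {
            RangeMeasurement m = buf_.begin()->second;
            buf_.erase(buf_.begin());
            return m;
        }
        return std::nullopt;
    }
};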
  • BINARY SENSOR MODEL
This model works on the assumption that each sensor in the network detects one bit of information and broadcasts it to the base station. This bit indicates whether an object is travelling away from or towards the sensor. While this predicts the direction accurately, it does not yield the exact location. To recover the location, a particle-filtering style algorithm is used for target tracking, and besides the one bit of direction information an additional bit is gathered from proximity sensors to point out the location of the object. The tracking algorithm works on three assumptions: sensors in a region can detect whether the target is travelling towards or away from them; the bit of information from every sensor is available in a central repository for processing; and an additional sensor supplying proximity information as a single bit is present. The error in trajectory prediction is rather low, and the broadcast of a single bit over the whole network is easily feasible. The base station was also able to respond to sensor values broadcast at higher rates. This solution is very practical for simple tracking applications, as the simplified sketch below suggests.
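The following is a deliberately simplified, one-dimensional C++ sketch of the particle-filtering idea described above: particles whose predicted motion disagrees with the reported towards/away bits are down-weighted. The one-dimensional state, the noise level and the down-weighting factor are simplifying assumptions, not the algorithm of the cited work.

#include <random>
#include <vector>

struct Particle { double pos, vel, weight; };

void step(std::vector<Particle>& particles,
          const std::vector<double>& sensorPos,
          const std::vector<int>& towardsBit,   // 1 = sensor reports "moving towards me"
          std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, 0.1);
    double wsum = 0;
    for (Particle& p : particles) {
        p.pos += p.vel;                           // motion prediction
        p.vel += noise(rng);                      // random velocity perturbation
        for (size_t s = 0; s < sensorPos.size(); ++s) {
            // The particle predicts "towards" if its motion reduces its distance to the sensor.
            bool predTowards = (p.pos - sensorPos[s]) * p.vel < 0;
            if (predTowards != (towardsBit[s] == 1))
                p.weight *= 0.5;                  // penalize disagreement with the observed bit
        }
        wsum += p.weight;
    }
    if (wsum > 0)
        for (Particle& p : particles) p.weight /= wsum;  // normalize (resampling omitted)
}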
  • BUILDING AND MANAGING AGGREGATES
This method introduces a decentralized protocol that constructs sensor aggregates in order to identify and count distinct targets in the field. Sensor aggregates are simply sets of nodes that satisfy a grouping predicate, whose parameters are the task and resource requirements. These aggregates are used for performing a task in a collaborative manner. The DAM (Distributed Aggregate Management) protocol was introduced to support a representative collaborative signal processing task, and it forms many sensor aggregates in the sensor field. The following assumptions are made about the network: targets are single point sources of signal; sensors can exchange information wirelessly within a fixed radius that is larger than the mean inter-node distance; sensor clocks are synchronized to a global clock; and battery power limits the network bandwidth. Over a range of parameter variations this protocol has been observed to be effective in simulations.
  • LIGHTWEIGHT SENSING AND COMMUNICATION PROTOCOLS
There are several lightweight sensing and communication protocols, such as Expectation-Maximization-Like Activity Monitoring (EMLAM), Distributed Aggregate Management (DAM) and Energy-Based Activity Monitoring (EBAM). The DAM protocol was developed for target monitoring using low-cost amplitude sensing, and its purpose is to elect local cluster leaders. The sensors in the network are classified into clusters on the basis of their signal strength, with each cluster having only one peak. Every peak represents either a single target or multiple targets that are close together. Each peak is identified by comparing the readings of neighboring sensors, which exchange information with their one-hop neighbors. The cluster leader is elected in this first phase, and each sensor node joins the cluster defined by the highest peak that can reach that sensor through a path. Each leader may correspond to one or more targets in a given period, and DAM alone cannot differentiate between multiple targets within a single cluster. To solve this problem, EBAM counts the targets within each cluster formed by the DAM protocol, assuming each target emits an equal amount of power. Since the power of a single target is known, the number of targets in the cluster can be calculated from the total signal power measured in the cluster, as sketched below. The third protocol, EMLAM, uses an expectation-maximization technique for intra-cluster target counting. It assumes targets are not clustered while entering the field; when targets move together, cluster leaders exchange information to track them. New target positions are estimated using a prediction model, and Minimum Mean Square Error (MMSE) estimation is used for estimating target locations and signal powers.
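The EBAM counting step described above reduces to a simple division, as the following C++ sketch shows; the function name and the rounding choice are illustrative assumptions.

#include <cmath>
#include <vector>

// Number of targets in a cluster = total received power attributed to the
// cluster divided by the (known, assumed equal) per-target power.
int estimateTargetCount(const std::vector<double>& memberPowers,
                        double singleTargetPower) {
    double total = 0;
    for (double p : memberPowers) total += p;     // aggregate cluster power
    return static_cast<int>(std::round(total / singleTargetPower));
}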
  • NON-LINEAR MEASUREMENT MODEL
In the non-linear measurement model a particle filter approach is used for tracking targets in the presence of spurious measurements arising from effects such as target wakes, multi-path and tethered decoys. In order to handle the problem of intermittent measurements appearing behind the target, a suitable measurement function is derived. The centroid reported by a sensor may be disturbed to point behind the actual target position due to environmental effects; this is called the wake effect. The particle filter accommodates this bias by modelling it with a discrete hidden Markov model. Simulation results show that a target can still be tracked using particle filtering despite intermittent corruptions of the measurement process.
  • DISTRIBUTED STATE REPRESENTATION
In this method the state-space model of the physical phenomenon is exploited. The dimension of the state space increases with the presence and interaction of multiple targets. The issue of designing a distributed sensing system to support in-network monitoring is addressed here, and multiple target tracking is treated as an estimation problem: the position of a target at time t is estimated from the state, and the actual location can be computed from the collected data. Each target affects a local region of sensors in a distributed sensor network; when multiple targets are in the same neighborhood, the data are shared across the network. The higher-dimensional tracking problem is broken into simpler problems by decoupling target states into locations and identities. A joint estimation, similar to a centralized tracking approach, is carried out when two targets move close to each other; when the targets move apart, the method reverts to single-target tracking. However, resolving the confusion between two targets requires identity management.
  • OPTIMIZING TREE RECONFIGURATION
This method focuses on energy-efficient detection and tracking of mobile targets by introducing the concept of dynamic convoy tree-based collaboration (DCTC), whose framework is to track the target as it moves. The participating sensor nodes change along with the target: the tree is reconfigured to add and remove nodes as the target moves. DCTC is formulated as an optimization problem that finds the convoy tree sequence with the lowest energy consumption in two steps. The first step is an interception-based reconfiguration algorithm that reconfigures the tree for energy efficiency; the next step handles root migration. Results demonstrate that this scheme has the lowest energy consumption.
  • ENERGY-QUALITY TRADEOFFS
Here the energy-quality tradeoff of random activation and selective activation of the sensor nodes for localization and tracking of mobile targets is studied. Several approaches, namely naïve activation, randomized activation, selective activation based on trajectory prediction, and duty-cycled activation, are applied. The study examines the impact of the activation and deployment of sensor nodes, their sensing range, the capabilities of activated and un-activated nodes, and the target mobility model. Simulations show that selective activation combined with a good prediction algorithm provides greater energy savings while tracking. Besides, duty-cycled activation displays better flexibility and a more dynamic tradeoff in energy expenditure when used with selective activation.
  • ENTROPY-BASED SENSOR SELECTION HEURISTIC
This method proposes a novel entropy-based heuristic for sensor selection in target localization. Informative sensors are selected in each tracking step by a greedy sensor selection strategy that repeatedly selects the unused sensor with the maximal expected information gain. Its purpose is to evaluate the expected information gain attributable to each sensor, so that on average the selection yields the greatest entropy reduction of the target location distribution. In this setting, the reduction in localization uncertainty attributable to a single sensor is primarily affected by the entropy of the distribution of the sensor's view of the target location and the entropy of the sensor's sensing model for the actual location. Simulation shows that the heuristic yields a reduction in entropy, given the previous target location and its distribution, the sensor locations and the sensing models.[2] A sketch of the greedy selection loop follows.
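The greedy selection strategy can be sketched in C++ as follows. The expected-information-gain computation itself is left abstract (passed in as a function), since only the selection strategy is described above; all names are illustrative.

#include <functional>
#include <set>
#include <vector>

// Repeatedly pick the unused sensor whose expected entropy reduction of the
// target location distribution is largest, up to a sensor budget.
std::vector<int> selectSensors(int numSensors, int budget,
                               const std::function<double(int, const std::set<int>&)>&
                                   expectedEntropyReduction) {
    std::set<int> used;
    std::vector<int> order;
    for (int step = 0; step < budget; ++step) {
        int best = -1;
        double bestGain = -1;
        for (int s = 0; s < numSensors; ++s) {
            if (used.count(s)) continue;
            double gain = expectedEntropyReduction(s, used);  // expected information gain
            if (gain > bestGain) { bestGain = gain; best = s; }
        }
        if (best < 0) break;
        used.insert(best);
        order.push_back(best);
    }
    return order;   // sensors in the order they should be queried
}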
  • SUMMARY
All the methods discussed in this chapter use the OSI architecture, in which the protocol stack has an application layer, transport layer, network layer, data link layer and physical layer. The proposed LESOP protocol, in contrast, does not use the OSI architecture but a cross-layer architecture: the transport and network layers are excluded from the LESOP protocol stack.
  • CHAPTER: DESIGN AND ALGORITHM
  • INTRODUCTION
A cross-layer design perspective is adopted in LESOP for high protocol efficiency, where direct interactions between the Application layer and the Medium Access Control (MAC) layer are exploited. Unlike the classical Open Systems Interconnect (OSI) paradigm of communication networks, the Transport and Network layers are excluded in LESOP to simplify the protocol stack. A lightweight yet efficient target localization algorithm is proposed and implemented, and a Quality of Service (QoS) knob is found to control the tradeoff between the tracking error and the network energy consumption.[3] This chapter briefly discusses the modules and the overall design of the LESOP protocol.
  • LESOP MODULES
The module architecture of a LESOP node is shown in Figure 3.1. The modules are named following the OSI tradition, and the LESOP architecture effectively conforms to the proposed two-layer EWI platform. Inter-module information exchange is done through messages and packets, while inter-node communication is done through packets and busy tones: packets go through the primary radio, while busy tones are sent by the secondary wakeup radio. A set of inter-module messages, inter-node packets and tones, and module states is defined for LESOP. As noted above, the Transport and Network layers are omitted to simplify the protocol stack. All radio packets carry a single source address, namely the location coordinates Li of the source sensor node; they carry no destination address and are broadcast wirelessly to the source node's neighborhood. In the LESOP design, the radio range is assumed to be two times larger than the sensing range; this assumption keeps the node set that detects the target within radio range of each other. The modules are the Application layer (APP), Sensor (SEN), Wakeup radio (WAR), MAC layer (MAC) and Physical layer (PHY).
Figure 3.1 LESOP modules [3]
  • Application Layer
The role of the Application layer is the overall control of the node's functionality. All inter-node communications (packets or busy tones) start and end at a node's Application layer. The Application layer can be in one of four states: IDLE, WAIT, HEADI and HEADII. The state diagram is shown in Figure 3.2.
Figure 3-1 Definitions of messages, packets, busy tones and module states [3]
  • IDLE State: Initially all deployed sensor nodes are in the IDLE state. In this state, the target is assumed to be undetected in the node's neighborhood. The Application layer periodically polls the sensor (sending a SEN_POLL message) and reads the sensing measurement (retrieving a SEN_MEASURE message). The polling period determines how fast the target can be detected after appearing in the surveillance region; more specifically, the detection delay is the random time difference between the instant the target appears and the first time it is detected. Once the target is detected, the Application layer sends the busy tone Ba through the wakeup radio and transfers to the HEADI state. Ba forces all the neighboring sensor nodes to become active. On the other hand, if Ba arrives first, the Application layer sends SEN_POLL and transfers to the WAIT state.
Figure 3-2 Application layer state diagram [3]
  • WAIT State: In the WAIT state, the Application layer first retrieves the SEN_MEASURE message from the sensor. If the sensed measurement does not exceed the detection threshold, the node returns to the IDLE state at the end of the track interval. Otherwise the detection coefficient is calculated locally, included in a DEC_INFO packet and forwarded to the MAC layer.
The first busy tone Bb indicates that the leader node H2 has been elected in the neighborhood. When the DEC_READY message is received from the MAC layer, the node becomes H2 if H2 has not yet been elected; correspondingly, the Application layer transfers to the HEADII state and sends a DEC_CANCEL message to the MAC layer to cancel the pending DEC_INFO packet. If it is known that H2 has already been elected when DEC_READY is received, the Application layer replies to the MAC layer with the confirmation DEC_SET message. The second busy tone Bb indicates that the target location estimation procedure has ended; when it arrives, the Application layer sends a DEC_CANCEL message to the MAC layer and transfers to the IDLE state.
  • HEADI State: In the HEADI state, the node behaves as the H1 node. The Application layer waits for the second busy tone Bb from the wakeup radio. When the desired Bb arrives, it sends a TRACK_INFO packet through the primary radio and waits for the acknowledgement, a TRACK_ACK packet, from the H2 node. After the exchange, the Application layer returns to the IDLE state. If the second Bb does not arrive within the track interval limit, the node decides that the target has disappeared or that errors have occurred; the Application layer transfers to the IDLE state, and the track record is then forwarded to the sink by other mechanisms.
  • HEADII State: In the HEADII state, the node behaves as the H2 node. First, the Bb busy tone is broadcast through the wakeup radio, announcing that H2 has been elected. A RADIO_ACT message is then sent to set the Physical layer to the RECEIVE/IDLE state (turning on the primary radio).
The Application layer then receives DEC_INFO packets from the neighborhood in sequence, and the detection information fusion process is executed as described in the LESOP protocol algorithm. Once the terminating condition is met (i.e., the optimal number of nodes has been reached), or the track interval time limit is reached, the target location is estimated by the Optimal Linear Combining method. The second Bb is then broadcast through the wakeup radio, indicating that the estimation procedure has finished. After broadcasting the second Bb, the Application layer waits for the TRACK_INFO packet from H1 and responds with the acknowledgement, a TRACK_ACK packet. The Application layer then sends a RADIO_SLE message to set the Physical layer to the SLEEP state (turning off the primary radio). When the track interval time is reached, Ba is broadcast through the wakeup radio and the Application layer transfers to the HEADI state.
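The four Application-layer states and their main transitions, as described above, can be condensed into the following C++ sketch. The event names follow the text, but the dispatch structure and the simplifications (for example, treating the first and second busy tone Bb as a single event type) are assumptions made for illustration; this is not the project's simulation code.

enum class AppState { IDLE, WAIT, HEADI, HEADII };
enum class Event { SEN_MEASURE_ABOVE_THRESH, BUSY_TONE_A, BUSY_TONE_B,
                   DEC_READY, TRACK_INTERVAL_TIMEOUT };

AppState next(AppState s, Event e, bool h2AlreadyElected) {
    switch (s) {
    case AppState::IDLE:
        if (e == Event::SEN_MEASURE_ABOVE_THRESH) return AppState::HEADI;   // detected first: send Ba, become H1
        if (e == Event::BUSY_TONE_A)              return AppState::WAIT;    // woken by a neighbour's Ba
        break;
    case AppState::WAIT:
        if (e == Event::DEC_READY && !h2AlreadyElected) return AppState::HEADII; // first to win the channel: become H2
        if (e == Event::BUSY_TONE_B)                    return AppState::IDLE;   // second Bb: estimation finished
        break;
    case AppState::HEADI:
        if (e == Event::BUSY_TONE_B)             return AppState::IDLE;     // exchange TRACK_INFO/TRACK_ACK with H2
        if (e == Event::TRACK_INTERVAL_TIMEOUT)  return AppState::IDLE;     // target lost or error occurred
        break;
    case AppState::HEADII:
        if (e == Event::TRACK_INTERVAL_TIMEOUT)  return AppState::HEADI;    // H2 broadcasts Ba and becomes the new H1
        break;
    }
    return s;   // otherwise remain in the current state
}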
  • MAC Layer
The MAC layer receives the DEC_INFO packet from the Application layer and calculates a time delay for it. It waits until the delay expires and then performs radio carrier sensing. If the primary radio channel is busy, the MAC layer waits for another delay equal to the DEC_INFO packet transmission delay. When the radio channel is free, DEC_READY is sent to the Application layer. If the response is DEC_SET, the DEC_INFO packet is forwarded to the Physical layer and broadcast; if the response is DEC_CANCEL, the DEC_INFO packet is deleted in the MAC layer. Whenever a DEC_CANCEL message is received, the DEC_INFO packet currently waiting in the buffer is deleted. After receiving TRACK_INFO or TRACK_ACK packets from the Application layer, the MAC performs radio carrier sensing and waits until the radio channel is free; the packet is then forwarded to the Physical layer and broadcast. The MAC layer also forwards all packets received from the Physical layer to the Application layer. A collision of DEC_INFO packets can occur when the difference between the MAC time delays of two nodes is smaller than the radio propagation delay across the neighborhood; since the radio range is small in sensor networks, this delay is small and the collision probability is practically low. The LESOP protocol is essentially robust to such collisions, since H2 can ignore a collision and wait for the next successfully received DEC_INFO packet. Channel error control coding (ECC) functionality is added to the MAC layer. Traditionally, ECC is defined at the Data Link layer, of which MAC is a sub-layer; placing it here simply provides an efficient way of presentation.
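A compact C++ sketch of the MAC handling of a DEC_INFO packet, as described above, is given below. The helper callbacks (carrierBusy, askApp and so on) are placeholders standing in for the real module interactions and are not taken from the code in the appendix.

#include <functional>

struct MacContext {
    std::function<bool()> carrierBusy;   // primary-radio carrier sensing
    std::function<void(double)> wait;    // block for the given delay (seconds)
    std::function<bool()> askApp;        // send DEC_READY; true if the reply is DEC_SET
    std::function<void()> sendToPhy;     // forward DEC_INFO to the Physical layer for broadcast
    double txDelay;                      // DEC_INFO packet transmission delay
};

void handleDecInfo(MacContext& mac, double backoffDelay) {
    mac.wait(backoffDelay);              // computed time delay before contending
    while (mac.carrierBusy())            // channel busy: wait one packet time and sense again
        mac.wait(mac.txDelay);
    if (mac.askApp())                    // DEC_READY answered with DEC_SET: go ahead
        mac.sendToPhy();
    // otherwise (DEC_CANCEL) the buffered DEC_INFO packet is simply dropped
}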
  • Physical Layer
The Physical layer of the primary radio is responsible for broadcasting radio packets to the node's neighborhood, which in our simplified model is a circular region whose radius is the radio range. It also supplies carrier sensing capability to the MAC layer and detects radio packet collisions on the primary radio. The Physical layer can be in one of three states, TRANSMIT, RECEIVE/IDLE and SLEEP, which correspond to the three modes of the primary radio: transmitting, receiving/idle and sleeping, respectively. When receiving forwarded packets from the MAC layer, the Physical layer goes to the TRANSMIT state and returns to its previous state after transmission. The Application layer configures the Physical layer to the RECEIVE/IDLE or SLEEP state by RADIO_ACT or RADIO_SLE messages, respectively.
  • Wakeup Radio and Sensors
The wakeup radio and the sensor modules are under the control of the Application layer. The wakeup radio broadcasts busy tones forwarded from the Application layer and delivers detected busy tones to the Application layer. After receiving a SEN_POLL message from the Application layer, the sensor module is activated, senses, and responds with the sensing measurement by sending a SEN_MEASURE message.[1]
  • HIGH LEVEL LESOP PROTOCOL DESCRIPTION
A low-complexity processing algorithm for target tracking, based on the sensor measurements, is assumed. The high-level LESOP protocol description, which is an iterative procedure, is represented diagrammatically in Figure 3-3. The process is described below.
  • The node distribution can be modelled as a Poisson process.
  • The node that first detects the target is considered as first leader node.
  • The neighbouring nodes are selected based on the following two conditions:
  • Sensing measurement of sensor node > detection threshold of sensor node.
  • Distance between leader node and sensor node < range of radios.
Figure 3-3 Overall block diagram (node deployment → electing leader node 1 → determining neighbouring nodes → calculating fusion-detection coefficients at every node → electing leader node 2 → determining the optimized set of nodes → determining the estimated target coordinates → leader node 1 sends track info to leader node 2 → new track info by leader node 2 → leader node 2 acts as leader node 1 → loop)
  • The fusion-detection coefficients are calculated using the following parameters:
  • Sensing Noise Variance
  • Sensing Measurement of sensor node
  • Sensing Gain of Sensor Node
  • The node with the highest fusion-detection coefficient is elected as the next leader node. The “detection information” fusion is done at this newly elected leader node.
  • A selected number of nodes from the node set that has detected the target participate in the fusion by sending their detection information to the newly elected leader node. The nodes participating in the detection fusion are determined based on the Improvement Ratio of Accuracy calculated for the node set, which is given as
Improvement Ratio of Accuracy = min{fusion-detection coefficients} / (sum of all fusion-detection coefficients)
  • The estimated target coordinates are calculated using optimal linear combining (a sketch of one complete iteration is given after this list).
  • Leader node 1 sends the old track information, which includes a profile of the target, to leader node 2.
  • The leader node2 generates the new track information.
  • This continues until the target is tracked.
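The following C++ sketch puts the steps of this list together for a single tracking iteration: computing fusion-detection coefficients, electing H2 as the node with the highest coefficient, adding nodes until the Improvement Ratio of Accuracy falls below a QoS threshold, and estimating the target coordinates by a coefficient-weighted linear combination. Since the exact coefficient formula is not reproduced here, the expression used below (gain times measurement divided by noise variance) and all names are illustrative assumptions, not the project code.

#include <algorithm>
#include <vector>

struct DetectingNode {
    double x, y;            // node coordinates
    double measurement;     // sensing measurement (above the detection threshold)
    double gain;            // sensing gain
    double noiseVariance;   // sensing noise variance
};

// Illustrative fusion-detection coefficient: stronger, cleaner measurements give larger values.
double fusionCoefficient(const DetectingNode& n) {
    return n.gain * n.measurement / n.noiseVariance;
}

// Improvement Ratio of Accuracy for the current node set, as defined above.
double improvementRatio(const std::vector<double>& coeffs) {
    double minC = *std::min_element(coeffs.begin(), coeffs.end());
    double sum = 0;
    for (double c : coeffs) sum += c;
    return minC / sum;
}

// One iteration: elect H2, stop adding nodes once the improvement ratio drops below
// a QoS threshold, then estimate the target by a coefficient-weighted combination.
void trackStep(const std::vector<DetectingNode>& nodes, double qosThreshold,
               double& targetX, double& targetY, size_t& leaderH2) {
    std::vector<double> c(nodes.size());
    for (size_t i = 0; i < nodes.size(); ++i) c[i] = fusionCoefficient(nodes[i]);

    leaderH2 = std::max_element(c.begin(), c.end()) - c.begin();   // highest coefficient wins

    // Fuse nodes in decreasing coefficient order until the marginal gain is small.
    std::vector<size_t> order(nodes.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) { return c[a] > c[b]; });

    std::vector<double> used;
    double wsum = 0;
    targetX = targetY = 0;
    for (size_t k : order) {
        used.push_back(c[k]);
        targetX += c[k] * nodes[k].x;       // linear combining weighted by the coefficients
        targetY += c[k] * nodes[k].y;
        wsum += c[k];
        if (used.size() > 1 && improvementRatio(used) < qosThreshold) break;
    }
    targetX /= wsum;
    targetY /= wsum;
}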
Project Flow:
  • A node detects the target.
  • It sets itself as the first leader node (H1).
  • H1 sends wake-up message 1 to the RF channel.
  • The RF channel sends wake-up message 1 to all nodes.
  • The next leader node (H2) is elected.
  • The newly elected H2 sends wake-up message 2 to the RF channel.
  • The RF channel transmits wake-up message 2 to all nodes.
  • The second wake-up message 2 is sent once the target estimation procedure is completed.
  • The first leader node (H1) sends the track information to the RF channel to be transmitted to H2.
  • H2 sends the track information acknowledgement to the RF channel, which then sends it to leader node H1.
  • H2 now acts as H1 and sends wake-up message 1 to all other nodes through the RF channel.
  • The next leader node is elected and the procedure continues until the target moves out of range.
  • The energy consumed by the sensor nodes remains constant over a given period of time; even though the number of nodes increases, the network energy consumption remains constant.
  • SUMMARY
An LESOP protocol is proposed for target tracking in wireless sensor networks, based on a holistic cross-layer design perspective. Linear processing is employed for target location estimation. Compared with the optimal nonlinear estimation, the proposed linear processing achieves significantly lower complexity, which makes it suitable for sensor networks implementation. A QoS knob coefficient is found in optimizing the fundamental tradeoff. Moreover, the protocol is fully scalable because the fusion coefficient is calculated locally on individual sensor nodes. In the protocol design of LESOP, direct interactions between the top Application layer and the bottom MAC/Physical layers are exploited. The traditional Network layer and Transport layer have been removed, thus simplifying the protocol stack. Some traditional functionality of the two layers is merged into the top and the bottom layers.
  • CHAPTER: IMPLEMENTATION
  • INTRODUCTION
This chapter discusses the implementation of the modules and the end results. The implementation is performed in the simulation software OMNeT++. OMNeT++ is a public-source, component-based, modular and open-architecture simulation environment with strong GUI support and an embeddable simulation kernel. It requires Microsoft Visual C++ on Windows and can be installed on both Windows and Linux; version 3.0 is used in this project. Its primary application area is the simulation of communication networks, and because of its generic and flexible architecture it has also been used successfully in other areas such as the simulation of IT systems, queueing networks, hardware architectures and business processes.
  • Starting process
To implement a first simulation from scratch, the following steps are needed:[7]
1. Create a working directory called tictoc, and cd to this directory.
2. Describe the example network by creating a topology file. A topology file is a text file that identifies the network's nodes and the links between them; it can be created with any text editor. Let's call it tictoc1.ned. The file is easiest to read from the bottom up. Here is what it defines:
  • A network called tictoc1 is defined, which is an instance of the module type Tictoc1 (network..endnetwork);
  • Tictoc1 is a compound module assembled from two submodules, tic and toc. The two submodules are instances of the same module type, Txc1. Tic's output gate, named out, is connected to toc's input gate, named in, and vice versa (module..endmodule). There is a 100 ms propagation delay in both directions;
  • Txc1 is atomic on the NED level and will be implemented in C++, so Txc1 is known as a simple module type. Txc1 has one input gate and one output gate, named in and out respectively (simple..endsimple).
3. Implement the functionality of the simple module Txc1 by writing the C++ file txc1.cc (a sketch of such a module is given after these steps). The C++ class Txc1 represents the Txc1 simple module type; it must be subclassed from cSimpleModule and registered in OMNeT++ with the Define_Module() macro. Two methods of cSimpleModule are redefined: initialize() and handleMessage(). The simulation kernel invokes the first one only once and the second one whenever a module receives a message. In initialize(), a message object (cMessage) is created and sent out via the gate out. Since this gate is linked to the other module's input gate, the simulation kernel delivers the message to the other module, as the argument to handleMessage(), after the 100 ms propagation delay assigned to the link in the NED file. The other module just sends it back (incurring another 100 ms delay), which results in a continuous ping-pong. Messages (packets, frames, jobs, etc.) and events (timers, timeouts) are represented in OMNeT++ by cMessage objects or subclasses of it; after you send or schedule them, the simulation kernel holds them in the "future events" (scheduled events) list until their time comes and they are delivered to the modules via handleMessage(). Note that there is no stopping condition built into this simulation: it would carry on forever, but it can be stopped from the GUI.
4. Create the Makefile, which is used to compile and link the program into the executable tictoc: $ opp_makemake This command creates a Makefile in the working directory tictoc. (Windows/MSVC users: the command to create a Makefile.vc is opp_nmakemake.)
5. Compile and link the simulation by issuing the make command: $ make
6. Create the configuration file, conventionally called omnetpp.ini, which tells the simulation program which network to simulate and lets us assign parameter values, for example:
# Lines beginning with `#' are comments
[Parameters]
tictoc4.toc.limit = 5
# argument to exponential() is the mean; truncnormal() returns values from
# the normal distribution truncated to nonnegative values
tictoc6.tic.delayTime = exponential(3)
tictoc6.toc.delayTime = truncnormal(3,1)
tictoc9.n = 5
tictoc10.n = 5
tictoc11.n = 5
If the configuration file does not specify the network, the simulation program asks for it in a dialog when it starts.
7. Once the above steps are completed, launch the simulation by issuing this command, and you should get the OMNeT++ simulation window: $ ./tictoc (On Windows, the command is just tictoc.)
8. To start the simulation, press the Run button on the toolbar. Tic and toc then exchange messages with each other. The main window toolbar displays the simulated time. This is virtual time; it has nothing to do with the wall-clock (actual) time that the program takes to execute. How many simulated seconds can be computed in one real-world second depends on the speed of the hardware and, even more, on the nature and complexity of the simulation model itself. Note that a node takes zero simulation time to process a message; the propagation delay on the connections is the only thing that makes simulation time pass in this model.
9. The animation can be sped up or slowed down with the slider at the top of the graphics window. The simulation can be stopped by hitting F8 (equivalent to the STOP button on the toolbar); F4 single-steps through it, F5 runs it with animation, F6 runs it without animation, and F7 is express mode, which fully turns off tracing features for maximum speed. Note the event/sec and simsec/sec gauges on the status bar of the main window.
10. You can exit the simulation program by choosing File|Exit or clicking its Close icon.[7]
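For reference, a sketch of what the txc1.cc simple module implementation looks like is given below, written in the OMNeT++ 3.x style used in this project. It should be read as an illustration of the structure (Define_Module(), initialize(), handleMessage()) rather than a verbatim copy of the tutorial file.

#include <string.h>
#include <omnetpp.h>

class Txc1 : public cSimpleModule
{
  protected:
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

// Register the module class with the simulation kernel
Define_Module(Txc1);

void Txc1::initialize()
{
    // Only one of the two instances (tic) bootstraps the ping-pong; the
    // 100 ms delay is a property of the connection defined in the NED file.
    if (strcmp("tic", name()) == 0) {
        cMessage *msg = new cMessage("tictocMsg");
        send(msg, "out");
    }
}

void Txc1::handleMessage(cMessage *msg)
{
    // Whatever arrives is sent straight back out, so the message keeps
    // bouncing between tic and toc until the simulation is stopped.
    send(msg, "out");
}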
  • SNAPSHOTS
In the OMNeT++ 3.0 environment the following command is used to generate the makefile:
% opp_nmakemake
The object files for all the cpp files, together with the executable for running the simulation network, are generated using the following command:
% nmake -f Makefile.vc
The executable is then run, which generates the simulation network. The run proceeds through the following steps.
STEP 1: The node that first detects the target is considered the first leader node. Node[31] sends wake-up message 1 to the RFChannel.
Figure 4-1 Sensor node[31] elected as H1
STEP 2: Wake-up message 1 is transmitted to all the other nodes. This is done through the RFChannel.
Figure 4-2 RFC transmits wake-up msg 1 to all other nodes
STEP 3: The next leader node is elected. Node[17] sends wake-up message 2 to the RFChannel.
Figure 4-3 Sensor node[17] elected as H2
STEP 4: The first wake-up message 2 is transmitted to all the other nodes to indicate that leader node 2 has been elected.
Figure 4-4 RFC transmits wake-up msg 2 to all other nodes
STEP 5: The second wake-up message 2 is transmitted to the RFChannel from sensor node[17] once it finishes the target location estimation procedure.
Figure 4-5 Sensor node[17] sends second wake-up msg 2
STEP 6: The second wake-up message 2 is transmitted to all the other nodes through the RFChannel.
Figure 4-6 RFC transmits wake-up msg 2 to all other nodes
STEP 7: Sensor node[31], H1, sends the track information to the RFChannel to be transmitted to H2, which is sensor node[17].
Figure 4-7 Sensor node[31] sends track info to RFC
STEP 8: Sensor node[17], H2, sends the track information acknowledgement to the RFChannel, which then sends it to leader node H1, sensor node[31].
Figure 4-8 Sensor node[17] sends track ack to RFC
STEP 9: Sensor node[17] now acts as H1 and sends wake-up message 1 to all other nodes through the RFChannel.
Figure 4-9 Sensor node[17] acts as H1
STEP 10: The next leader node is elected and the procedure continues until the target moves out of range.
STEP 11: A graph is generated with time along the X-axis and the network energy consumption along the Y-axis.
Figure 4-10 Network energy consumption over time
The energy consumed by the sensor nodes remains constant over a given period of time; even though the number of nodes increases, the network energy consumption remains constant.
  • SUMMARIZATION
OMNeT++ 3.0 is run under Windows XP to generate the simulation environment. The C++ programming language is used and the desired output is examined. A sensor network with 80 nodes is created and the corresponding messages are transferred between the nodes. The vector graph is generated once the simulation ends. The conclusion and future enhancements are described in the fifth chapter.
  • CHAPTER: CONCLUSION AND FUTURE ENHANCEMENTS
  • GENERAL
This chapter discusses the future enhancements and the conclusion of the target tracking in wireless sensor network.
  • Conclusion
The idea behind cross-layer design in sensor networks is to optimize the basic tradeoff in sensor networks: the tradeoff between application-specific QoS gain and energy expenditure. At present, cross-layer optimizations need to be done in a holistic manner, as the research community is still converging on an acceptable new architecture. However, such holism may not necessarily be affordable in the future: as network complexity grows, it is hierarchical layering that provides long-term efficiency and proliferation, since altering one layer of the protocol stack does not require rewriting the entire stack. This dissertation rests heavily on an organized study of cross-layer design in sensor networks. The first chapter discusses the overall description of the project and the area in which the project is carried out. The different protocols and methods used to perform target tracking are explained in Chapter 2. The LESOP protocol design is discussed in the third chapter, along with the algorithm of each module. The implementation of the protocol and the end results are discussed in the fourth chapter.
  • Future enhancements
Future enhancements in wireless sensor networks are expected in the area of the Embedded Wireless Interconnect (EWI). EWI is intended to replace the existing OSI structure and is built on two layers, the system layer and the wireless link layer. The experimental and theoretical background studies lead to a description of the general interface syntax between the two layers, and suggest that treating source and channel coding separately in the wireless link layer and the system layer can asymptotically achieve the optimal distortion versus energy consumption trade-off in reach-far wireless sensor networks.
  • APPENDIX
  • MAIN MODULE
The main module includes submodules for the application layer, MAC layer, physical layer, sensor, recorder and the target.
// MAC
ModuleInterface(SensorHostMac)
    // parameters:
    Parameter(txRate, ParType_Numeric ParType_Const)
    Parameter(TD_MAX, ParType_Numeric ParType_Const)
    // gates:
    Gate(toPhy, GateDir_Output)
    Gate(toApp, GateDir_Output)
    Gate(fromApp, GateDir_Input)
    Gate(fromPhy, GateDir_Input)
EndInterface
Register_ModuleInterface(SensorHostMac)

// APP
ModuleInterface(SensorHostApp)
    // parameters:
    Parameter(Location_X, ParType_Numeric ParType_Const)
    Parameter(Location_Y, ParType_Numeric ParType_Const)
    Parameter(Lambda_app, ParType_Numeric ParType_Const)
    Parameter(Dec_Interval, ParType_Numeric ParType_Const)
    Parameter(MicNoise, ParType_Numeric ParType_Const)
    Parameter(DetPkLen, ParType_Numeric ParType_Const)
    Parameter(MicSample, ParType_Numeric ParType_Const)
    Parameter(Sensing_Interval, ParType_Numeric ParType_Const)
    Parameter(DecayComponent, ParType_Numeric ParType_Const)
    Parameter(TrackPkLen, ParType_Numeric ParType_Const)
    Parameter(ACKPkLen, ParType_Numeric ParType_Const)
    // gates:
    Gate(toMac, GateDir_Output)
    Gate(toSen, GateDir_Output)
    Gate(fromMac, GateDir_Input)
    Gate(fromSen, GateDir_Input)
EndInterface
Register_ModuleInterface(SensorHostApp)

// CHANNEL
ModuleInterface(WirelessChannel)
    // parameters:
    Parameter(DecayComponent, ParType_Numeric ParType_Const)
    Parameter(Shadowing, ParType_Numeric)
    Parameter(Number_Host, ParType_Numeric ParType_Const)
    Parameter(Propagation_Delay, ParType_Numeric ParType_Const)
    Parameter(RF_Range, ParType_Numeric ParType_Const)
    // gates:
    Gate(In, GateDir_Input)
EndInterface
Register_ModuleInterface(WirelessChannel)

// RECORDER
ModuleInterface(SystemRecorder)
    // parameters:
    Parameter(Number_Host, ParType_Numeric ParType_Const)
    Parameter(Interval, ParType_Numeric ParType_Const)
    // gates:
    Gate(In, GateDir_Input)
EndInterface
Register_ModuleInterface(SystemRecorder)

// TARGET
ModuleInterface(TargetLocation)
    // parameters:
    Parameter(Energy, ParType_Numeric)
    Parameter(DecayComponent, ParType_Numeric ParType_Const)
    Parameter(Number_Host, ParType_Numeric ParType_Const)
    Parameter(Interval, ParType_Numeric ParType_Const)
    Parameter(Max_V, ParType_Numeric ParType_Const)
    Parameter(Sen_Range, ParType_Numeric ParType_Const)
    Parameter(Range, ParType_Numeric ParType_Const)
    Parameter(Propagation_Delay, ParType_Numeric ParType_Const)
    Parameter(Enter_Time, ParType_Numeric ParType_Const)
    Parameter(Leave_Time, ParType_Numeric ParType_Const)
EndInterface
Register_ModuleInterface(TargetLocation)

// PHYSICAL
ModuleInterface(SensorHostPhy)
    // parameters:
    Parameter(RFNoise, ParType_Numeric ParType_Const)
    Parameter(RFPower, ParType_Numeric ParType_Const)
    Parameter(P_Activate, ParType_Numeric ParType_Const)
    Parameter(P_Transmit, ParType_Numeric ParType_Const)
    Parameter(Threshold, ParType_Numeric ParType_Const)
    // gates:
    Gate(fromMac, GateDir_Input)
    Gate(RFIn, GateDir_Input)
    Gate(toMac, GateDir_Output)
EndInterface
Register_ModuleInterface(SensorHostPhy)

// SENSOR
ModuleInterface(Sensor)
    // parameters:
    Parameter(MicNoise, ParType_Numeric ParType_Const)
    Parameter(MicSample, ParType_Numeric ParType_Const)
    Parameter(E_SEN, ParType_Numeric ParType_Const)
    // gates:
    Gate(SenIn, GateDir_Input)
    Gate(fromApp, GateDir_Input)
    Gate(toApp, GateDir_Output)
EndInterface
Register_ModuleInterface(Sensor)

// submodule 'mac':
modtype = _getModuleType("SensorHostMac");
cModule *mac_p = modtype->create("mac", mod);
int mac_size = 1;
// parameter assignments:
mac_p->par("txRate") = mod->par("txRate");
mac_p->par("TD_MAX") = mod->par("TD_MAX");
_readModuleParameters(mac_p);

// submodule 'app':
modtype = _getModuleType("SensorHostApp");
cModule *app_p = modtype->create("app", mod);
int app_size = 1;
// parameter assignments:
app_p->par("Location_X") = mod->par("Location_X");
app_p->par("Location_Y") = mod->par("Location_Y");
app_p->par("MicNoise") = mod->par("MicNoise");
app_p->par("MicSample") = mod->par("MicSample");
app_p->par("DetPkLen") = mod->par("DetPkLen");
app_p->par("TrackPkLen") = mod->par("TrackPkLen");
app_p->par("ACKPkLen") = mod->par("ACKPkLen");
app_p->par("Sensing_Interval") = mod->par("Sensing_Interval");
app_p->par("DecayComponent") = mod->par("DecayComponent");
app_p->par("Lambda_app") = mod->par("Lambda_app");
app_p->par("Dec_Interval") = mod->par("Dec_Interval");
_readModuleParameters(app_p);

// submodule 'sen':
modtype = _getModuleType("Sensor");
cModule *sen_p = modtype->create("sen", mod);
int sen_size = 1;
// parameter assignments:
sen_p->par("MicNoise") = mod->par("MicNoise");
sen_p->par("MicSample") = mod->par("MicSample");
sen_p->par("E_SEN") = tmpval.setDoubleValue(new Expr0(mod));
_readModuleParameters(sen_p);

// submodule 'phy':
modtype = _getModuleType("SensorHostPhy");
cModule *phy_p = modtype->create("phy", mod);
int phy_size = 1;
// parameter assignments:
phy_p->par("RFNoise") = mod->par("RFNoise");
phy_p->par("RFPower") = mod->par("RFPower");
phy_p->par("Threshold") = mod->par("RFThreshold");
phy_p->par("P_Activate") = mod->par("P_Activate");
phy_p->par("P_Transmit") = mod->par("P_Transmit");
_readModuleParameters(phy_p);

// submodule 'batt':
modtype = _getModuleType("Battery");
cModule *batt_p = modtype->create("batt", mod);
int batt_size = 1;
// parameter assignments:
batt_p->par("EnergyLevIni") = mod->par("EnergyLevIni");
_readModuleParameters(batt_p);

// connections:
cGate *srcgate, *destgate;
cChannel *channel;
cPar *par;
// connection
srcgate = _checkGate(phy_p, "toMac");
destgate = _checkGate(mac_p, "fromPhy");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(sen_p, "toApp");
destgate = _checkGate(app_p, "fromSen");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(mac_p, "toPhy");
destgate = _checkGate(phy_p, "fromMac");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(app_p, "toSen");
destgate = _checkGate(sen_p, "fromApp");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(mac_p, "toApp");
destgate = _checkGate(app_p, "fromMac");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(app_p, "toMac");
destgate = _checkGate(mac_p, "fromApp");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(mod, "SenIn");
destgate = _checkGate(sen_p, "SenIn");
srcgate->connectTo(destgate);
// connection
srcgate = _checkGate(mod, "RFIn");
destgate = _checkGate(phy_p, "RFIn");
srcgate->connectTo(destgate);
// this level is done -- recursively build submodules too
mac_p->buildInside();
app_p->buildInside();
sen_p->buildInside();
phy_p->buildInside();
batt_p->buildInside();
}

// submodule 'target':
modtype = _getModuleType("TargetLocation");
cModule *target_p = modtype->create("target", mod);
int target_size = 1;
// parameter assignments:
target_p->par("Energy") = tmpval.setDoubleValue(new Expr1(mod));
target_p->par("Number_Host") = mod->par("Number_Host");
target_p->par("Propagation_Delay") = mod->par("A_Propagation_Delay");
target_p->par("Interval") = mod->par("Sen_Interval");
target_p->par("Max_V") = mod->par("Max_V");
target_p->par("Sen_Range") = mod->par("Sen_Range");
target_p->par("Range") = mod->par("Range_Square"); target_p->par("DecayComponent") = mod->par("A_DecayComponent"); target_p->par("Enter_Time") = mod->par("Target_Enter_Time"); target_p->par("Leave_Time") = mod->par("Target_Leave_Time"); _readModuleParameters(target_p); // submodule 'recorder': modtype = _getModuleType("SystemRecorder"); cModule *recorder_p = modtype->create("recorder", mod); int recorder_size = 1; // parameter assignments: recorder_p->par("Number_Host") = mod->par("Number_Host"); recorder_p->par("Interval") = mod->par("Record_Interval"); _readModuleParameters(recorder_p); // submodule 'rfchannel': modtype = _getModuleType("WirelessChannel"); cModule *rfchannel_p = modtype->create("rfchannel", mod); int rfchannel_size = 1; // parameter assignments: rfchannel_p->par("DecayComponent") = mod->par("DecayComponent"); rfchannel_p->par("Shadowing") = tmpval.setDoubleValue(new Expr2(mod)); rfchannel_p->par("Number_Host") = mod->par("Number_Host"); rfchannel_p->par("RF_Range") = mod->par("RF_Range"); rfchannel_p->par("Propagation_Delay") = mod->par("Propagation_Delay"); _readModuleParameters(rfchannel_p); // submodule 'sensors': modtype = _getModuleType("SensorHost"); int sensors_size = (int)(mod->par("Number_Host")); _checkModuleVectorSize(sensors_size,"sensors"); cModule **sensors_p = new cModule *[sensors_size]; for (submodindex=0; submodindex<sensors_size; submodindex++) { sensors_p[submodindex] = modtype->create("sensors", mod, sensors_size, submodindex); // parameter assignments: sensors_p[submodindex]->par("txRate") = mod->par("R_RF"); sensors_p[submodindex]->par("EnergyLevIni") = mod->par("EnergyLevIni"); sensors_p[submodindex]->par("DetPkLen") = mod->par("L_d"); sensors_p[submodindex]->par("TrackPkLen") = mod->par("L_t"); sensors_p[submodindex]->par("ACKPkLen") = mod->par("L_a"); sensors_p[submodindex]->par("RFNoise") = mod->par("RFNoise"); sensors_p[submodindex]->par("MicNoise") = mod->par("sigma_i2"); sensors_p[submodindex]->par("RFPower") = mod->par("RFPower"); sensors_p[submodindex]->par("RFThreshold") = mod->par("RFThreshold"); sensors_p[submodindex]->par("MicSample") = mod->par("Sen_N"); sensors_p[submodindex]->par("P_Activate") = mod->par("P_Activate"); sensors_p[submodindex]->par("P_Transmit") = mod->par("P_Transmit"); sensors_p[submodindex]->par("Location_X") = tmpval.setDoubleValue(new Expr3(mod)); sensors_p[submodindex]->par("Location_Y") = tmpval.setDoubleValue(new Expr4(mod)); sensors_p[submodindex]->par("P_SEN") = mod->par("P_SEN"); sensors_p[submodindex]->par("TD_MAX") = mod->par("TD_MAX"); sensors_p[submodindex]->par("Lambda_app") = tmpval.setDoubleValue(new Expr5(mod)); sensors_p[submodindex]->par("Sensing_Interval") = mod->par("T_Sen"); sensors_p[submodindex]->par("Dec_Interval") = mod->par("T_Track"); sensors_p[submodindex]->par("DecayComponent") = mod->par("A_DecayComponent"); sensors_p[submodindex]->par("Sensor_Fs") = mod->par("Sensor_Fs"); _readModuleParameters(sensors_p[submodindex]); } // this level is done -- recursively build submodules too target_p->buildInside(); recorder_p->buildInside(); rfchannel_p->buildInside(); for (submodindex=0; submodindex<sensors_size; submodindex++) sensors_p[submodindex]->buildInside(); delete [] sensors_p; }
  • PACKETS DEFINITION
Several packets, messages, and busy tones are transferred between the nodes for every event. The definition and specification of each packet include various attributes.

The RFPacket carries the following fields:
txRate          // transmission rate
sending_power   // transmitting power
receiving_power // receiving power
location_x      // X-coordinate
location_y      // Y-coordinate

The DecInfoPacket carries the following field:
Data            // fusion coefficient

The TrackInfoPacket carries the following fields:
track_x         // X-coordinate
track_y         // Y-coordinate
recordtime      // time of recording
true_x          // original X-coordinate
true_y          // original Y-coordinate

These message packets are transmitted between the nodes during the corresponding events. For example, when sensor node 17 [H1] sends its track information to node 3 [H2], the TrackInfoPacket resembles the following:

// NODE 17 SENDING TRACK_INFO TO NODE 3
double txRate          = 20000.000000
double sending_power   = 1.000000
double receiving_power = 0.048157
double location_x      = 16.652397
double location_y      = 19.143103
int    number          = 1
double track_x         = 15.996901
double track_y         = 18.059343
int    ncount          = 3
double recordtime      = 30.899702
double true_x          = 14.933409
double true_y          = 18.713991

Similarly, each packet carries different information depending on the type of packet sent or received.
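In OMNeT++, packet types like those listed above are usually described in .msg files from which C++ message classes are generated. The hand-written class below is only a sketch that collects the RFPacket and TrackInfoPacket fields listed above into one message object; the class name TrackInfoPacketSketch, the public data members, the default message name "TRACK_INFO", and the usage comment are illustrative assumptions, not the model's actual generated code.

// TrackInfoPacketSketch: hand-written stand-in for a generated TrackInfoPacket class.
#include <omnetpp.h>
using namespace omnetpp;   // needed for OMNeT++ 5 and later

class TrackInfoPacketSketch : public cMessage
{
  public:
    // RF-level attributes carried by every radio packet
    double txRate;           // transmission rate
    double sending_power;    // transmitting power
    double receiving_power;  // receiving power
    double location_x;       // sender X-coordinate
    double location_y;       // sender Y-coordinate

    // tracking payload
    double track_x;          // estimated target X-coordinate
    double track_y;          // estimated target Y-coordinate
    double recordtime;       // time of recording
    double true_x;           // original (true) X-coordinate
    double true_y;           // original (true) Y-coordinate

    explicit TrackInfoPacketSketch(const char *name = "TRACK_INFO")
        : cMessage(name),
          txRate(0), sending_power(0), receiving_power(0),
          location_x(0), location_y(0),
          track_x(0), track_y(0), recordtime(0), true_x(0), true_y(0) {}
};

// Usage sketch: a node filling in the sample values from the trace above
// before handing the packet to its MAC layer (inside a simple module):
//
//   TrackInfoPacketSketch *pkt = new TrackInfoPacketSketch();
//   pkt->txRate     = 20000.0;
//   pkt->track_x    = 15.996901;
//   pkt->track_y    = 18.059343;
//   pkt->recordtime = 30.899702;
//   send(pkt, "toMac");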