Network Sniffer


Network Sniffer is a powerful and useful monitoring tool for Windows. It runs as a Windows service, checks system status, and reports information about installed services, running services, stopped services, and available, running, and stopped drivers, among other administrative responsibilities.

It is developed with Visual Basic .NET 2003 and uses WMI (Windows Management Instrumentation). It does not start automatically, but can be made to start manually with some configuration settings. Network Sniffer not only provides a facility to monitor the system but also allows administrators to configure various aspects of it.

Network Sniffer uses the built-in Windows Management Instrumentation feature to work with the lower levels of system configuration. A system configured according to these settings will offer improved performance.

The alerting feature enables the administrator to obtain information about the system even when he or she is not at the same location as the system.

Literature Review:

Before writing about my project, I want to make clear what exactly a sniffer is.

A sniffer is a program or device that eavesdrops on a network by capturing the information moving over it. Sniffers are essentially a kind of data-interception technology. They work because Ethernet was built on the principle of sharing: almost all networks use broadcast technology, meaning that messages from one computer can be read by any other computer on that network. In practice, every system except the one the message is addressed to ignores it. However, computers can be made to accept messages that are not meant for them, and this is exactly what a sniffer does. Many of us assume that computers connected to a switch are safe from sniffing; in reality, computers connected to switches are just as much at risk as those connected to a hub.

Intended purposes of a sniffer:

  • Network administrators use them to troubleshoot problems on the network.
  • Network security engineers use them to examine security problems.
  • Developers use them to debug protocol implementations.
  • Ordinary users use them to learn about network protocol internals.

How a sniffer works:

A system connected to a Local Area Network (LAN) has two identities. One is the Media Access Control (MAC) address, which uniquely identifies a single node in a network and is stored on the network card itself. The MAC address is used by the Ethernet protocol when building frames to move information from one system to another. The other address is the Internet Protocol (IP) address, which is used by applications. The data link layer uses an Ethernet header with the MAC address of the destination machine rather than its IP address. The network layer is responsible for mapping IP network addresses to MAC addresses, as required by the data link protocol. It first looks up the MAC address of the destination machine in a table, usually called the Address Resolution Protocol (ARP) cache. If no entry is found for the IP address, an ARP request packet is broadcast to all the systems on the network. The system with that IP address responds to the source system with its MAC address. This MAC address is then added to the source machine's ARP cache and is used for all subsequent communication between the source and destination machines.
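The lookup-then-broadcast sequence above can be sketched as follows. This is a hypothetical simulation, not real ARP traffic: the `network` dictionary stands in for the hosts that would answer a broadcast, and all names are illustrative.

```python
# Minimal simulation of ARP resolution: check the local cache first,
# "broadcast" on a miss, then cache the answering host's MAC address.

class ArpCache:
    def __init__(self, network):
        self.network = network   # IP -> MAC of every live host on the wire
        self.cache = {}          # the source machine's local ARP cache

    def resolve(self, ip):
        if ip in self.cache:                 # 1. check the ARP cache first
            return self.cache[ip]
        mac = self.network.get(ip)           # 2. broadcast "who has <ip>?"
        if mac is not None:
            self.cache[ip] = mac             # 3. cache the reply for reuse
        return mac

arp = ArpCache({"192.168.0.2": "aa:bb:cc:dd:ee:02"})
print(arp.resolve("192.168.0.2"))  # miss: broadcast, then cache
print(arp.resolve("192.168.0.2"))  # hit: answered from the cache
```

Once cached, the entry is reused for every later frame to that destination, which is exactly the state an ARP-spoofing attack later poisons.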

There are two basic Ethernet scenarios, and sniffers work slightly differently in each.

Shared Ethernet:

In a shared Ethernet, all hosts are connected to the same bus and compete with one another for bandwidth. In such an environment, packets meant for one machine are received by all the others. Thus, when machine one wants to communicate with machine two, it sends a packet on the network with the destination MAC address of machine two and its own source MAC address. All the other computers on the shared Ethernet compare the frame's destination MAC address with their own. If the two don't match, the frame is quietly discarded. A machine running a sniffer breaks this rule and accepts all frames. Such a machine is said to have been put into promiscuous mode and can effectively listen to all the traffic on the network. Sniffing in a shared network is totally passive and hence very difficult to detect.
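The frame-acceptance rule above reduces to a one-line predicate. A minimal sketch, with made-up MAC addresses for illustration:

```python
# A normal adapter keeps only frames addressed to it (or broadcast);
# promiscuous mode keeps everything.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accepts(frame_dst, nic_mac, promiscuous=False):
    """Return True if the adapter passes the frame up to the OS."""
    return promiscuous or frame_dst in (nic_mac, BROADCAST)

# machine three quietly discards a frame addressed to machine two ...
print(accepts("02:00:00:00:00:02", "02:00:00:00:00:03"))                    # False
# ... unless it is running a sniffer in promiscuous mode.
print(accepts("02:00:00:00:00:02", "02:00:00:00:00:03", promiscuous=True))  # True
```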

Switched Ethernet:

An Ethernet environment in which the hosts are connected to a switch instead of a hub is called a switched Ethernet. The switch maintains a table tracking each computer's MAC address and the physical port on the switch to which that MAC address is connected, and delivers a packet destined for a particular machine accordingly. It is an intelligent device that delivers packets only to the intended computer and avoids broadcasting them to all the other machines on the network. This yields better use of the available bandwidth and improved security. For this reason, the earlier approach of putting a system into promiscuous mode to gather packets may not work. As a result, even many experienced administrators fall into the belief that switched networks are completely immune to sniffing, which is not the case.
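The forwarding table described above behaves like a learning switch. A hypothetical sketch (port numbers and MACs are invented for illustration):

```python
# A learning switch: remember which port each source MAC arrived on,
# unicast to that port when the destination is known, flood otherwise.

class Switch:
    def __init__(self, ports):
        self.ports = ports   # e.g. [1, 2, 3]
        self.table = {}      # MAC address -> physical port

    def receive(self, src_mac, in_port, dst_mac):
        self.table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.table:              # known: deliver to one port
            return [self.table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = Switch(ports=[1, 2, 3])
print(sw.receive("aa:01", 1, "aa:02"))  # destination not learned yet: flood
sw.receive("aa:02", 2, "aa:01")         # switch learns aa:02 is on port 2
print(sw.receive("aa:01", 1, "aa:02"))  # now delivered to port 2 only
```

Because the sniffer's port never receives other hosts' unicast frames, promiscuous mode alone sees nothing, which is why the attacks below target this table instead.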

Although a switch is more secure than a hub, there are a couple of ways to sniff on a switch:

ARP Spoofing:

I noted earlier how ARP is used to obtain the MAC address of the destination machine one wishes to communicate with. ARP is stateless: one can send an ARP reply even though none has been requested, and such a reply will be accepted. So, if one wants to sniff the traffic originating from machine one, one can ARP-spoof the gateway of the network. The ARP cache of machine one will then have an incorrect entry for the gateway and is said to be poisoned. From that point on, all the traffic destined for the gateway will pass through one's machine.
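The unsolicited reply described above is an ordinary 28-byte ARP payload with the sender fields lying. A sketch that only builds the bytes (actually emitting it would need raw-socket privileges; the addresses are invented):

```python
# Build an unsolicited ARP reply claiming "claimed_ip is at attacker_mac".
# Because ARP is stateless, a victim caches it without ever having asked.

import socket
import struct

def spoofed_arp_reply(attacker_mac, claimed_ip, victim_mac, victim_ip):
    mac = lambda m: bytes.fromhex(m.replace(":", ""))
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1, 0x0800, 6, 4,     # Ethernet hardware type, IPv4 protocol type
        2,                   # opcode 2 = reply (sent without any request)
        mac(attacker_mac), socket.inet_aton(claimed_ip),  # lying sender fields
        mac(victim_mac), socket.inet_aton(victim_ip))     # target fields

pkt = spoofed_arp_reply("aa:bb:cc:dd:ee:ff", "192.168.0.1",
                        "02:00:00:00:00:01", "192.168.0.10")
print(len(pkt))        # 28: a complete ARP payload
print(pkt[6:8].hex())  # 0002: the reply opcode
```

Once the victim caches the forged "gateway IP → attacker MAC" pair, the attacker forwards the traffic onward so the victim notices nothing.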

MAC flooding:

Switches maintain a translation table that maps MAC addresses to the physical ports on the switch, which lets them route packets from one machine to another very efficiently. The switch has a limited amount of memory for this task. MAC flooding exploits this limitation by bombarding the switch with forged MAC addresses until the switch can't keep up. The switch then enters what is known as 'fail-open mode', in which it starts acting as a hub, broadcasting packets to all the machines on the network. Once this occurs, sniffing can be performed easily.
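The failure mode above can be simulated with a bounded table. The capacity and names here are made up for illustration; real switches have thousands of entries and vary in how they fail:

```python
# Once the switch's limited MAC table is full of forged addresses it
# "fails open" and floods every frame like a hub.

class FloodableSwitch:
    def __init__(self, cam_size):
        self.cam_size = cam_size   # limited memory for the MAC table
        self.table = {}
        self.fail_open = False

    def learn(self, mac, port):
        if mac not in self.table and len(self.table) >= self.cam_size:
            self.fail_open = True  # table exhausted: start acting as a hub
        else:
            self.table[mac] = port

sw = FloodableSwitch(cam_size=4)
for i in range(100):               # macof-style flood of fake source MACs
    sw.learn("de:ad:be:ef:00:%02x" % i, port=7)
print(sw.fail_open)   # True
```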

How a sniffer can be detected:

A sniffer is normally passive; it just collects data. This makes sniffers very difficult to detect, particularly when they run on a shared Ethernet. Detection is marginally easier when the sniffer is operating on a switched Ethernet segment, since installing it on a system generates a small amount of traffic. Some methods to detect sniffers are:

Ping method:

The method used here is to send a ping request with the IP address of the suspect machine but not its MAC address. Ideally, no machine should see this packet, since each Ethernet adapter will reject it because it does not match its MAC address. But if the suspect machine is running a sniffer, it will respond, since it does not bother rejecting packets with a different destination MAC address.
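The test above combines the frame filter and the IP check. A hedged sketch with invented addresses:

```python
# The probe carries the suspect's IP but a deliberately bogus destination
# MAC. A normal stack never sees the frame; a promiscuous host accepts it
# and answers the ping, betraying itself.

def replies_to_probe(host_mac, host_ip, promiscuous, probe_dst_mac, probe_ip):
    frame_seen = promiscuous or probe_dst_mac == host_mac
    return frame_seen and probe_ip == host_ip

bogus_mac = "00:00:de:ad:00:01"
print(replies_to_probe("02:aa", "10.0.0.5", False, bogus_mac, "10.0.0.5"))  # False
print(replies_to_probe("02:aa", "10.0.0.5", True,  bogus_mac, "10.0.0.5"))  # True
```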

ARP method:

A system caches ARPs. So what one can do is send a non-broadcast ARP. A machine in promiscuous mode will cache one's ARP address. Next, one sends a broadcast ping packet with one's own IP but a different MAC address. Only a machine that has one's correct MAC address from the sniffed ARP frame will be able to respond to the broadcast ping request.
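The two steps above can be simulated as follows. This is an illustrative model, not real traffic; the class and all addresses are hypothetical:

```python
# Step 1 sends a non-broadcast ARP that only a promiscuous card ever
# caches; step 2 broadcasts a ping whose reply requires the MAC learned
# in step 1, so only the sniffing host can answer.

class Host:
    def __init__(self, mac, promiscuous=False):
        self.mac = mac
        self.promiscuous = promiscuous
        self.arp_cache = {}              # tester IP -> tester MAC

    def see_arp(self, frame_dst_mac, tester_ip, tester_mac):
        # step 1: non-broadcast ARP; a normal card drops it unseen
        if self.promiscuous or frame_dst_mac == self.mac:
            self.arp_cache[tester_ip] = tester_mac

    def answers_broadcast_ping(self, tester_ip):
        # step 2: only a host that sniffed step 1 can address the reply
        return tester_ip in self.arp_cache

normal, sniffer = Host("02:01"), Host("02:02", promiscuous=True)
for h in (normal, sniffer):
    h.see_arp("00:00:00:00:00:99", "10.0.0.9", "02:aa")  # sent to a bogus MAC
print(normal.answers_broadcast_ping("10.0.0.9"))   # False
print(sniffer.answers_broadcast_ping("10.0.0.9"))  # True
```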

Latency method:

This method is based on the assumption that most sniffers do some parsing. A large amount of data is sent on the network, and the suspect machine is pinged before and during the flooding. If the system is in promiscuous mode, it will parse the data, increasing the load on it, so it will take slightly longer to respond to the ping packet. This difference in response time can be used as an indicator of whether a machine is in promiscuous mode or not. A point worth noting is that packets may also be delayed simply because of the load on the wire, resulting in false positives.

ARP watch:

As mentioned above, one way to sniff on a switched network is to ARP-spoof the gateway. A utility called arpwatch can be used to monitor the ARP cache of a machine and spot duplicated or changed mappings. If one is found, it can trigger an alarm and lead to the detection of a sniffer. On a network using DHCP, however, this can trigger many false alarms, since IP-to-MAC mappings legitimately change as leases are reassigned. A sensible configuration change is to increase the DHCP lease time: a user who comes back after, say, a week away will then receive the same IP address as before, and the chance of a false alarm is greatly reduced.
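The core of such a monitor is small. A sketch of the idea (not arpwatch itself; the event format is invented for illustration):

```python
# Remember each IP's MAC and raise an alert when a mapping changes --
# the signature left behind when a gateway entry is poisoned.

def watch(arp_events):
    seen, alerts = {}, []
    for ip, mac in arp_events:
        if ip in seen and seen[ip] != mac:
            alerts.append((ip, seen[ip], mac))   # possible ARP spoofing
        seen[ip] = mac
    return alerts

events = [("192.168.0.1", "02:gw"), ("192.168.0.5", "02:05"),
          ("192.168.0.1", "de:ad")]              # gateway MAC suddenly changes
print(watch(events))  # [('192.168.0.1', '02:gw', 'de:ad')]
```

On a DHCP network, seeding `seen` from the lease table (or lengthening leases, as suggested above) keeps legitimate reassignments from raising alerts.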

Using IDS:

Some intrusion detection systems monitor for ARP spoofing on the network. The open-source IDS Snort, for instance, has an ARP-spoof preprocessor that allows it to record packets on the network with spoofed ARP addresses.

Sniffer Tools:

Anyone who wants to spend time on the internet securely is well advised to investigate tools they have not used before; in doing so they will, as I did, discover the most powerful tools available. The top tools I found in this research are the following:


Wireshark:

Sniffing the glue that holds the web together. Wireshark is a superb open-source network protocol analyzer for both Windows and Unix. It allows one to capture data from a live network and examine it, browsing down to exactly the level of packet detail one needs. It has powerful features, including a rich display-filter language and the ability to view the reconstructed stream of a TCP session. It supports hundreds of protocols and media types. A tcpdump-like console version named tethereal is included. One word of caution: Ethereal, as Wireshark was formerly known, has suffered from dozens of remotely exploitable security holes.


Kismet:

A powerful wireless sniffer. Kismet is a console-based 802.11 layer-2 wireless network detector, sniffer, and intrusion detection system. It identifies networks by passive sniffing and can even de-cloak hidden networks if they are in use. It can automatically detect network IP blocks by sniffing TCP, UDP, ARP, and DHCP packets, log traffic in Wireshark/tcpdump-compatible format, and even plot detected networks and estimated ranges on downloaded maps.

TCP Dump:

The classic sniffer for network monitoring and data acquisition. Tcpdump is the venerable IP sniffer that was widely used before Ethereal (Wireshark) came on the scene, and many users continue to use it regularly. It may not have the bells and whistles that Wireshark has, but it does the job well and with fewer security holes.


DSniff:

A suite of powerful network auditing and penetration-testing tools. This popular and well-engineered suite by Dug Song includes many tools: dsniff, filesnarf, mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data, while arpspoof, dnsspoof, and macof facilitate the interception of network traffic normally unavailable to an attacker. Man-in-the-middle attacks against redirected SSH and HTTPS sessions are implemented by sshmitm and webmitm, exploiting weak bindings in ad-hoc PKI. It is a great toolset that covers pretty much all of one's password-sniffing needs.


Ettercap:

Ettercap is a terminal-based network interceptor for Ethernet LANs. It supports active and passive dissection of numerous protocols. Data injection into an established connection and filtering on the fly are also possible, keeping the connection synchronized. Many sniffing modes are implemented to give the user a powerful and complete sniffing suite. Plugins are supported. It can check whether the user is on a switched LAN or not, and it uses OS fingerprints to tell the user the geometry of the LAN.


Ntop:

A network traffic usage monitor. Ntop shows network usage in a way similar to what top does for processes. In interactive mode, it displays the network status on the user's terminal. In web mode, it acts as a web server, creating an HTML dump of the network status. It sports a NetFlow/sFlow emitter and collector, an HTTP-based client interface for creating ntop-centric monitoring applications, and RRD for persistently storing traffic statistics.


Purpose of the project:

Here I set out the main purposes of this project:

  • To offer a single environment from which an administrator can interact with the system resources within the network.
  • To report in detail the number of services running on the system.
  • To offer improved support for the resources of the system.
  • To offer interaction with system components such as network settings, OS settings, services, and processes.
  • To offer the convenience of sending the details of the system settings to an individual by email.

Problems in the existing system:

  • The present facility is provided by Microsoft Corporation; to understand its methodology, an individual must interact with the configuration of the system directly.
  • It does not inform the user of any errors produced by events, and there is no single interface to check all the events.
  • System resources cannot be freed periodically based on parameters.
  • If there is any degradation in system performance, the administrator receives no notification.
  • The present system does not allow remote operation on the resources of one system from another.

Solutions to these problems:

  • To obtain remote access to the resources, the administrator performs the necessary settings from the network tool named Network Sniffer.
  • Three different views are provided to show the present status of the services, drivers, and so on.
  • A single application can be used to configure multiple versions of the operating system; this is made possible by allowing the application settings to be configured and showing the appropriate panel to the administrator.
  • Components can be updated by accessing the components of the system as well as the resources offered on websites.
  • The settings are stored in the system folder so that they can be read and written directly from the folder.

Scope of the project:

Network Sniffer is an application that captures the information travelling over the network within the hub. The present scope of my project is to check the state of the processes, services, and drivers, since the existing system does not offer any methodology to deal with these aspects of management.

Hardware and software specifications:


Hardware:

  • PIII 500 MHz or above
  • 128 MB RAM
  • 100 MB free Hard disk space
  • Colour monitor


Software:

  • Windows NT 4 / 2000 / XP
  • Visual Studio .NET 2008 Enterprise Edition
  • .NET Framework (minimum required for deployment)

Feasibility Study:

A feasibility study is carried out once the problem is clearly understood. It is a high-level, capsule version of the entire system analysis and design process. The main criterion is to determine, in a short span of time and at low expense, how the problem can be solved. The purpose of the feasibility study is not to solve the problem but to determine whether the problem is worth solving.

The system is tested for feasibility on the following three points:

  • Technical Feasibility
  • Economical Feasibility
  • Operational Feasibility

Technical Feasibility:

The proposed system can be developed with existing technology. It is intended to implement the proposed system with ASP.NET and C#, with SQL Server 2000/2005 as the database.

The group has already acquired versions of Windows XP and Windows 2000 Server with Internet Information Server and databases. It is apparent that the required hardware and software are available for the development and implementation of the proposed system.

Economical Feasibility:

Economical feasibility is an assessment of development cost weighed against the ultimate income or benefit derived from the developed system. Economic justification involves a broad range of concerns, including cost-benefit analysis, long-term corporate income strategies, and the cost of the resources needed for development.

The group already has the required hardware for implementing the proposed system. It is not necessary to spend money on an internet connection, as the organization already has a host of servers and satellite links, so the group does not need to invest in any additional spending.

Operational Feasibility:

This feasibility check asks whether the system will work with a minimum of problems once it is developed and installed. The technical staff has sufficient knowledge of the tools in use, and users need only worry about accessing the site and searching or browsing it. It is therefore concluded that the system is operationally feasible.


Microsoft .NET Framework

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the internet. The .NET Framework is designed to fulfil the following objectives:

To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but internet-distributed, or executed remotely.

To provide a code-execution environment that minimizes software deployment and versioning conflicts.

To provide a code-execution environment that eliminates the performance problems of interpreted environments.

To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and web-based applications.

To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components:

The common language runtime.

The .NET Framework class library.

The common language runtime is the foundation of the .NET Framework. One can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.

The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML web services. Internet Explorer is an example of an unmanaged application that can host the runtime. Using Internet Explorer to host the runtime lets one embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

The following illustration shows the relationship of the common language runtime and the class library to applications and to the overall system. It also shows how managed code operates within a larger architecture.

Features of the common language runtime:

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime. With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the internet, an enterprise network, or the local machine). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable embedded in a web page can play an animation on screen yet cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate internet-deployed software to be exceptionally feature-rich.

The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer in use. This automatic memory management resolves the two most common application errors: memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make its features available to existing code written in that language, greatly easing the migration process for existing applications. While the runtime is designed for the software of the future, it also supports the software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the machine on which it is executing. Meanwhile, the memory manager removes the possibility of fragmented memory and increases memory locality-of-reference to further enhance performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft SQL Server and Internet Information Services (IIS). This infrastructure enables one to use managed code to write business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework class library:

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object-oriented, providing types from which one's own managed code can derive functionality. This not only makes the .NET Framework types easy to use but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that one can use to develop one's own collection classes. Such collection classes will blend seamlessly with the classes in the .NET Framework.

As one would expect from an object-oriented class library, the .NET Framework types enable a wide range of common programming tasks, including string management, data collection, database connectivity, and file access. Beyond these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, one can use the .NET Framework to develop the following types of applications and services:

  • Console applications
  • Scripted or hosted applications
  • Windows GUI applications (Windows Forms)
  • ASP.NET applications
  • XML web services
  • Windows services

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If one writes an ASP.NET Web Form application, one can use the Web Forms classes.

Client Application Development:

Client applications are the closest to the traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include programs such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they access local resources such as the file system and devices such as printers. Another kind of client application is the traditional ActiveX control used on the internet in a web page. This kind is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.

In the past, developers created such applications using C or C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft Visual Basic. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. One can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust the visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in those cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be deployed safely through the web. Such applications can implement the features of a local application while being deployed like a web page.

Visual Basic .NET

Introduction to windows forms (Visual Basic .NET):

Windows Forms is the new platform for Microsoft Windows application development, based on the .NET Framework. It provides a clear, object-oriented, extensible set of classes that enable one to develop rich Windows applications. Additionally, Windows Forms can act as the local user interface in a multi-tier distributed solution. Windows Forms applications can be written in any language supported by the common language runtime.

What Is a Form?

A form is a piece of screen real estate, usually rectangular, that one can use to present information to the user and to accept input from the user. Forms can be standard windows, multiple document interface (MDI) windows, dialog boxes, or display surfaces for graphical routines. The easiest way to define the user interface for a form is to place controls on its surface. Forms are objects that expose properties which define their appearance, methods which define their behavior, and events which define their interaction with the user. By setting the properties of the form and writing code to respond to its events, you customize the object to meet the requirements of your application.

As with all objects in the .NET Framework, forms are instances of classes. The form you create with the Windows Forms Designer is a class, and when you display an instance of the form at run time, this class is the template used to create the form. The framework also allows you to inherit from existing forms to add functionality or modify existing behavior. When I add a form to my project, I can choose whether it inherits from the Form class provided by the framework or from a form I have created before.

Additionally, forms are controls, because they inherit from the Control class. Within a Windows Forms project, the form is the primary vehicle for user interaction. By combining different sets of controls and writing code, you can elicit information from the user and respond to it, work with existing stores of data, and query and write back to the file system and registry on the user's local computer.

Although a form can be created entirely in the Code Editor, it is easier to use the Windows Forms Designer to create and modify forms.

Some of the advantages of using Windows Forms are the following:

Simplicity and power:

Windows Forms is a programming model for developing Windows applications that combines the simplicity of the Visual Basic 6.0 programming model with the power and flexibility of the common language runtime.

Lower total cost of ownership:

Windows Forms takes advantage of the versioning and deployment features of the common language runtime to offer reduced deployment costs and greater application robustness over time. This significantly lowers the maintenance costs for applications written in Windows Forms.

Architecture for controls:

Windows Forms offers an architecture for controls and control containers that is based on concrete implementations of the control and container classes. This significantly reduces control-container interoperability issues.


Security:

Windows Forms takes full advantage of the security features of the common language runtime. This means that Windows Forms can be used to implement everything from an untrusted control running in the browser to a fully trusted application installed on a user's hard disk.

XML Web services support:

Windows Forms provides full support for quickly and easily connecting to XML Web services.

Rich graphics:

Windows Forms is one of the first ship vehicles for GDI+, a new version of the Windows Graphics Device Interface (GDI) that supports alpha blending, texture brushes, advanced transforms, rich text support, and more.

Flexible controls:

Windows Forms provides a rich set of controls that encompasses all of the controls offered by Windows. These controls also offer new features, such as "flat look" styles for buttons, radio buttons, and check boxes.

Data awareness:

Windows Forms offers full support for the ADO.NET data model.

ActiveX control support:

Windows Forms offers full support for ActiveX controls. You can easily host ActiveX controls in a Windows Forms application, and you can also host a Windows Forms control as an ActiveX control.


Licensing:

Windows Forms takes advantage of the common language runtime's enhanced licensing model.


Printing:

Windows Forms offers a printing framework that enables applications to produce comprehensive reports.


Accessibility:

Windows Forms controls implement the interfaces defined by Microsoft Active Accessibility (MSAA), which makes it simple to build applications that support accessibility aids, such as screen readers.

Design-time support:

Windows Forms takes full advantage of the metadata and component model features offered by the common language runtime to offer thorough design-time support for both control users and control implementers.

Windows Management instrumentation:

Windows Management Instrumentation is a scalable management infrastructure that presents a single consistent, standards-based, extensible, object-oriented interface. WMI provides a standard way to interact with system management information and the underlying WMI APIs. WMI is used primarily by system management application developers and administrators to access and manipulate system management information.

WMI can be used to build tools that organize and manage system information so that administrators can monitor system activity more closely. For instance, you could develop an application using WMI that pages an administrator when a web server crashes.

WMI Architecture:

The WMI architecture is built from the following components:

  • Providers
  • Consumers
  • Common Information Model (CIM) repository
  • Common Information Model Object Manager (CIMOM)

These components are built on the CIM. The CIM is object-oriented, independent of the operating system, and independent of the application. It provides an extensible schema for classifying objects in a managed computing environment. The CIM describes objects in the managed environment through classes. These classes contain methods to define behaviour and properties to define data. Objects described by the CIM include applications, networks, printers, and drivers.

A provider is an intermediary between the system being managed and the CIMOM. A provider is broadly analogous to an event source in Visual Studio Analyzer. Its main role is to extract management information from the system using whatever interfaces the system presents for management. Through various COM interfaces, providers supply data to WMI, handle requests from system management applications, and generate event notifications. The provider also maps the management information and interfaces to the object classes that are defined and stored in the CIM repository.

Consumers are system management applications, such as Microsoft Systems Management Server (SMS), or third-party applications or scripts. Consumers need to know only the classes of the objects they want to retrieve data about; how the information is obtained or stored does not concern them. Consumers can get data about the system, operating system, applications, and devices, as well as data available through other management protocols.

The CIM repository is an object database in which defined objects, such as static class definitions and the instances used to access and manipulate system management information, are stored. The CIM repository is managed by the CIMOM, which acts as a broker for object requests. The CIMOM tracks the available classes and determines which provider is responsible for supplying instances of those classes. The CIMOM and the CIM repository are represented by a system service called WinMgmt and accessed through a set of COM interfaces.

The following picture shows how these components work together.

When a request for management information comes from a WMI consumer to the CIMOM, the CIMOM evaluates the request, determines which provider has the data, and returns the information to the consumer. The consumer receives only the information it asked for; it never knows the exact source of the data or anything about how it was retrieved from the underlying API.
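The broker pattern described above can be sketched in a few lines. This is a toy Python illustration, not the actual WMI implementation: the `Cimom` class, the `Win32_Service` class name, and the provider function are stand-ins chosen to mirror the text, and the point is only that the consumer asks the broker for a class and never sees which provider supplied the data.

```python
# Toy sketch of the CIMOM broker pattern: consumers request object
# classes; the broker looks up which provider is responsible for the
# class and forwards the request. All names here are illustrative.

class Cimom:
    def __init__(self):
        self._providers = {}          # class name -> provider callable

    def register_provider(self, cim_class, provider):
        self._providers[cim_class] = provider

    def query(self, cim_class):
        # The consumer never learns which provider supplied the data.
        provider = self._providers[cim_class]
        return provider()

def service_provider():
    """Stand-in for a real provider that reads live system state."""
    return [{"Name": "Spooler", "State": "Running"}]

cimom = Cimom()
cimom.register_provider("Win32_Service", service_provider)
print(cimom.query("Win32_Service")[0]["State"])  # Running
```

In the real architecture the registration step corresponds to the provider information held in the CIM repository, and the dispatch step is performed by the WinMgmt service.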

Application Profiling:

Windows Management Instrumentation (WMI) is an industry initiative to develop a standard technology for accessing management information in an enterprise environment. This data includes the state of system memory, the inventory of currently installed client applications, and other information about client status.

WMI application profiling is the instrumentation of events that are generated in the WMI infrastructure. You can use WMI application profiling to generate and collect events from the WMI infrastructure and then analyze them in Visual Studio Analyzer. Visual Studio Analyzer and WMI interoperate, so event sources can generate application profiling events without using two different infrastructures.

Introduction to web forms (C#.NET)

Web Forms is the new path for Microsoft web application development, based on the .NET Framework. This framework provides a clear, object-oriented, extensible set of classes that enables you to build rich web applications. In addition, Web Forms can act as the local user interface in a multi-tier distributed solution. Web Forms is a framework for building web client applications that uses the common language runtime. Web Forms applications can be written in any language that the common language runtime supports.

What is a web form?

Web Forms changes web programming by introducing a new programming model built around server-side controls: a model in which controls render their own UIs by generating HTML to return to clients, and fire events that are handled by server-side code. Because all the action happens on the web server, almost any browser can run a Web Forms app. And thanks to Visual Studio .NET, building a Web Forms app feels much like using Visual Basic: drop a control onto a form, then write an event handler. This section describes the Web Forms programming model, how it fits with ASP.NET, and introduces the new DataGrid, TextBox, and other classes in the .NET Framework class library.

As with all objects in the .NET Framework, forms are instances of classes. The form you create with the Web Forms Designer is a class, and when you display an instance of the form at run time, that class is the pattern used to produce the form. The framework permits you to inherit from existing forms to add functionality or modify existing behaviour. When you add a form, you can choose whether it inherits from the Form class provided by the framework or from a form you created previously.

Further, forms are controls, since they inherit from the Control class. Within a Web Forms project, the form is the primary vehicle for user interaction. By adding various sets of controls and writing code, you can elicit data from the user and respond to it, work with existing stored data, and query and write back to the file system and registry on the user's local computer.

Although the form can be created entirely in the code editor, it is simpler to use the Web Forms Designer to create and adjust forms.

Advantages of Web Forms include the following:

Simplicity and power:

Web Forms is a programming model for building web applications that combines the simplicity of the Visual Basic programming model with the power and flexibility of the common language runtime.

Architecture for controls:

Web Forms provides an architecture for controls and control containers that is based on a solid implementation of the control and container classes. This appreciably lessens control-container interoperability problems.


Security:

Web Forms takes full advantage of the security features of the common language runtime. This means that Web Forms can be used to implement everything from an untrusted control running in the browser to a fully trusted application installed on a user's hard disk.

XML Web services support:

Web Forms provides full support for quickly and easily connecting to XML Web services.

Rich graphics:

Web Forms is among the first ship vehicles for GDI+, a new version of the Windows Graphics Device Interface (GDI) that supports alpha blending, texture brushes, advanced transforms, rich text support, and more.

Flexible controls:

Web Forms provides a rich set of controls that includes all of the controls offered by the web platform. These controls also offer new features, such as "flat look" styles for buttons, radio buttons, and check boxes.

Data awareness:

Web Forms provides full support for the ADO.NET data model.

ActiveX control support:

Web Forms provides full support for ActiveX controls. You can easily host ActiveX controls in a Web Forms application, and you can also host a Web Forms control as an ActiveX control.


Licensing:

Web Forms takes advantage of the common language runtime's enhanced licensing model.


Printing:

Web Forms provides a printing framework that enables applications to produce comprehensive reports.


Accessibility:

Web Forms controls implement the interfaces defined by Microsoft Active Accessibility (MSAA), which makes it simpler to build applications that support accessibility aids, such as screen readers.

Design-time support:

Web Forms takes full advantage of the metadata and component model features offered by the common language runtime to provide thorough design-time support for both control users and control implementers.

ActiveX Data Objects .NET (ADO.NET)

ADO.NET overview

ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for building scalable applications. It was designed specifically for the web, with scalability, statelessness, and XML in mind.

ADO.NET retains some ADO objects, such as the Connection and Command objects, and also introduces new ones. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data architectures is that there is an object, the DataSet, that is separate and distinct from any data store. Because of this, the DataSet functions as a standalone entity. You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much as in a database, there are tables, columns, relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then it connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered applications more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the centre of this approach is the DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store.
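The fill-edit-reconcile cycle described above can be illustrated outside .NET. The following is a minimal Python/sqlite3 sketch of the same idea, not the project's actual VB.NET code: rows are copied into a plain in-memory "dataset", edited while no connection is open, and the edits are pushed back afterwards. The `services` table and file names are illustrative.

```python
# Sketch of the disconnected DataAdapter pattern: fill, edit offline,
# then reconnect and reconcile. Table/column names are made up.
import os
import sqlite3
import tempfile

DB = os.path.join(tempfile.mkdtemp(), "demo.db")

def seed():
    """Create and populate the store (stands in for an existing DB)."""
    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE services (name TEXT, state TEXT)")
    conn.execute("INSERT INTO services VALUES ('Spooler', 'Stopped')")
    conn.commit()
    conn.close()

def fill():
    """Connect, copy rows into plain Python structures, disconnect."""
    conn = sqlite3.connect(DB)
    rows = [{"name": n, "state": s}
            for n, s in conn.execute("SELECT name, state FROM services")]
    conn.close()                     # the "dataset" is now disconnected
    return rows

def update(rows):
    """Reconnect and push the offline edits back to the store."""
    conn = sqlite3.connect(DB)
    for row in rows:
        conn.execute("UPDATE services SET state = ? WHERE name = ?",
                     (row["state"], row["name"]))
    conn.commit()
    conn.close()

seed()
dataset = fill()
dataset[0]["state"] = "Running"      # edit made with no connection open
update(dataset)
print(fill()[0]["state"])            # Running
```

The key property mirrored here is that between `fill()` and `update()` no database connection is held, which is what makes the approach scale for multi-tiered applications.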

While the DataSet has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the DataSet to and from data stores. The OLE DB and MS Access .NET data providers that are part of the .NET Framework offer four basic objects: the Command, Connection, DataReader, and DataAdapter.

The following sections introduce some objects that have evolved and some that are new:


Connections:

For connecting to, and managing transactions against, a database.


Commands:

For issuing SQL commands against a database.

Data readers:

For reading a forward-only stream of data records from a Microsoft Access data source.

Data Sets:

For storing, remoting, and programming against flat data, XML data, and relational data.

Data Adapters:

For pushing data into a DataSet, and reconciling data against a database.

When handling connections to a database, there are two options: the MS Access .NET data provider and the OLE DB .NET data provider. These examples use the MS Access .NET data provider, which talks directly to MS Access. The OLE DB .NET data provider can be used to talk to any OLE DB provider.


Connections:

Connections are used to "talk to" databases and are represented by provider-specific classes such as SqlConnection. Commands travel over connections, and result sets are returned in the form of streams that can be read by a DataReader object or pushed into a DataSet object.


Commands:

Commands contain the information that is submitted to a database and are represented by provider-specific classes such as SqlCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values, as part of your command syntax. The example below shows how to use an INSERT statement against the Northwind database.
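The Northwind INSERT example referred to above is not reproduced in this copy of the document. As a hedged stand-in, the sketch below shows the same idea, a parameterized INSERT, using Python's sqlite3; the `Customers` table and its columns are illustrative and not the actual Northwind schema.

```python
# Illustrative parameterized INSERT (sqlite3 analogue of an ADO.NET
# command with parameters). Table and values are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID TEXT, CompanyName TEXT)")

# Placeholders keep the data out of the SQL text, like command
# parameters in ADO.NET, which also guards against SQL injection.
conn.execute("INSERT INTO Customers (CustomerID, CompanyName) VALUES (?, ?)",
             ("ALFKI", "Alfreds Futterkiste"))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM Customers").fetchone()[0]
print(count)  # 1
```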

Data Readers

The DataReader object is somewhat synonymous with a read-only, forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page.
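The forward-only idea can be sketched with a database cursor: rows are streamed once, front to back, rather than loaded into a cache. This Python/sqlite3 illustration is an analogue of the DataReader pattern, not .NET code; the `results` table is made up.

```python
# Sketch of a forward-only reader: each row is visited exactly once as
# the cursor advances; nothing is buffered as a whole recordset.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (term TEXT)")
conn.executemany("INSERT INTO results VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])

reader = conn.execute("SELECT term FROM results ORDER BY term")
seen = [row[0] for row in reader]   # stream the rows front to back
print(seen)                         # ['alpha', 'beta', 'gamma']
```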

Datasets and Data adapters


The DataSet object is analogous to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, although a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, code, or user input can all be placed into DataSet objects. Then, as changes are made to the DataSet, they can be tracked and verified before the source data is updated. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter to update the original data source.
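The change-tracking step can be pictured with a toy version of the GetChanges idea: compare the edited rows with a snapshot taken at fill time, and emit only the rows that differ, so a later update step touches just the changed data. This is an illustrative Python sketch, not the DataSet's actual implementation.

```python
# Toy GetChanges: diff the edited rows against a snapshot and keep
# only the changed rows. Row contents are illustrative.
import copy

original = [{"id": 1, "state": "Stopped"}, {"id": 2, "state": "Running"}]
dataset = copy.deepcopy(original)   # snapshot taken at "fill" time

dataset[0]["state"] = "Running"     # offline edit

def get_changes(before, after):
    """Return only the rows that differ from the snapshot."""
    return [row for row, old in zip(after, before) if row != old]

changes = get_changes(original, dataset)
print(changes)                      # [{'id': 1, 'state': 'Running'}]
```

An update step would then apply only `changes` to the data source, rather than rewriting every row.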

The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and statement completion.

DataAdapters (OLEDB/SQL)

The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, such as Microsoft Access, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.

The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a select statement. However, this run-time generation requires an extra round-trip to the server in order to gather required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance.

  • ADO.NET is the next evolution of ADO for the .Net Framework.
  • ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.
  • ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
  • There is a lot more information about ADO.NET in the documentation.
  • Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don’t need to first put data into a DataSet in order to insert, update, or delete it.
  • Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships


The two design objectives continually sought by developers are reliability and maintainability.

Reliable System

There are two levels of reliability. The first is meeting the right requirements; a careful and thorough systems study is needed to satisfy this aspect of reliability. The second level involves the actual working system delivered to the user; at this level, system reliability is interwoven with software engineering and development. There are three approaches to reliability:

  • Error avoidance: prevents errors from occurring in the software.
  • Error detection and correction: errors are recognized when they are encountered and corrected, so that the system does not fail.
  • Error tolerance: errors are recognized when they occur, but the system keeps running, either in a degraded mode or by applying values that instruct the system to continue processing.


The key to reducing the need for maintenance is to do the following essential tasks well:

  • Define user requirements more accurately during system development.
  • Assemble better systems documentation.
  • Use more effective methods for designing, processing, logging, and communicating information with project team members.
  • Make better use of existing tools and techniques.
  • Manage the system engineering process effectively.

Output Design:

One of the most important aspects of an information system for the user is the output the system produces. If the output is of poor quality, the entire system may appear unnecessary, leading users to avoid it and possibly causing it to fail. Output design should therefore proceed in an organized, well-thought-out manner. The right output must be developed, with each output element designed so that people will find the system easy to use effectively.

The term output applies to information produced by an information system, whether printed or displayed. While designing the output, we should identify the specific outputs needed to meet the information requirements, select a method to present the information, and create the documents, reports, or other formats that carry the information produced by the system.

Types of output:

Whether the output is a formatted report or a simple listing of the contents of a file, a computer process will produce it.

  • A Document
  • A Message
  • Retrieval from a data store
  • Transmission from a process or system activity
  • Directly from an output source

Layout Design:

It is an arrangement of items on the output medium. The layouts are built as mock-ups of the actual reports or documents, as they will appear after the system is in operation. The output layout has been designed to cover the required information. The outputs are presented in the appendix.

Input design and control:

Input specifications describe the manner in which data enter the system for processing. Good input design ensures the reliability of the system and produces results from accurate data; poor design can result in the production of erroneous information. The input design also determines whether the user can interact efficiently with the system.

Objectives of input design:

Input design consists of developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing, and data entry, the activity of putting the data into the computer for processing. The five objectives of input design are:

  • Controlling the amount of input
  • Avoiding delay
  • Avoiding error in data
  • Avoiding extra steps
  • Keeping the process simple

Controlling the amount of input:

Data preparation and data entry operations depend on people. Because labour costs are high, the cost of preparing and entering data is also high, so reducing data requirements reduces expense. Reducing input requirements also speeds up the entire process, from data capture through processing to the delivery of results to users.

Avoiding delay:

The processing delays resulting from data preparation or data entry operations are called bottlenecks. Avoiding bottlenecks should be one objective of input design.

Avoiding errors:

Input validation controls the errors in the input data.
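A small sketch of the input-validation idea: reject bad records at entry time, before erroneous data can reach processing. The field names and rules below are illustrative, not taken from the project's actual forms.

```python
# Illustrative record validator: returns a list of error messages,
# empty when the record is acceptable. Rules are made up for the demo.

def validate(record):
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    if not str(record.get("port", "")).isdigit():
        errors.append("port must be a number")
    elif not (0 < int(record["port"]) < 65536):
        errors.append("port out of range")
    return errors

print(validate({"name": "Spooler", "port": "135"}))   # []
print(validate({"name": "", "port": "99999"}))        # two error messages
```

Rejecting the record with a message, rather than storing it, is what keeps the downstream processing free of erroneous data.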

Avoiding extra steps:

The designer should avoid input designs that cause extra steps in processing. Saving or adding a single step in a large number of transactions saves, or costs, a lot of processing time.

Keeping process simple:

If there are too many controls, people may find the system difficult to use. The best-designed system fits the people who use it in a way that is comfortable for them.


Normalization:

Normalization is the process of converting a relation to a standard form. It is used to handle problems that can arise from data redundancy, i.e. repetition of data in the database, to maintain data integrity, and to handle problems that can arise from insertion, update, and deletion anomalies.

Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity. To do this we use normal forms, i.e. rules for structuring relations.

Insertion anomaly: Inability to add data to the database due to absence of other data.

Deletion anomaly: Unintended loss of data due to deletion of other data.

Update anomaly: Data inconsistency resulting from data redundancy and partial update

Normal Forms: These are the rules for structuring relations that eliminate anomalies.

First Normal Form:

A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation. By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.
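The first-normal-form rule above can be shown concretely: a row holding a repeating group (here, a list of phone numbers) is split into atomic rows. The employee/phone attributes are illustrative, not from the project's schema.

```python
# Sketch of normalizing a repeating group into first normal form:
# one row per atomic value instead of a set-valued attribute.

unnormalized = {"emp_id": 7, "phones": ["555-0101", "555-0102"]}

first_normal_form = [
    {"emp_id": unnormalized["emp_id"], "phone": p}
    for p in unnormalized["phones"]
]
print(first_normal_form)
# [{'emp_id': 7, 'phone': '555-0101'}, {'emp_id': 7, 'phone': '555-0102'}]
```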

Second Normal Form:

A relation is said to be in second normal form if it is in first normal form and satisfies any one of the following rules:

  • The primary key is not a composite primary key.
  • No non-key attributes are present.
  • Every non-key attribute is fully functionally dependent on the full set of the primary key.

Third Normal Form:

A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency: If two non-key attributes depend on each other as well as on the primary key, they are said to be transitively dependent.
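Removing a transitive dependency can be sketched as a decomposition: if a department name depends on a department id, which in turn depends on the employee id, the single table is split in two so the name is stored once. All table and attribute names here are illustrative.

```python
# Sketch of a third-normal-form decomposition: dept_name depends
# transitively on emp_id via dept_id, so it moves to its own table.

employees = [
    {"emp_id": 1, "dept_id": 10, "dept_name": "Sales"},
    {"emp_id": 2, "dept_id": 10, "dept_name": "Sales"},
]

# After decomposition: each table holds only direct dependencies.
emp_table = [{"emp_id": e["emp_id"], "dept_id": e["dept_id"]}
             for e in employees]
dept_table = {e["dept_id"]: e["dept_name"] for e in employees}

print(emp_table)
print(dept_table)   # {10: 'Sales'} - the department name is stored once
```

With the name stored once, an update anomaly (changing "Sales" in one row but not another) can no longer occur.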

The above normalization principles were applied to decompose the data into multiple tables, thereby keeping the data in a consistent state.

Data Dictionary

After carefully understanding the requirements of the client, the entire data storage requirement was divided into tables. The tables below are normalized to avoid any anomalies during data entry.


Data Flow Diagrams:

A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; these are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments, and workstations.

A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson. Each component in a DFD is labelled with a descriptive name, and each process is further identified with a number used for reference. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into further processes in the first-level DFD.

The idea behind exploding a process into more processes is that what is understood at one level of detail is expanded into greater detail at the next level. This continues until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in graphical form; this led to modular design.

A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design. So it is the starting point of design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.


DFD Symbols:

A DFD uses four symbols:

  • A square defines a source (originator) or destination of system data.
  • An arrow identifies a data flow: the pipeline through which information flows.
  • A circle or bubble represents a process that transforms incoming data flows into outgoing data flows.
  • An open rectangle is a data store: data at rest, or a temporary repository of data.


Several rules of thumb are used in drawing DFDs:

  • Processes should be named and numbered for easy reference. Each name should be representative of the process.
  • The direction of flow is from top to bottom and from left to right. Data traditionally flow from source to destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal.
  • When a process is exploded into lower-level details, the details are numbered.
  • The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of data store. Each data store should contain all the data elements that flow in and out.

Questionnaires should contain all the data elements that flow in and out. Missing interfaces, redundancies, and the like are then accounted for, often through interviews.


  • The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
  • The DFD does not indicate the time factor involved in any process, such as whether the data flows take place daily, weekly, monthly, or yearly.
  • The sequence of events is not brought out on the DFD.


  • Current Physical
  • Current Logical
  • New Logical
  • New Physical


Current Physical:

In the current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing. Labels include an identification of the technology used to process the data. Similarly, data flows and data stores are often labelled with the names of the actual physical media on which the data are stored, such as file folders, computer files, business forms, or computer tapes.


Current Logical:

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.


New Logical:

This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed, and inefficient flows reorganized.


New Physical:

The new physical model represents only the physical implementation of the new system.



Process:

  • No process can have only outputs.
  • No process can have only inputs; if an object has only inputs, then it must be a sink.
  • A process has a verb-phrase label.


Data Store:

  • Data cannot move directly from one data store to another data store; a process must move the data.
  • Data cannot move directly from an outside source to a data store; a process must receive the data from the source and place it into the data store.
  • A data store has a noun-phrase label.


Source/Sink:

The origin and/or destination of data.

  • Data cannot move directly from a source to a sink; it must be moved by a process.
  • A source and/or sink has a noun-phrase label.


Data Flow:

  • A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store, to show a read before an update; this is usually indicated by two separate arrows, since the two operations happen at different times.
  • A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores, or sinks to a common location.
  • A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data to the beginning process.
  • A data flow to a data store means update (delete or change).
  • A data flow from a data store means retrieve or use.

A data flow has a noun-phrase label. More than one data flow noun phrase can appear on a single arrow, as long as all of the flows on the same arrow move together as one package.



The application Network Sniffer provides a user-friendly interface to an administrator for handling system services. With the current system, the machines in the network can be monitored for the different services running on them, the registry can be managed, and hardware can be monitored. Although it provides various features for reporting on and managing the network, the current system does not allow changes to be reflected in the areas it monitors.

It is under constant development, and releases are usually frequent.

The next version can include features that are not available in the current one.



It was a good idea to do our stress testing early on, because it gave us time to fix some of the unexpected deadlocks and stability problems that only occurred when components were exposed to very high end tasks.


This program was successfully loaded and executed. Because of good programming, there were no execution errors.


The successful output screens are placed in the output screens section.


  • The project has been appreciated by all the users in the organization.
  • It is easy to use, since it uses the GUI provided in the user dialog.
  • User friendly screens are provided.
  • The usage of the software increases efficiency and decreases effort.
  • It has been efficiently employed as a tool for Monitoring System resources and updating the system with latest software.
  • It also provides the user with variable options in settings.
  • It has been thoroughly tested and implemented.



By Roger S. Pressman


By Evangelos Petroutsos

MSDN 2008 for .Net (April 2008 Edition)

By Microsoft

Cite this page

Network sniffer. (2017, Jun 26). Retrieved February 5, 2023, from
